 I actually flew in this thing once, a long time ago. It was very expensive, but it was an experience. I'm really excited about technology. I've been talking about the future of technology for a long time. I used to be in the music business as a musician and producer. And then in the 90s, I went on the internet. I did about a dozen startups in digital music. One of them was trying to do what Spotify does today. We were just slightly early — that was about 2001, and we spent $25 million finding out that we were too early. But it was a good lesson. So for the last 15 years, I've been speaking about the future. And it's interesting: about three or four years ago, I started to get a lot of pushback from people saying, you know, great, technology is great — all this technology all the time, everybody's a technologist, everybody's a data scientist — but what's going to happen to people? When technology is everything? When technology is actually reshaping people? So it inspired me to write this book, Technology vs. Humanity. And it was funny: when I started working on the book, it was actually 2016, my publisher said, you can't call it Technology and Humanity. That was my title, right? The publisher — I later fired them — but in their wisdom, they decided it should be versus humanity. And I said, no, no, no, it's not versus, it's and humanity. So that's become my key topic, and that's what I want to talk to you about today. As a futurist, I'm actually not in the business of predictions. I do observations. I work on foresight, intuition, imagination. There are futurists who are great at predicting — Ray Kurzweil, Alvin Toffler, and of course Arthur C. Clarke. Those are the Jimi Hendrixes of futurism. That's not really me. But I try to observe and create some foresight. So I'm going to share some of those observations with you today.
Most importantly, right off the top, my view really is that technology is not what we seek, but how we seek. Technology is a tool — a very, very powerful tool — and technology can always be used for good or bad things. Take television: people were addicted to television, right? Now they're addicted to Facebook. Technology is morally neutral until we use it — that's William Gibson, one of my favorite science fiction authors. That's something to keep in mind when we talk about technology. It's not about saying yes or no, good or bad. There's no such thing. Technology just is — and of course, we create it. So we have to think about our context. Hence the title of my talk: the next 10 years. Let's start with this. It's hard to imagine, right? But if you look back 10 years, our world was a little bit different. We didn't have free phone calls like we have today. We didn't have a music service in the cloud — apart from BitTorrent, of course. We didn't have a lot of things that we have today. We didn't really have autonomous driving. But our world isn't that much different compared to 10 years ago — quite different, but not that much. Now think about the world in 20 years. Just look at all the stuff you already know about today. Quantum computing, right? A machine with a million times the computing power. 3D qubit computing. 5G, 7G, 10G — unlimited connectivity. 10 billion people on the internet by 2030. Our world is going to be so different in 20 years, it's hard to even find science fiction about it. My kids' kids will live to be an average of 100 years old. They will never learn how to drive a car, because cars will just be commanded by voice. Truly, that's a huge difference. And the biggest difference really is this: the machines that we use today, like our smartphones — these machines are your second brain. For some of us, it's actually the first brain, because we do whatever it says.
But you know, our dating is in here, our banking is in here, our email is in here, our content, our news — and our DNA, very soon. Everything is going to be in here, right? And then this moves here, onto the body. That's going to happen. And then, if Elon Musk has his way, directly into the brain. The difference is that in 20 years, we can change who we are as humans. That is the difference. The steam engine didn't change what we are as humans — we just used it. The internet changed the way we think only indirectly, to some degree, but technology now can actually influence how we think. Look at social media: social media has changed the way that we live, not just how we get news. Parenthesis — I'll talk more about that shortly. So that's a vast difference. And this is what's underpinning the whole thing: humanity and technology are converging. In technical terms, we're still very early. Most artificial intelligence machines, like Sophia, the robot from Hanson Robotics, are as dumb as a toaster compared to us. They have some sort of intelligence, but we're not quite there yet — they're pretending to be intelligent. Like Google Maps: if you live in a city, wherever that may be, you will always question Google Maps, right? You're driving and saying, no, no, no, that can't be true, the machine is just stupid. But if you use it in a strange city, it's good. So machines are kind of intelligent. And this is going to happen in the next 10 years: we're going to have machines that actually match us. We're going to start having conversations with machines. We already do, except that you have to speak like an imbecile, you know? Like, "Hello. Where. Is. The. Restaurant." And you definitely can't mix in any German-English words, like my own name: G-E-R-D. Siri and Cortana have never actually learned my name.
That's like GERD, gastroesophageal reflux disease — same spelling, right? It's not that hard to remember, and Siri still hasn't learned my own name. So those things are still at an early stage. But you could say: in 10 years, convergence; in 20 years, anything you want it to be. You can change your human genome, you can avoid diabetes, you can live longer — 20, 30, 50 years longer, though not 500 years. So that can be amazing, or it can be terrible, right? Because that's what technology is. We have to make it amazing. That's what it comes down to. So this is a really important message that I speak about all the time: the biggest transformation in the history of humanity. We've had big transformations before — the printing press, the steam engine, World War II, the bomb, the internet, the mobile phone, the iPad, and so on. But now we have all of this happening at the same time: robotics, genome sequencing, energy storage. In 20 years, science fiction is becoming science fact. And some of you are going to say, "Oh my God" — you know, the California euphemism, right? If you're an actual scientist, you know that not all of this is true in terms of being exponential and so on. But basically, we're seeing this happening right now. In five years, we're going to be able to speak to machines in 30 languages as if they were people. We're going to use WhatsApp to send voice messages in a hundred languages with real-time translation. That's all on the horizon. So, in the beginning, we basically augmented our muscles — faster cars, airplanes, and so on. That's what technology did for us, and we're still doing that: better airplanes, better vehicles. But now we're going to augment our intelligence. This is a whole different ball game.
And I think it's good, as long as we remember it's not like our intelligence, because our intelligence is very complex — social intelligence, emotional intelligence, musical intelligence, right? What kind of intelligence does a machine have? Computing intelligence. Processing — but unlimited. Does it compare to our intelligence? Take human intelligence: if we meet somewhere and don't know each other, from the stage or here in the room, it takes 0.4 seconds for one human to size up the other without a single word. Are you good or bad? Are you a potential mate? Are you — whatever you are — in 0.4 seconds, I've got to figure it out. And how is that? Well, that's called human intelligence. Sometimes it's pretty stupid, of course, but still, that's what we have. So it's quite clear: computers are binary, for the time being, until we have quantum computing. Zero, one, yes, no. Humans are multinary, which means I can hold any value at any given time, I can always change, and I can still come to a decision. A machine is not at all like this. When you think about your husband or your wife, you don't go back to some center of the brain and pull out a fact sheet and say, aha, that's her. We think like this: when you think about something like San Francisco — immediately, when I say that word, there are a million things in your mind about what San Francisco is. And not all of that is data, like the size of the city or earthquakes. It's like this. Machines don't do that. Hence, I think it's a great pairing. We can never do what machines do as far as logic and facts are concerned. Today, we kind of can; in five years, ten years, game over. Do not compete with machines on logic and facts and knowledge. Yes, there is knowledge that is human, but I mean knowledge of facts. You can speak to an assistant like Watson today and ask it about the future of Switzerland.
You get the facts in a nice voice, right? So what are we going to do about this? We're building stuff like the Internet of Things — very powerful tools. We can save 60% of energy here. We can invent entirely new things. It's a huge global industry, changing how we do things. At the same time, when we think about it, we're building a sort of nervous system — something that's almost like our own nervous system. When we do this, we're building a sort of meta-intelligence, and we've got to be responsible. We can't just say, you know what, we're going to build this just because we can, and then worry about what it does later. That's what we did with gas and oil and coal, the fossil fuel industry: as long as we can drive the car, we're fine — we'll think about the rest later. We're building something that's going to have trillions of connected devices. What are the rules? What about us? How can we survive as humans in this world? It could be amazing or it could be terrible. It could be a total panopticon, or it could be really liberating. So this is something we have to think about. Think of what's called the Oppenheimer effect. Oppenheimer was the guy who co-invented the nuclear bomb. He invented it because he didn't want the Germans to be first. He never thought the US government would use it. But of course, they had different plans — they used it twice — and he felt he had made it possible. He was deeply frustrated about this. Now, when inventing all these things, we have to think about what happens when we actually use them. It's no longer enough to say we're going to invent anything we can, because in the very near future, we can invent anything. I'll show you in a second what that means. The bottom line is this: "hellven" — hell and heaven. It could be heaven or it could be hell. Now we have to think about how we make it heaven.
How do we keep the magic without going to the manic, the toxic? The mobile phone is a great example. It's magic — and then some people get obsessed, they get manic, always having to check for an update — and for others it's toxic. Put your phone on the table while you're having a conversation with your kids, and it changes the entire conversation. That's poison, right? Poisoning our relationships. But just because it can be poisonous doesn't mean I want to throw it away. If I drink a beer with dinner, that doesn't mean I'm going to drink a bottle of brandy for breakfast, you know? I have to find a way to differentiate. This guy is using an exoskeleton to learn how to walk — he's paraplegic. It took about two years of training and a million dollars. He could not move at all, and now he's using the exoskeleton to walk again. And I would say, yes, that's an amazing use of technology. If I can get a single person on the globe to be healed, or prevented from getting cancer, that's something we have to do. But on the flip side, we have this. This guy — what's his name again? — he's advocating that we should have the right to change ourselves regardless of whether we're an accident victim or not. We could actually say: my legs are not satisfying me; let's get rid of them and buy other ones. To which I would say that's a step too far, because we're not talking about healing sick people here. We're talking about luxury — saying, I'm going to rebuild my legs for two million pounds. What is the difference? Well, it's the same technology, right? What do you think is going to happen with genetic engineering, when it's possible to isolate the genes responsible for cancer or for diabetes? Is that going to cost a million pounds, so the rich live forever and we don't? Those are issues we have to look at. Elon Musk says that we need to upgrade ourselves so we can survive in the future.
His argument, basically, is that AI will be so powerful that we're in trouble if we're not also AI. So his project, Neuralink, is a brain-computer interface that would connect us directly to the internet. Brain-computer interfaces are already being used today for fighter jet pilots, so it's not totally new — but this is the idea of putting implants in your brain. He said the other day he'd be the first to do it if it were allowed. Think about that for a second. In terms of science, I would say it's probably possible eventually. But is it a good idea? Is it something we should strive for? This would be the true merging of humans and machines, right? You couldn't get out of bed anymore because your device isn't working. That's Black Mirror times 200. So here's a question I have for you. In this digital world, two things are happening. One: humans are linear. We are improving, but we're not doubling like Moore's Law. Humans aren't exponential; we're still organic. But technology is following Moore's Law and Metcalfe's Law and other laws — basically, technology has no limits. And this is our situation: we are linear, technology is exponential, and nature is cyclical. Nature goes up, everybody dies, it goes down, it comes back — dinosaurs and such. How are we going to survive in a world where technology is infinitely capable? How do we harness it? Technology is exponential, we are linear. So here's the question: should we upgrade, or should we respect the difference? Who's for upgrading? Anybody? Come on, it's fine — I'm just taking a perfunctory vote here. Okay, okay. I think a little bit of an upgrade is fine. If you take a cholesterol medication, a statin, or you have a cochlear implant — that's kind of an upgrade. But clearly it's a question of overall proportion. And of course, the business of upgrading would be a giant business.
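That linear-versus-exponential gap can be made concrete with a toy sketch. This is my own illustration, not figures from the talk — all starting values and growth rates are invented — but it shows the general point: a capability that gains a fixed amount per step is eventually overtaken by one that doubles per step, no matter how large its head start.

```python
# Toy illustration of linear vs. exponential growth.
# All starting values and rates are invented for illustration only.

def linear(start, step_gain, steps):
    """Capability that adds a fixed amount each step (the 'human' curve)."""
    return [start + step_gain * n for n in range(steps + 1)]

def exponential(start, factor, steps):
    """Capability that multiplies by a fixed factor each step
    (Moore's-Law-style doubling when factor == 2)."""
    return [start * factor ** n for n in range(steps + 1)]

human = linear(100, 5, 10)       # big head start, slow gains
machine = exponential(1, 2, 10)  # tiny start, doubling every step

# First step where the doubling curve overtakes the linear one.
crossover = next(n for n, (h, m) in enumerate(zip(human, machine)) if m > h)
print(f"machine overtakes human at step {crossover}")  # prints step 8
```

However you tune the made-up numbers, the crossover always arrives; only its timing changes — which is the whole argument about why "we are linear, technology is exponential" matters.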
Imagine if this actually works — who would not want to have it? So, is this our future? If you think about technology, sometimes I say jokingly it's literally God as a service, like software as a service. We're becoming superhuman: omniscient, omnipotent, omnipresent. That is the promise of Silicon Valley, right? Why should we not transcend humanity? Well, the answer, to me, is that I fear this is more of a downgrade than an upgrade. Until I'm convinced otherwise, I'd be very hesitant. Certain upgrades, sure: if I had an accident, I would probably want a replacement arm if I had to. But that's different from ordering another arm, right? So we have to think about where technology takes us, what we can actually do here, which way we're going. Because this question will really emerge: how do we retain our skills and our autonomy? If you live in Silicon Valley, people are saying, you know what doesn't matter? Privacy. Come on, get over it — you're yesterday's guy. Autonomy? Come on. In this world, we don't have to be autonomous; we can be completely connected. To which I always say: if you don't have anything to hide, you're probably not human. And think about this — this is social media, right? Basically, that's what it has become. Then you have a guy so immersed in technology that he's sleeping in his Tesla — this made the rounds on the internet a little while ago. That makes me wonder: is this a general thing, where we're sleepwalking into technology and switching off? You know, in India, many marriages are still arranged — about 71%. Now there's an AI that does it — it's partner.ai or something like that. The family used to do that, with brokers and all of that, and now they're using an AI to pick the ideal partner for you.
But it turns out — funny statistic — that most of those arranged marriages are happier than the non-arranged ones, which is also very strange, but that's a debate for later. Here's the key question for us. Ultimately, when we have this, machines are telling us what to do, right? Machines are saying, no, he doesn't want to do that. Some of that is funny, but imagine living in a world where that happened at every turn. We would be deskilled, right? It's what's called the glass cockpit problem with pilots — they can't really fly manually anymore. So here's a short video that illustrates the point, hopefully in a funny way. Please, Jess. [Video: "Playing jazz." "Smoothie." "Making smoothie." "Calendar." "No meetings today. Reminder: dentist at 9:30." "Fire off." "Fire off." "Open door." "Door open." "And we're going to do one more. All right. Open door." "Wrong voice command." "Open door." "Wrong voice command." "Open — open door." "Repeat that." "Open door." "I didn't understand that." "Hey, open door!"] Ah, you get the point. I spoke at a dentist convention yesterday, so I figured that was a good video. We've all been through similar situations, right? So how do we find the limits? How do we keep too much of a good thing from becoming a very bad thing? Should we prohibit all bad things? I think we would agree: no. Smoking is bad, alcohol is bad, coffee is bad — everything is bad in some way. Should we say, no, you can't have it at all? That would be stupid. In Germany, a 12-year-old can get a beer — it's not legal, but it's possible. So how about technology? Should we say a 4-year-old can't use the iPad? Too much screen time has been shown to be bad for kids — literally bad for the developing brain. What the science says is pretty straightforward.
But what do we do about this? Here we have to look at the financial rewards. The Internet of Things, artificial intelligence, deep learning — you see this graph here; it shows it quite clearly. We're talking about a $30 trillion value chain coming out of AI, deep learning, and machine learning. This is the biggest gold rush in the history of humanity. I sometimes say, jokingly, that replacing humans is the biggest business opportunity. But then again, if it's a great tool, I definitely want to have it. So who decides what the limits are? And of course, the companies doing this today are the richest companies in the world. The most powerful companies in the world are not the oil and gas companies, or the banks, or the military — they are these guys, the data companies. Many of them are my clients; I'm quite familiar with the scenario there. The scale of it is outrageous, right? And I'm saying: great, obviously we found something that works. If you look at it in terms of investing, if you had invested in Facebook at the IPO, you would have made more money than on any other digital stock ever. But then you look at Facebook from the other side and ask: what has Facebook done well, other than making money? Let's just say the list of problems stretches from here to the parking lot. So how do we decide? Here's what's happening with technology — this is what we have to ask. It's no longer the question of whether this works, or how it works, or how much it costs, or how much money it makes. That question will be settled in the next five to seven years. It's not about "if." It's about this question: Why? Why are we doing this? And who is doing it? Because imagine: in 10 years, technology will be pretty much unlimitedly powerful; in 20 years, it's hard to imagine. We can change the human genome — you can program yourself.
You can use intelligent machines to do all the work for you. What do we do? Basically, the question is why, and how, and who — not if. If you have kids, you've got to think about this: what are our kids going to do in this kind of world? Ginni Rometty from IBM says society gives each of us a license to operate. It's a question of whether society trusts you or not. That's the key question, because it's a human question. Trust is not a download. Trust is something that we create. We can break trust, make mistakes, and fix it. But trust is not an algorithm. How do you know, when you meet somebody, within the first second, whether you can trust that person or not? Most of the time, you're right. How do you know that? This is a really important issue when it comes to technology. And this is the key question for us: are we going to live in a world where we're lifting each other up? Or are we going to have opposing forces? And who's in charge? Who's mission control? We can't live without technology, of course — that's quite clear. Even in the mountains of Switzerland, where I live, it's hard to get away from technology. And the question really is one of what I call digital ethics: why is this a good thing? This is, by the way, the number one topic — Gartner says digital ethics is the number one topic for 2019: how technology can be kept good. Let's see how we define "good," of course. So here's our challenge — and it really is a great fit, speaking about this underneath the Concorde. Air travel is a significant polluter, and all of us are part of that, including myself — I'm a chief polluter in that way, because I'm always flying somewhere to speak. I did start carbon offsetting a while ago. But nevertheless: technology has no ethics, and we shouldn't expect it to. A computer has no values, no morals. If I tell the computer to make paper clips out of you, it will set to work.
Until the mission is accomplished, right? That's what machines do. Why would we expect a machine to have values? Look at how Facebook's system works. That's the scary part: it's not that it was hacked — it worked as advertised, which is to manipulate. That's called advertising. That's the scary part. So let's define ethics. Ethics is knowing the difference between what you have a right to do and what is the right thing to do. If you're a programmer, a startup person, or a CEO of a company: what is the right thing to do? Marc Benioff, the CEO of Salesforce, says Salesforce will not sell its software to companies that make guns. Salesforce has hired a chief ethics officer to identify the negative effects of what Salesforce does and to minimize them. Now, this is like saying: I'm going to put sand in the gearbox — maybe I sell less, but I end up with a better engine. It's accepting friction on purpose. So it's a very big question: what would you do if you had the choice of making 10x or 1x, depending on your ethical considerations? Google is in the middle of this conversation. Every month there's a major episode at Google where employees say, no, we shouldn't be selling this software to those people. The Defense Department was using Google's AI, and Google had to withdraw from the contract because Google employees said, this is not cool — it's going to end up in a drone that automatically kills people. These are very big questions that are going to erupt all around us. The Facebook example again: what Facebook does is gnaw away at democracy. Is it doing that because Mark is evil? Probably not. Is it criminal? Is it their intent? No. But nevertheless, Facebook is facilitating it; Facebook is responsible on an ethical level. So, Stuart Russell, who writes a lot of really smart books about AI — you've got to read his latest book, Human Compatible.
His first book is the number one textbook in the world on AI, used in universities everywhere — Artificial Intelligence: A Modern Approach. Stuart Russell, UC Berkeley professor. His new book is Human Compatible. He says a social media meltdown results from optimizing the wrong objective on a global scale. This is an important statement. We're not saying that technology is bad or social media is bad; we're saying it's optimizing the wrong objective, which is to keep people on the site. That's the wrong objective. And that's generally true for technology: if your only objective is to make more money, rather than to make something genuinely better for your customers, it will fail. And Facebook will fail spectacularly if it keeps this up. So technology, in many ways, is a gift — all the great technology that we have accumulated — and it could also be a bomb. That's not new, but now the magnitude is different. Think about a computer that in 10 years has an IQ of a million — IQ in the sense of computing, because there's more to IQ than computing, right? Ray Kurzweil says that by 2050 we'll have a computer with the capacity of all human brains combined. Now, I could say that's fantastic — going back to the first part, that's a present; I can use that. But how do I make sure it doesn't turn into a bomb? That's a question of governance and wisdom. So let me play a couple of video clips from people who have some thoughts on this. [Video:] "How do we build software that's secure by design? We have to do a lot of re-engineering of our processes, teaching our own engineers what it means to do threat modeling in software, so that we build more robust software. Same thing with AI: we have to have design principles. Any business, any person who's going to use AI to make any decision of consequence — your child's education — you are going to want to know, and have transparency and explainability and trust in this technology."
"I will tell you, there will be no adoption of AI without that. And those of us who believe in technology's potential for good must not shrink from this moment. Now more than ever — as leaders of governments, as decision-makers in business, and as citizens — we must ask ourselves a fundamental question: what kind of world do we want to live in?" That's an interesting comment from somebody who runs the most successful company in the world — although Aramco, the Saudi Arabian oil company, is going public in a couple of weeks, and it's allegedly going to be bigger than Apple; everybody else is smaller. Think about that for a second. Basically, what he's saying is that we have to question how we use technology. Satya Nadella from Microsoft is saying, we want to be regulated. I don't know if that's just lip service, or sort of greenwashing. But it comes down to this: we need guidance. So I have suggested that we should start forming digital ethics councils — not people who always say, no, no, no, we can't do this, that's stupid; that's not what ethics is about — but people who understand the complexity of what we're doing. In the meantime, we have that in Singapore, we have it in Denmark, and of course all the companies are trying to set up their own councils. I think we should do that here in Bristol — get a leg up on this question of what is the right thing to do. Because today, a lot of technology really isn't working that well — speech recognition, AI — it's not quite there yet. But it will be. And then the question is: what do we want? Again, Tim Cook says on this — and it's very important — technology can do great things, but it does not want to do great things. It doesn't want anything. That's so true. Technology can do great things, but it doesn't have any intent. I could make 5,000 of these and do all kinds of interesting things with technology. But what do we want? Who's in charge of what we want?
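Russell's point from a moment ago — that the damage comes from optimizing the wrong objective — can be sketched in a few lines. This is my own toy example, not code from his book, and every item name and score is invented: a greedy recommender handed a proxy metric ("engagement") happily serves content that scores terribly on the objective we actually care about ("wellbeing").

```python
# Toy sketch of "optimizing the wrong objective" in a recommender.
# Item names and scores are invented for illustration.

items = [
    # (name, engagement, wellbeing)
    ("calm news digest",  0.3,  0.8),
    ("friend updates",    0.5,  0.6),
    ("outrage clickbait", 0.9, -0.7),
]

def recommend(catalog, metric):
    """Greedy recommender: serve whatever maximizes the given metric."""
    return max(catalog, key=metric)

by_engagement = recommend(items, metric=lambda item: item[1])
by_wellbeing = recommend(items, metric=lambda item: item[2])

print("optimizing engagement serves:", by_engagement[0])  # the clickbait wins
print("optimizing wellbeing serves:", by_wellbeing[0])
```

The machine isn't malicious in either case; it does exactly what it was told — which is both Russell's point and Tim Cook's: the technology itself doesn't want anything.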
Not technology. And these days, if you listen to the stories about the future — who's telling the best stories about the future? Apart from myself, of course. Just kidding. But who's telling the best stories? IBM, Microsoft, Huawei — the tech companies are telling us what our future is. That's not the way it should be, because they're going to tell us a story that they can sell. They're good stories — I'm not saying they're bad stories. When you see Bob Dylan talking to IBM Watson, it's impressive. But what is our own story? What do we want? So that has led me to a new idea, which I've been working on for the last couple of weeks. I think we should have a Hippocratic oath — like doctors have — for all technologists and technology companies. You know the Hippocratic oath: the doctor says, I am going to do my work to do the best for people, irrespective of who they are and how they got here; I will use my abilities without a filter. That's probably no longer the literal oath at universities these days, but that was the idea. So here's my Hippocratic oath for technology: I will ensure that everything I invent, enable, provide, or sell is designed to further human flourishing. Now, "human flourishing" is a heavy phrase — happiness, human benefit — and we'd have to fill it out with some really hard facts. We can do that later, over drinks. But here's the interesting part. Look at the current metrics of progress — I'm sure you're all familiar with this map. It plots negative consequences on one axis and benefits on the other. And where it gets interesting, as always with these business-school slides — it's not from me, actually, it's from the World Economic Forum — is here, the top quadrant: the most benefit and the most risk. Genome engineering and artificial intelligence.
And does that mean we're not going to do any of this? That would be stupid, right? It's high risk, high reward. So what do we do? We create safeguards. We agree on standards. We discuss how to get there — this is what we did with nuclear energy. Any country can have a nuclear reactor for energy today, but you can't make a bomb. So we have to think about which way this is going. If you look at this chart — it's from PricewaterhouseCoopers; I'm not going to explain it in detail, you can download it later, I'll put it up on my website, futuristgerd.com — if you're looking at AI, there's a huge range of different kinds of risk: security, performance, control, economic, societal. We cannot just build something and then say, well, the risk is somebody else's problem. Let the government worry about unemployment; let everybody else worry about inequality. In San Francisco, if you go there, you see inequality in its purest form. I lived there for 17 years. Now there are about 8,500 homeless people — and in the last year, about 4,000 new millionaires were created in San Francisco. Who's going to worry about that? Will the free market take care of it? I don't see that. It's something we have to think about. So let me talk about the risks a bit more, and then we'll have some questions. The World Economic Forum, again, says there are two major risks for us today. One: climate change, and everything associated with it — you've heard about all of that in the news over the last couple of months; it's really percolating now. Number two, the red one: technology — data fraud, cyber warfare, and so on. Let's zero in on this. Bristol — I read yesterday that the city council has decided diesel cars will be banned in Bristol. I don't know if you heard about this. It's a good move for Bristol. I think we should ban all cars.
It's a different discussion, right? But I mean, these are the things that are happening. So let's have a quick look at this and then we can circle back to the technology. Climate change is now the number one topic, and it has very quickly become the number one topic. I've talked about it for 10 years, but if you look at the stats, right? It's pretty obvious the stats, yeah, they don't make for light dinner conversation, right? So here, CO2: China and the US, the leaders in pollution. And here: sheep, beef, and pork are the major causes of pollution as far as our food chain is concerned. And this map will really do you in if you want to have a nice day, right? It basically shows that if we go up four or five degrees of warming, as has been projected by the UN's climate change panel, within 50 years, right? The entire Southern Hemisphere becomes uninhabitable. 300 million climate refugees. I mean, compared to that, the climate refugees we have now are chicken feed, right? So how do we solve this? The second thing is digital pollution, same thing. Using technology in ways that make us strangers, you know, we have more relationships with our screens than we have with other people. We forget who we are, we forget our skills, we forget to talk to each other. We make love to robots. No, I don't know, not yet, but yeah. So those two things are our challenges, right? A climate emergency and a human emergency. We're not quite at the human emergency yet, you know, we're making our way towards it, but this is something we have to think about. Now, if I've scared you enough, let's go back to the topic of how we can solve the technical part. This is what we get from the cloud today: we get content, we get conversation, we get community, we get convenience. Very powerful stuff, and we get it provided by the large digital platforms in the world. These are the American ones and the Chinese ones. But what we also get is a kind of negative output.
So these are the externalities: addiction, bias, manipulation, tax avoidance. That comes out the other end. What we must now be doing is to say, we want what happens on the front end so we can take advantage of it, but we have to address these issues. I mean, if technology is killing our democracy, what's the point of having it in the first place? That's something we must look at, right? So trust, accountability, responsibility, transparency, control, self-control, regulation: big topics. I always say, I think we need an EPA, you know what the EPA is, the Environmental Protection Agency, or rather what it used to be before Trump gutted it, an Environmental Protection Agency for Humanity. Do we need somebody that's gonna say, you know what, this is actually not good for us? Or: we could be doing this, but what if the cancer medication, the genomic treatment, costs a million dollars, and everybody who has a million dollars does it, and everybody else dies, right? Is that fair, right? Would that be a reason for terrorism? I'd clearly say yes. I mean, inequality is the number one reason for terrorism, right? So looking in this direction, yeah, do we need to protect this? I think the more that we connect, the more we must protect. It sounds like a contradiction, like the two can't be done at the same time, right? I think we need to re-humanize technology. And recently I've been stressing that this is not anti-technology; it's the opposite. We need to harness technology for human purposes. So lately I've been calling this the new Renaissance. You know, Florence in the 1500s, where the whole debate was about saying, okay, our life is not only about God, whatever that is if you're religious, you know, our life is actually about ourselves also. That was the bottom line of the Renaissance. And now we're here and we're saying, yeah, our life is about technology, right? Well, that's not true, right?
I'm not data, I'm not a machine, I'm not technology. My wife isn't an algorithm, trust isn't a download, happiness is not an app. That's the new Renaissance. And we have to use technology to make that happen. And that goes for the economic system. We have to think of a larger story, and this is happening now everywhere. I call this the quadruple bottom line: people, planet, purpose, and prosperity, four objectives. I've been trying to get the Swiss government to build a new stock market where companies like Unilever, Patagonia, and others would list, a stock market that only has what's sometimes called in the US a B Corp, right? Like a NASDAQ for good companies, because there would not be much overlap between the two. So I call this sustainable capitalism. And I think when we talk about energy, when we talk about the future of technology, this is the winning horse. And I think we see this quite clearly as far as climate change is concerned, too. When we put this at the top, we have three objectives. It has to be a holistic business model, not a one-sided model like Facebook; Facebook is an exploitative model, it turns us into fodder. A circular economy: everything you take out, you put back in. And a human focus. And now, for the first time in history, we have companies who are saying, officially, this is what we are doing. Three weeks ago, the US Business Roundtable, that's 250 top CEOs in the US, declared that the future of the market is no longer about shareholder return, but about stakeholders: employees, partners, vendors, people, planet, profit, right? And that's a very big step for American CEOs, to say the top line of why we're doing things is not to make more money. Yeah, it may be, again, maybe just PR, right? Of course. Nice PR. But a lot of companies are now going in this direction, saying, how do we actually make this work? And I think, ultimately, that's where we're going. This is my mantra.
I think the future for us is really awesome humans on top of amazing technology. And to be an awesome human doesn't mean you have to be a programmer who understands technology, but it certainly means you have to understand people. Our education is gonna be turned upside down in the next 10 years. We're gonna have a lot more humanities, ethics, understanding, culture, art, music, sports, alongside technology, right? If you have to make a choice, that's a tough one, but I would venture to say that in 10 years, if you understand people, you're gonna have a leg up, right? If you already are a technologist, this is the skill that you have to add to be powerful in the future. So I'll close with a statement from my book and this little animation. We have to embrace technology, but not become it. There's a crucial difference. And I happen to think, also, that this is a difference between cultures. In the US, there's a lot of culture about becoming technology, transhumanism, the singularity, as there is in China. Here in the UK and over in Europe, I don't know if that's the case, we'll talk about that later, but we think about this and we're saying, you know what, we want to remain humans, really. This is an important objective. We don't want to become machines just because we could become machines. So I thank you for listening, and I think we're now gonna have a short discussion with Paul. Thank you very much for listening.