It's really a great pleasure to be here. I like to say, riffing off a famous futurist quote, the future is already here, we just haven't taken notice. And I want to congratulate the city of Barcelona for seeing the future in many more ways than other cities do. You know, I live in Zurich in Switzerland, where things are a little bit different in how we see the future, but it's great to see all the activism that happens here and all the new thinking. So congratulations to Barcelona and all the great people in Barcelona. First I want to say I've been doing this work for roughly 15 years, speaking about the future. And in the last two years I've noticed that a lot of people are worried about the future. And this is not just because of Donald Trump or Brexit. People are worried, a lot of worry, because technology seems like it will do everything, literally take over: work, jobs, automation. And that worries us a little bit. Here in Europe we are much more humanist in our thinking. In America people are saying, well, you know, it's great: as soon as I can connect my neocortex to the internet, that's fantastic, I can think faster, or I can not die. Here in Europe we think a little bit differently about this. We have different concerns. So I want to say that the future is better than we think. But put a big asterisk there and say, well, we have to agree on what it's going to be. What do we want? That is the key question. Because today technology is still largely pretty stupid. If you speak to your iPhone, or Siri, or Amazon Alexa, it will understand if you speak like this, very slowly, and it will give you a "yeah, okay." In five years technology will be exponentially so powerful that you can speak to your computer as if it were your wife or your husband. And in fact it will speak back to you like your wife or your husband. And then eventually it will materialize as your wife or your husband in a messaging app.
In 10 years we're going to be in the age of quantum computing. That sounded like science fiction just five years ago; IBM just showed the first quantum computer at CES in Las Vegas. 50-qubit computing, 3D computing. So that means the machine right here that is running my measly presentation is going to be a million times as powerful. So we're going into a future that will have largely two topics, and these are my topics. And I wrote a book on this, because I kept getting this question of what's going to happen to people. So I wrote this book called Technology versus Humanity. It will be available in Spanish in about three months, but in the meantime you know where to click. So that's the topic. And I can ask you the simple question — technology is really magic, you know. We enjoy technology. We can use WhatsApp to make phone calls to our kids in South Africa for free. We can summon the world on our mobile phones and escape in any which direction. You know, a philosopher once said, a long time ago, before the internet, that technology is not what we seek but how we seek. And what do we seek? Well, simple question: what all humans seek is happiness, right? In the widest sense. Contentment. So technology is a tool, but it should not become the purpose of our lives. So we have this ongoing discussion between what are called algorithms and androrithms — you know, human things. Let's take human things, for example really simple things like mystery, serendipity, discovery, excellence, mistakes, lying, changing your mind. How would a computer understand that today reality is like this — and no other place to say that than here, right? — and tomorrow is different. A different truth. Computers don't understand that. It's zeros and ones, right? That's it. So in this world we're going to have to pay very good attention to this issue, right? Are we going to work with technology or against technology? This is the critical question. There's a balance between the two.
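As an aside on the arithmetic behind claims like "a million times as powerful": the state space a quantum machine can represent doubles with every added qubit, so the growth is exponential, not linear. A toy sketch in Python — the million-fold figure here is just the ratio of two powers of two, not a benchmark of any real machine:

```python
# Each added qubit doubles the number of complex amplitudes
# needed to describe a quantum state: an n-qubit register
# spans 2**n basis states.
def state_space(qubits: int) -> int:
    return 2 ** qubits

# A 50-qubit machine spans 2**50 basis states ...
print(state_space(50))                     # 1125899906842624
# ... about a million times more than a 30-qubit machine,
# since 2**50 / 2**30 = 2**20 = 1,048,576.
print(state_space(50) // state_space(30))  # 1048576
```

That doubling is why each additional qubit matters so much more than each additional classical bit.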
There's no way we're going to go back and say, well, we won't have artificial intelligence, we won't have cloud computing, we won't have smartphones, right? You can try that — you'd have to be very rich, or live in Amish country in Pennsylvania, and even there... Technology is here to stay and it's growing like crazy. The most powerful companies in the world are not the banks. They're not the oil companies. They're not even the military. They are the tech companies, the data companies. Did you know that Apple, Amazon, Google and Facebook have more market capitalization than the GDP of France? In other words, they could buy France, all of it. Not that they would — probably not a good idea. But think about this for a second. We're moving into a future where a lot of people are arguing that we are essentially data. I talk to a lot of scientists, I talk to a lot of people in Silicon Valley and in China, and their argument is: well, you know, humans really are technology. And they're serious about this — I'm not making this up, right? It's just that we don't understand this technology; that's the argument. So naturally, we're going to converge: upload our brains, become superhuman, become as gods. I mean, I'm not religious, but becoming like God sounds like a good thing, right? Why not? Who would not want to become superhuman? Sounds tempting, right? So I'm going to give you some principles today, seven principles about the future. The first one you just heard; the bottom line of this principle is that we should embrace technology but not become it. If we become technology — in other words, if we connect all the time through brain-computer interfaces, augmented and virtual reality — I think we're going to lose more than we gain. We already have the problem, when we use smartphones, that people are in love with the fake world on the smartphone.
In fact, you can observe in many places that people have more relationships with the screen than they have with people. Now, that is a bizarre twist, right? Just go to any place in Southeast Asia, a nice dinner place: every single person at the table is working on two tablets at the same time, completely ignoring each other. That is what I call toxic technology, poisoning us. So the second principle is that the future is a mindset, not a time frame. I don't know if you realize this, but the future is here. Machines can speak, they can understand images, they can drive a car, they can fly an airplane, they can program themselves, they can kill on their own as drones, by themselves. Not all of it works that well yet, but it will in a very short time frame. So we have to think about the future differently. We have to think about the future as it is today, and that is four simple rules. First, it's exponential. And this is hard for us to get, because we are linear, right? We improve step by step by step. But technology — Moore's law, Metcalfe's law — jumps, right? Every 24 months, every 18 months. Transistors are kind of at the end of that curve; now it's going to qubits. But basically, from the takeoff point it goes two, four, eight, 16, 32, 64, 128 — seven steps to 128. Thirty steps: one billion. Thirty exponential steps, and that's in all likelihood 40 years. The kids of your kids will live in a world that's one billion times as different as today. It's hard to imagine. Exponential thinking is crucial. Don't think for a moment, just because it didn't work until last year, that it will never work. It just takes longer. And this is very hard to understand, because the human brain does not really deal with that. It's also combinatorial, so it's combining all the things — all the sciences, right? Material science, computer science, nanotechnology, brain science, all coming together to create new things. It's interdependent, amplifying each other. And it's abundant.
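The doubling arithmetic is easy to verify — a minimal sketch of exponential versus linear steps (the step counts are the speaker's; the code just checks the powers of two):

```python
# Exponential growth doubles the value at every step,
# while linear growth adds a fixed amount per step.
def exponential(steps: int, start: int = 1) -> int:
    return start * 2 ** steps

def linear(steps: int, start: int = 1) -> int:
    return start + steps

# Seven doublings from 1 reach 128 ...
print(exponential(7))    # 128
# ... and 30 doublings already exceed one billion.
print(exponential(30))   # 1073741824
# Thirty linear steps, by contrast, only reach 31.
print(linear(30))        # 31
```

That gap between 31 and a billion is the whole point of "exponential thinking": the two curves look similar for the first few steps, then diverge beyond intuition.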
Technologies make things abundant, like music, right? You guys know Spotify and other platforms. We have 21 million songs on Spotify. And we have 500,000 movies to watch on Netflix. And we can go anywhere in the world on cheap airplanes, right? In 20 years, energy will be abundant, which basically means almost free, right? Solar energy, renewable energy — 20 years. What would we do in a world of abundant energy? I mean, it's a whole different approach. And then, holistic. Today, if you're starting a business, you have to be sustainable — not just sustainable in terms of energy, but in terms of humanity, right? That is the key point. Energy, yes, we know that clearly. But how about your people? Facebook, for example — I'll tell you more later — is unsustainable as a human experiment as it is today. Because the central business model of Facebook is abuse. And I say that as a user, as a willing subject of that abuse. I'll talk more about that in a second, but this is very, very interesting. We need to take a broader view of the world, right? Because business as usual is dead or dying. I was in the music business; it died, and was reborn. The media business died and is being reborn now. The banking business: blockchain, digital money. Don't think for a minute, just because you're in transportation or smart ports or so, that it's not gonna be the same — it just takes longer. So we're going through this huge transformation: from music on CD to music in the cloud, from the car to transportation, all the things that are happening around us. The German car industry makes a great example. The business of selling cars is dying. It's dying because you can share a car, you can lease a car, you can even buy a subscription to a car now already — like Spotify for cars. So the whole business model of the car industry is switching to mobility, to access: from ownership to access. It's a huge shift that we're going to see. The third principle of the future: it's all about data.
Data is truly the new oil. And the oil companies went unregulated for a long time, and we paid for that, and then they were regulated. But the data companies — there's a top 20, I'll show you in a second who they are, but of course you know who they are; they're definitely not Spanish or Catalan or Swiss — all Silicon Valley and China. So we're moving to a world where those companies are taking over, right? They're getting to a place where we're saying, okay, they're actually running our system. Here's a list of them. Mind-boggling riches, right? Those are the new rulers. In many ways, you can say they're the new dictators — and almost all of them are my clients, so I'm speaking with a twisted tongue here, right? It's a very strange situation when you look at this: they are essentially just the US and China, and the token Swiss company, which is not really Swiss, it just happens to be there, right? But that's where the power is. So I'll ask you one thing: should we regulate the data companies, the data oil companies, like we regulate the environment — oil, gas, nuclear? What would happen if we don't regulate these companies, if together these companies literally run the world? And this is just the beginning. Should they self-regulate? They should, but you know — are you gonna self-regulate? Ask yourself the question: would you rather have a company that's more human, or a company that gets you your Tesla? Well, many of us would say, well, yes, let's get the Tesla and a little bit of humanity — a tough choice, right? These companies are doing this, right? Amazon, Google — they're literally driving into our brains, going fishing in our heads. A hundred million data points on Google and Facebook that we willingly gave to them. That is truly a richness. And you know, we're kind of wallowing in this data and threatening to get lost in all the options.
And this is just today. Think about the future being a thousand times as much — quantum computing again, right? Right now there's no way I can take the entire car traffic of Barcelona, feed it into a computer and get some results. It's just too much. In five years: done. Instant. So the fourth principle is that artificial intelligence — which I'll talk about in a second, what that is — will have more impact on humanity than the industrial revolution. That's in good ways and bad, as I'll explain in a second, but essentially artificial intelligence is the idea that machines can do something that we used to do, that used to be human — that they can basically simulate or mimic it. Kevin Kelly, one of my mentors in futurism, the founder of Wired magazine, likes to say: first we electrified, then we digitized, and now we cognify. We make things smart. Cognifying is making things smart, right? So we tend to joke about this and say, well, you know, the future could be like a smart converter: you take the old industry and you put it in the smart converter, and out comes smart city, smart farming, smart ports, maybe even smart government, yeah? Could come out the other end. McKinsey says this is a $62 trillion opportunity — making things smart. I mean, it's mind-boggling: smart logistics, smart cars. And that is basically where we're going, in a very, very short time. And then we have this, right? We have all this data coming from our mobile phones, rising into the cloud. We're creating copies of ourselves in this cloud. We're essentially creating a brain. In fact, Google calls it the global brain. It's not Skynet, just in case you're thinking of Skynet here, right? Kinda sounds like it, though. So basically what's happening here is that this information — right now it's our mobile phones, our searches, but in the future it's our money, our health records, our driving records, our education, right?
Everything that we have goes into this cloud, and there are huge benefits, like in health, right? The health cloud would be vastly beneficial. But security will be a huge issue here, and basically this is a huge temptation. And here we are, and I would say that's fantastic if we can do this, because we can save energy, we can solve diseases, we can do all the things that we need to go forward — but we need to see who's in charge. And what do we want? If we have the Internet of Things in Barcelona — smart ports, smart cities, smart cars, smart logistics — who's accountable? Who's in charge of all that data? Not just in a security sense, but also in the sense of politics. So when we think about artificial intelligence, many people think of Hollywood, and I would say: right now, please forget anything you've ever seen from Hollywood on this topic. We cannot go into the future based on fear. We cannot go into the future based on stupidity either. A little bit of fear is okay, but this is not reality; this is science fiction, this is Hollywood. So let's talk about the fears of AI for a second. It's very important. I mean, do you honestly believe that a machine exists that could be intelligent like we are intelligent? It's funny — we don't even know what that means for humans. But we just know stuff. Like, we meet each other in the hallway, and it takes the average person 0.4 seconds to measure and rate the other person intuitively, without saying a single word. I know if you're a threat or potentially a nice person, whatever — 0.4 seconds, not a single word. Can a computer do that? Maybe eventually it could learn that, but I think what we're looking at, basically, is that this artificial brain of computers will have a very hard time with social intelligence. That's hard to even define for us, right?
But we know, for example, that emotional intelligence is the highest-rated employment factor in the world today. If you ask any HR department, that's what they want. They want people who are emotionally intelligent. In other words, you want people who are emotional. Can you imagine that? Ten years ago, we always said we don't want emotional people in our company — no way, especially in Germany. We want people who get stuff done, right? Who can execute. But I can guarantee you: today, if you work like a robot, the robot will take your job. If you let your children learn how to be a robot, they will never have a job. They have to be the opposite of a robot. And that, of course, is a huge challenge, because we have intellectual intelligence, but it is limited by the amount of processing. We can't just plug in more. Well, we could eventually, but you know — for the time being. And then we have the machines. Intelligent machines are, by and large, extremely narrow intelligences. They can drive a car, maybe. But they could never watch over your grandmother and make sure she's safe. They couldn't cook. They couldn't play chess. They can just drive the car. So at this point, I would say: we need to uninstall the fear, but keep the caution. I'm with Elon Musk on this — we should think about what happens when, in 30 years, they do become like us, right? But that's not next week, you know. For the time being, we're talking about fancy software, intelligent assistants. For now, the biggest danger is not that machines will come and kill us. It is that we become too much like the machines. And don't look any further than the smartphones, right? Talk about machine thinking: if you're not listed on the dating app, you don't get a date, right? If you're not highly rated on TripAdvisor, you don't get people coming to dine. I mean, that's machine thinking.
I know companies that use human-resource analytics software to define who they are going to fire next month. The machine reads all the emails, it looks at all the LinkedIn posts, and it says this person is useless because, you know, they haven't done enough. That's kind of a limited view of life, I would say. So, number five: all technological progress must result in human flourishing. Very simple rule. You know, there's a lot of technology that doesn't really do much for humans, right? It makes money — it might make money. And this is a very, very difficult thing, because why would you give up something that makes money just because it doesn't help humans flourish? The only two countries in the world that do not want to ban autonomous weapons — weapons, mostly drones, that can fly and kill people on their own — are the two countries that make them, the US and the UK. Why? You know, it makes money, so why not, right? Very simple. So this is our challenge here. Picture this: we're only at the beginning of the curve, right? Now we can sort of keep up with the machines here, but we are just human, linear — we go step by step, learning a few things at a time — and computers are already way ahead, and if you picture this going up exponentially, the head of that curve would be all the way up in the sky. Technology is exponential, but humans are not. And we have to live with that. But you know what? I don't care. I will use the machine, because the machine knows all these details that I can't possibly know — but the machine is still so far away from our human intelligence in terms of understanding the world, perception. So how will we sustain imperfection, humanity? Take the self-driving car, right? The argument goes: we don't drive ourselves any more, so we don't have all the dead people — two and a half million per year dying in car crashes. So that means we're no longer allowed to drive the car.
I think I could live with that, but is that going to happen in every part of our life? Taking away our decision-making? Sometimes it makes sense, and sometimes it does not. But clearly, this will be the challenge here, this natural progression. Right now we're here, right? And we may have mobile devices strapped to us, like a heartbeat monitor or so; and then we have prostheses, which are good if we've had an accident; and then we just connect directly to the internet — the so-called singularity. I like to say humanity will change more in the next 20 years than in the previous 300 years, and this is not an overstatement. This is not hyperbole or BS, right? If you look at what is happening around us, you can anticipate, in 20 years, things beyond our wildest dreams: genetic engineering, artificial intelligence, geo-engineering, solar energy, space travel. So here's a question I have for you: how far would you go with this? And this is the question we have to answer for ourselves and our governments, right? What would no longer be human? I mean, what is the point of technology if it relieves us of all humanity, right? We'd be superhuman, but no longer human — a machine, essentially. I can't really see the point in that. So there's a great organization called the Future of Life Institute, supported by Elon Musk, and they had a convention about artificial intelligence, and they came up with a couple of key things that I want to share with you for the future, right? The first point is that all systems have to be designed for human values. Not that humans have to be designed for the machines — the machines have to understand what we need, right? Shared benefit, equality, right? If we are going to automate half of our society and save all that money, where does the money go? Well, in America, the answer is clear, right? It goes to the 0.01%. But if we don't share the money, what's gonna happen? We don't have any more consumers.
We have crime, we have unemployment. So that's very important. We have to think of the ecosystem, how it all fits together. We have to distribute the benefits of technology. And I'm afraid that sometimes means that ugly word, tax — but as Bill Gates says, we should have an automation tax or something like that. And finally, responsibility. The American gun companies are really great at this. They make the guns that were used in 278 school killings in the last two years, and their argument is: guns don't kill people, people kill people. Have you heard that argument before, from many technology companies? Facebook is the place where all that stuff happens, but how people use it — that's not our responsibility. That's got to be the cheapest answer you can possibly think of. If you build something, you are responsible for how people use it. That's how it should work. So the number six principle is that we have to include the externalities of our business. In the oil business, the externality was pollution, and it was not part of the business model. In fact, the oil companies said: well, if we're drilling off California, there will be a couple of spills and a bit of a mess — but that's not our business, because we just sell the oil. In the digital business, what is the externality? Abdication of responsibility, deskilling, addiction — and we're only at the beginning. We laugh about this today, because people don't know how to read maps anymore, because they use Google Maps, right? Facebook, 2012: Zuckerberg's proud paradigm inside Facebook was "move fast and break things." And God knows they've broken all the stuff. In fact, you could say they've broken the meaning of friendship, because now it's just about likes. So here we are, Facebook 2018, and now Facebook is broken. Loads and loads of people are saying so — this is one of my articles, right?
It's time to regulate Facebook — and then we have all these articles saying this is a serious downward spiral, their stock is plunging, and basically all the changes to the newsfeed. And then someone — I think it was at Davos — said that they're like the cigarette companies, selling addiction. So I'm not sure what to think about this, because I use Facebook, but it's a typical example. How did we get to that point, and what do we do about it? I think for our democracy and for our lives, this would be a very bad thing, right? If we're going to live inside of this bubble that feeds us content based on algorithms. I think Mark Zuckerberg needs to come out and say: we need to reset Facebook back to human values. Whether he can do that with the stock market — unlikely, right? Because they want him to continue on this path, sort of cheating the world, right? So I think we can safely say that today, to a very large degree, technology is the new religion, literally. And Google is God. In some ways we can laugh about this and say, yeah, I can still put it away — but imagine this world with augmented and virtual reality, where it's going to grow like this. Imagine a world where this is all you do to get advice. This is how you do your banking, or your taxes, or your voting, right? In fact, you can just let it vote for you. And addiction, right? You know that feeling when you get a like, right? Somebody liked you. That's like a dopamine shot. Facebook has a couple of hundred psychologists working on making that better, so that we come back and get another shot. It's hard to believe, but all right. So the last principle is leadership in digital ethics. That is a key competitive advantage. It's not some punishment to be ethical. You know, Bertolt Brecht used to say: dinner first, then morals, right? Well, we've had our dinner now. We have a huge digital economy. Now we have to think about how we can keep it human, how we can deal with this.
I mean, if everything is moving into the cloud, right? Literally everything. And you know, this is what the Mobile World Congress is all about, right? Connectivity, data — good things. But how do we balance that, to be secure, safe, human? Do we still have room here for imperfection? And how would we do that? To be fair — I'm sure you agree with me on this — humans are the most imperfect, the most inefficient things you can imagine. We have to sleep, we have all these breakdowns, we change our minds, we lie, we make stuff up, we make mistakes, we screw up all the time. So what's our future in this? Where are we going with this? Key question. Here you see, on this grid, the most powerful technologies in the world, which are bound to make hundreds of trillions: artificial intelligence, robotics, biotechnologies, the Internet of Things, sensors and networks. And what's interesting about this grid from the World Economic Forum is that at the top right you have the most powerful and the most dangerous at the same time. Well, that's quite obvious, right? So how do we balance this? Some would go as far as to say, well, we shouldn't do that, it's too dangerous — well, that's not gonna happen, right? This is the new galaxy, right? These are the new things that we're gonna do. So I think that's the question: would you agree that we have an ethical imperative to make sure this power is actually good for us? And if we do, who would understand this? By and large, most politicians are hard-pressed to read their emails — or maybe they have them printed — but talk to politicians about artificial intelligence or so... There's a huge gap between what we see today and what's possible tomorrow. That has to be fixed. I think Barcelona is looking at this very strongly, and I think that's a good thing. Young people, women, and people who understand technology — that's definitely the way to go. Oh, there is one more principle here, in fact.
Okay, so wrapping this up, coming to the end: what is happening with work is clearly a big challenge. Our future is human-only work. In the old days, we used to toil in agriculture, and then we had tractors doing that for us. And now we're moving to a world where basically anything that can be digitized, automated or virtualized will be. Anything. Bookkeepers, fast food, truck drivers, airplane pilots, financial advisors. The other day I saw a great demo of a dental hygienist robot. That sounds very dangerous — I have to admit I wouldn't try it myself. But in Dubai, you can fly in a drone, right? You can hop in a drone and be flown around. I mean, that is one crazy idea, right? So a lot of people are saying that will make us useless. I don't agree. Because the flip side of the whole thing is that anything that cannot be digitized or automated becomes extremely valuable. And that is 95% of our lives. I mean, let's be fair about this: we don't live to suck data, right? We live in a whole different place. And all these things — as Hans Moravec said, what's very hard for a computer is very simple for a human, and what's very simple for a human is hard for a computer. Or vice versa. You understand what I'm saying — I said the same thing twice, but that's fine. So look at these attributes. Would you rather have your kids have empathy and creativity, or would you rather have them get a degree in programming? Hey, if they can have both, great, right? That is also possible. So here's a short speech by Jack Ma, only about one minute, at the World Economic Forum, on education. It's a good challenge. Can you turn this up? "If we do not change the way we teach, 30 years later we'll be in trouble, because the way we teach, the things we teach our kids, are the things of the past 200 years — it's knowledge-based. And we cannot teach our kids to compete with the machine." This is Jack Ma, the CEO of Alibaba.
It's interesting for a Chinese tycoon to say that, right? But the message is totally clear: compete with the machines and we're toast. We can still do that today, because the machines aren't quite there yet, right? But a lawyer who does non-disclosure agreements and e-discovery — do you think that lawyer is gonna have work in five years? You can forget that. It's just as certain as music being in the cloud now, not on the CD. So we have to think about how we can re-proportion, and the World Economic Forum has a great graph of how the skills are changing. And that's what we have to teach our kids, right? Critical thinking, creativity, emotional intelligence, cognitive flexibility. And how do they learn that? Do they learn it by doing an MBA? They may, at a good school. Would they learn it by traveling in India for half a year? Definitely — otherwise they wouldn't survive. So it's different skills that we have to teach our kids, basically on that balance, right? And how do you teach EQ? Well, you can't teach EQ. This is something you acquire by action, by interaction. As I always like to say, things don't happen in presentations; they happen in conversations. This is a different part of the brain, where we go inside and figure out what the next step is. So, the final one: in this digital future that we're looking at, where everything is connected and that's basically not stoppable, it is our humanity that makes the difference. Right now, if you're bad at tech, you're in a big club — most people are, most companies are. In 10 years, everything will be connected, everything will be efficient, and just saying that you have an efficient business is not going to make any difference. Everybody will use technology. This is what we have to think about: how do we put the human back inside of technology, on top of technology — not how we stuff technology into everything that's human?
So we replace the doctor, we replace the therapist, we replace a visit with Skype. It's a question of balance, you know — how do we go forward with this? So, my conclusion, the key takeaways — a couple of things from the talk already. Point number one: we have to develop a future mindset, a mindset that is exponential. And it doesn't matter how old or how young you are, it doesn't make any difference. You have to understand the future, because the future is going to happen so fast you'll think it was only a week. And we're on this ramp now — compared to the last couple of years, you ain't seen nothing yet, as the song says. We're looking at the end of routine. The future of work is humanity on top of technology. Look at your job and say: anything that's routine, outsource it. Let somebody else do it, let the machine do it. Data is the new oil. We have to think about how we regulate this and what we want. And I foresee a lot of these companies either being broken up, or being forced to change their practices, or to be more transparent. But this is a huge battle, right? Because we're talking about very, very serious industries. Thought leadership in digital ethics: this is a great spot for Spain, this is a great spot for the Mobile World Congress. It's not enough to be connected. You know, I've been doing mobile work and working with operators for 15 years, and they always talk about how great it is to be connected, right? But I'll tell you a funny story. In the countries where you're not connected, you have no greater wish than to connect. And in the countries where you are really connected, you have no greater wish than to disconnect, right? Today, if we go offline, it's like a luxury. It's like: ah, you know, I can be left in peace. Some of us, at least. So we have to think about what it means to be human in this digital world. Is connectivity the holy grail? I think it has to make more sense than that.
Number five: embrace technology, but don't become it. Technology should serve human flourishing. It is not the purpose; it's the tool. Very important to keep that in mind. So I'll leave you with this admonishment at the end: whether our future is heaven or hell is up to us. We have all the tools. We can make it heaven — we can beat diseases, we can switch to renewable energy, we can create an equitable world; technology will allow all of that. But on the other side, we have to tell it what it should not do. We have to make sure that we can still remain human in this context. So the bottom line is really between those two poles, right? Good technology, bad technology. We have to invest as much in humanity as we invest in technology. That's the key point. And I think if we invest a lot in humanity, we're also going to get better returns in general, because it creates sense and makes sense for everyone; it creates better brands. With that, I thank you for your time — the bell has rung, so I gotta go. Thank you. Thank you.