So, it's my pleasure to be here with you today. I'm originally from Germany. I moved to the US when I was 22 years old to become a musician. So I was a musician and a producer. I went to Berklee College of Music in Boston. I had a career of roughly 15 years, made 20 records, until the internet came along. And in the mid-90s I realized very soon that the internet was a huge force of change for the music business. And I went into digital music, did a bunch of stuff, I started a company like Spotify in the late 90s — big mistake, too early. But so I was in the music business, and then in 2001 in San Francisco all of us went bankrupt. You may remember that time, if you're old enough; it was quite nasty. And I wrote my first book, called The Future of Music. It was about the music business, and Spotify is based on the idea of the book, which is music like water. That was in 2004. And I became a futurist after that, talking about essentially what is facing us in the next 5 to 10 years. So my work is not really rocket science. My latest book, by the way, is Technology vs. Humanity. It's kind of a provocative title; we'll talk about that a little bit more. We have 300 copies out there waiting for you at the coffee break, so you can come and get one signed — and you can sell it on eBay for a few more dollars with a signature in it. So I'll be there signing the books. But this is really my job. There's nothing miraculous about it. Maybe a certain kind of talent, but it's really just listening. In China they say: if you want to know about the future, ask your children. Because they can play, they can experiment, they can find out new things — while we're very busy and focused. It's just paying attention to the next 5 to 10 years, the obvious things that are happening. The things I work on are data and intelligence, which is all around us, and humanity. And there's a very big difference between the two.
You know that basically our life isn't an algorithm. Unless you're from Silicon Valley — in that case... Just kidding. But an algorithm is a binary function, right? 0, 1, 0, 1, 0, 1. And it can run trillions of operations and do all kinds of things. But humans aren't really binary, right? Think about this for a second. Today you say, well, I like this; tomorrow you change your mind, right? You come up with a slight modification of the truth, no problem. You tell a story a little bit differently. You infer information, when talking to somebody, that was never actually said. You meet somebody and within 0.4 seconds you know: this person, no, I'm not going to keep talking, right? You have this connection or you don't have it. Because we're a little bit more than algorithms. Some people would argue with that, but I think we're beyond an algorithm. We're essentially what I call an androrithm, right? A human biology. So it's very important to keep that in mind when we think about the future and where we're going. I released a new film yesterday. It's called We Need to Talk About AI. The URL is right here — impossible to remember, but you can just look for "We Need to Talk About AI" on YouTube. It went live yesterday, if you're interested in artificial intelligence. So I kick off by saying that, you know, these days I travel the world. I have about 46 people working with me on future scenarios, which we do with companies. And we keep hearing a lot that people are worried about the future. I don't know how you feel about that here in the US. You know, I live in Switzerland, but 20 years here sort of made me half American. So thinking about this: a lot of people are saying, well, the future is not going to be much fun. The robots will take our work. Politics are a mess. Everything has a dubious, you know, potentially dangerous slant. All these things could go wrong.
And of course there's geoengineering and, you know, all of the stuff that we hear from Hollywood. But the reality is, the future is really better than we think. There have been so many amazing things. When we look at what we hear every day, we hear stuff like this, right? We hear how bad things have become. And that is the truth, of course: climate change, CO2, fresh water declining, species declining, and so on. And then we see stuff like this: the end of work, automation, inequality. We hear that every day. Not to say that's not true — it is true, right? But let's take the flip side. The decline of the illiterate population: dramatic in the last 50 years. The share of the world population living in poverty: in total decline. The possibility of replacing oil and gas with solar fuels: essentially 20 years away. We're going to live in a world of abundant, almost free energy. It's hard to imagine. So those are all good things — although if you're in the oil business, maybe they're not, right? But even as an energy company — I know energy is here as well — even there you can say that's going to be a good business. It doesn't really matter if it's, you know, coal or gas; energy is a good business. The future is better than we think, with a big asterisk: we have to collaborate to make it so. Technology is giving us power that is just unbelievable, right? We're becoming superhuman. I'll show you some examples of this. I was in Japan three months ago. I use an app called SayHi, an automatic translation app. As long as I have internet, I can speak to people in like 34 languages in real time. I had a conversation with a sushi chef: I spoke in German, and he spoke back to me in Japanese, through the app. And as long as we kept it kind of simple, it worked. Mind-boggling progress. But then, you know, if we're looking at this, we can say that we're in this evolution of how humanity is unfolding.
You know, we're here now, and if you have kids, you know what I'm talking about, right? Humanity will change more in the next 20 years than in the previous 300 years. On this chain reaction, we can say that basically we're going into a world where we're going to be connected, one way or the other, all the time. And that is a huge thing. It's not just positive; it also has issues, quite obviously. How much do we want to depend on this, and where are we going with it? You've heard this many times before, but it's finally true: data is the new oil. I've talked about this for 15 years. People used to laugh about it, because oil was so powerful, right? The most powerful companies in the world today are not the oil and gas companies, or anybody else — it's the data companies. Platforms, organizations, social networks, search engines. In 2016, 7.8 trillion dollars was made by selling data — information, data mining, in some form or other. And then Andrew Ng from Baidu said the other day — and I think it's very true — artificial intelligence is the new electricity. So if you take oil, that's the power. And now, how are you going to get the oil out to people? The electricity, the network — that's artificial intelligence. Because these days our information is growing so vastly every day that without artificial intelligence we can't do much with it. 98% of most data is useless — it's unstructured. So this is really what's happening around us, and my good friend Kevin Kelly from Wired Magazine said the other day: the next 10,000 businesses will just take an old business and put AI in front of it. I'm sure you know what I'm talking about, right? So today you would say, ah, this is the new toothbrush, powered by AI. This is the thing that you hear every single day. But really what they mean to say is that it's technology that's not stupid. That's AI. Intelligence like ours, human intelligence — we don't even know what that is.
I'll explain shortly. I mean, human intelligence is way beyond the binary instrumentation of machines. So systems driven by AI are really just fancy software, but what we see here is entirely new businesses. The connected car — and of course, now that we can just sit back and relax, advertising in the car. This didn't even exist, because we had to drive ourselves. Let's make no mistake about this: it will be a long time before we can sit in a self-driving car in Germany on the Autobahn, go 200 miles an hour, and eat a Bratwurst. That's going to take some time. But in Los Angeles, in a traffic jam, it works fine. And going from the hotel to the airport in Las Vegas in a self-driving car — that's reality very soon. That's going to open up a bunch of different things. Smart cities — you've heard about this, right? Basically the internet of things: connecting traffic, logistics, supply chains can save up to 60% of supply chain cost. Gas, time, fulfillment — a mind-boggling opportunity, and of course a huge security problem, right? Because whoever can get inside there has the real power, so that's something to think about. Then we have, of course, technology that allows the complete networking of online and offline — Amazon Go, for example, in Seattle, where we can just walk in and get something and not even actually register or do anything, just use the mobile. So these technologies are everywhere, and one key term here is machine learning. And machine learning is interesting — every time you turn around and talk about AI, people say, well, it's machine learning, right? So here's the definition, just quickly: machine learning is the science of giving computers the ability to learn and find insights without being explicitly programmed. "Without programming" — that's the key phrase.
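To make that definition concrete, here's a minimal sketch in Python (my own toy illustration, not something from the talk): the program is never told the rule y = 2x + 1 behind the data; it recovers it purely from examples, via least-squares fitting.

```python
# "Learning without explicit programming": the rule y = 2x + 1 is
# never hard-coded; the program infers it from example data by
# least-squares fitting (pure Python, no libraries).

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error over the data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Observations" generated by the hidden rule y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)        # 2.0 1.0 -- the recovered rule
print(slope * 10 + intercept)  # 21.0 -- prediction for unseen input x = 10
```

The point is exactly that key phrase: nobody programmed the rule in; it was learned from observations, which is all "machine learning" means at its core.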
So in other words, if you're in advertising or marketing: machines are learning from observing, and then they can simulate something and come up with a better solution. This is a very, very big deal, and it's happening all around us. And then we have us, right? On the other side of the equation. How do we learn? There are dozens of different ways in which we learn. We're living in a world that's much beyond what machines call learning, right? This is why I don't like to use words like learning or thinking or intelligence — they kind of imply it's like us. But I think if we just let the machines do what they do best, which is the heavy lifting and the data, that's fine. That's plenty of benefit right there. We don't have to have the machines be like us. That is a very bad idea. So look at what happens, for example, in self-driving cars. You'll see lots and lots of different things happening in the near future, like this from MIT, showing how cars can navigate an intersection automatically. That is, of course, provided they're all self-driving. And if this works, we're looking at the end of road signs, traffic signs, and street lights. Because in the autonomous intersection, cars don't even stop — the system knows the position of every single car, so they just move through the intersection without ever stopping, like down here, right? That's called the slot-based intersection. So looking at this, imagine this for advertising: a system that is completely self-organized, self-running. In a way we have that now, with programmatic advertising. But this is going to be a big issue, because in this kind of traffic you wouldn't be allowed to drive. Imagine you get to an intersection where everybody is automated but you. You'd be in deep trouble. So that's something we have to think about: what does this do with cars? And really, this is our challenge, right? Technology is exponential, but humans are linear.
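The slot idea behind such autonomous intersections can be sketched in a few lines. This is a deliberately simplified toy model of my own (two crossing directions, a central manager granting time slots), not the actual MIT algorithm:

```python
# Toy slot-based intersection: each car requests a crossing time, and a
# central manager grants the earliest slot that doesn't conflict with a
# booking on a crossing path. Illustrative simplification only.

def reserve(requests, slot_gap=1.0):
    """requests: list of (car_id, direction, arrival_time).
    'NS' cars conflict with 'EW' cars (and vice versa) when their
    crossing times are closer than slot_gap seconds.
    Returns {car_id: granted_time}."""
    booked = []   # (time, direction) slots already granted
    granted = {}
    for car, direction, arrival in sorted(requests, key=lambda r: r[2]):
        t = arrival
        # push the slot later until it clears every conflicting booking
        while any(d != direction and abs(t - bt) < slot_gap for bt, d in booked):
            t += slot_gap
        booked.append((t, direction))
        granted[car] = t
    return granted

cars = [("a", "NS", 0.0), ("b", "EW", 0.5), ("c", "NS", 1.0)]
print(reserve(cars))  # {'a': 0.0, 'b': 1.5, 'c': 3.0}
```

Note that no car ever stops: a car that can't have its requested time just gets a slightly later slot and adjusts its speed to arrive exactly then — which is also why such an intersection only works if every vehicle is automated.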
On this curve, we keep learning and improving marginally. But technology — Moore's law, Metcalfe's law — technology is now at the point of 4, 8, 16, 32; in roughly 10 years, technology will be so far ahead that we have absolutely zero chance of competing. Because we're not unlimited. We have to sleep. We can't just put a plug into our brain and get more juice. That's very important to keep in mind: when you're talking about marketing, you're talking about people. The information about people — that's unlimited, that's exponential. But relationships? They're linear. The scientist Hans Moravec said something very important about technology: whatever is very simple for a human is very hard for a computer, and vice versa. So keep that in mind. The way I look at it, we should use technology to make our work easier. We should not use technology to make the work go away that we should be doing as humans: creativity, design, relationships, trust, meaning, purpose — all those "minor" things that actually matter. Let's keep those in mind, because I think that's where we're going in the future. If you look at what's happening today in media — so-called algorithmic media — there's a great benefit to that, primarily that it's free and it's cheap. It's not easy to do, but there are very few humans involved there. And we have all the debates about what that means for democracy. And just recently, right? Mark Zuckerberg's hearing at Congress was very interesting. I think Mark did really well there, but only because the questions from Congress were rather useless. Actually, I think all of us could have done a better job, right? But it was interesting what happened, and subsequently how Facebook has raised all the expectations of what they're doing themselves. This has really become a soul-searching process. And it's right in your turf: how do you market to people without obsessing over tracking, surveillance, performance? That's a big issue.
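That 4, 8, 16, 32 progression is just compounding, and a couple of lines of Python (my illustration, not from the talk) show how quickly doubling leaves marginal, linear improvement behind:

```python
# Linear vs. exponential improvement: linear adds a fixed increment each
# step; exponential doubles each step (Moore's-law style).

def grow(steps):
    """Return (linear, exponential) after the given number of steps."""
    linear, exponential = 1, 1
    for _ in range(steps):
        linear += 1        # marginal, human-style improvement
        exponential *= 2   # doubling: 2, 4, 8, 16, 32, ...
    return linear, exponential

print(grow(10))  # (11, 1024)
```

After ten steps the gap is roughly a hundredfold; after twenty it's about fifty-thousandfold — which is the sense in which linear humans "can't compete" with exponential technology.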
I mean, performance is such a big deal today because we couldn't get it before — it just didn't work well enough. It keeps working better and better, but imagine 10 years from now: performance will be 1,000 times better. Technology will be able to do this. So now we're looking at things like what's happening in Europe, this whole discussion about: how good is performance allowed to be? Are there limits? Well, limits didn't really matter until now, because, as I said, it wasn't really working so well. So we never reached the limit of what is a good thing. And now we did, for the first time, in this whole debate about what happened on Facebook. So there are two variants in this. One: the future could be like this, where we are literally drowning in data. Some people, I think, already feel like that — constant notifications. Or we could become superhuman, where we can reach the universe, right? No limit to what we can know. And it's funny: every second movie that's made in Hollywood these days has the same motif, which is that we become superhuman by plugging into the internet directly — what's called the singularity. Now this is a very dangerous thought, obviously. Because imagine for a second that you were superhuman. What would you lose? Well, clearly, if you're superhuman you wouldn't have any accidents, mistakes, lies, serendipity, discovery, emotions. Those would be quite difficult when you're essentially part of a machine. So technology really is what I call in my book "HellVen" — hell and heaven. It's funny to say, but I think right now it's 90% heaven. I'm quite happy with my news reader and my iPhone and WhatsApp, talking to people around the world for free. The telecom companies are losing a hundred million dollars a day because of apps like WhatsApp. So I'm happy about lots of those things — but what happens in the future?
Now, we have to think about where this is going, because this is the kind of deck of cards that we have as humans, right? Values, ethics, humanity — this is just basic stuff. As the Dalai Lama once said: everybody has ethics, and some people have religion. That's a completely different cup of tea. This is really what we are. And then technology adds a new card every day. Quantum computing, genetic engineering, artificial intelligence — every day there's a new thing. Scientific progress is just absolutely mind-boggling. It's explosive. So then we have to ask the question. Today it's really no longer about whether we can do something — does it work, how much does it cost? We discuss that all the time. The future question is: why? That's the only question we have to ask. In five years we won't be sitting here talking about how much it costs or what the performance ratio is. We will ask: why are we doing this, and who is doing it? Can we trust them? The only question that matters to humans is why. And why do we do things? Well, ultimately, of course, happiness, right? Customer happiness, our happiness — that's kind of the goal of what we're pursuing. So it's very important — I talk about this a lot — I think we need to think about what are called digital ethics. How do we behave in a world that's so powerful, beefed up by technology? The question is: who is mission control? Who controls all of this? Do we control it? Does somebody else control it? And lucky enough, since we're here in the US: 90% of technology is controlled in the US, right? But in Europe we feel like, oh, that's kind of interesting — how do we control our destiny, our digital destiny, our information? Hence all the discussion about privacy and the GDPR and all the different debates.
But this is the key point, right? When we think about technology and how it relates to us, this is the number one challenge and opportunity. I'm sure you're with me when I say that every challenge is also an opportunity, because the people that rise up to the challenge are those that lead the market. And I can guarantee you — for marketing in general — this is the whole discussion about how we do this in the future: by creating relationships rather than surveilling people, tracking people, chasing people, creating noise. How do we create value? Content marketing, yes, of course. But there's a whole debate about how far we go. You know, I live in Switzerland, so I like to use this theme of the connected cows. We're actually connecting the cows in Switzerland now. Every cow gets a little RFID band, right? And they can walk up to the milking machine; the machine knows who the cow is and milks them any time they want. It's $100,000 for one of those machines, right? But the cows are completely tracked now. And they're very happy with that, because they give more milk and that sort of thing. But once we're really connected — think about this for a second — the more connected we become, the more important the ethical framework is. We're going to connect all media; that's already happening, right? Netflix knows all about your binging habits. We're going to connect our bank accounts — Amazon is becoming a bank too; that was announced a couple of weeks ago. We're going to connect all our little digital devices, our driving, our healthcare records, our DNA. So it's very important that we find a way forward by saying: yes, you can use it, if. We can't just say: well, it exists, so we can use it. That attitude worked until now, because the technology wasn't powerful enough — it didn't really create all that friction. But imagine: if my healthcare data is available, who's allowed to use it?
My driving data, my home data, my smart city data, my smart home data, my media data. So that's where we are right now, as everything is moving into the cloud. Everything — that is a given, because the cloud is super cost-efficient, it's fast, it's powerful, it enables new research, new possibilities. So media, health, traffic, money — digital money, the blockchain. When we do this, we have to ask: who is accountable? We can't just say, well, that's fantastic, everything is in the cloud, so it's a free-for-all. I don't think that's a good idea. I mean, we're going to see instances where your DNA is cloned, right? Where there's a clone of you walking the streets. That sounds like science fiction; we're only 10 years away from that happening. We already make digital clones, which I'll show you shortly. But this is clearly the question. When we use social media, it's sort of distracting us, and that could be perceived as good or bad. But in the future — artificial intelligence, quantum computing, the blockchain, genome editing, virtual reality — well, that's not the future, that's today, right? It's here now. So we have to ask the question: is it more important to have algorithms, or is it more important to have relationships? And the answer, of course, is both, right? We should not put algorithms over androrithms — human things. We should not always put convenience over consciousness. And this is a very difficult thing, because convenience is what makes money, right? It works. So now we have to think about where we go from here. China, for example, has a credit rating system called Sesame Credit, where every citizen is going to be rated based on social media, based on your tracking records, based on your public information — which in China means everything, really.
And so, from everything that's available, you get a score — between 350 and 950 — and that's applicable if you want to buy a house or get a job, or on mobile apps. That feels kind of extreme to us. And we have things like what you see in Black Mirror, the UK TV show — I think it's on Netflix, right? — where rating everything becomes the new thing. So the thing is, in the end, this is not just about privacy; it's about being human. And this is where we have to draw the line. Good marketing, in my point of view, gets the information that is willingly given to us, creates a relationship, creates value, and creates a mechanism where we avoid these issues. And that is what we have to invent. I think that's the process we're looking at right now. How would that work? How would Facebook — the biggest country in the world, so to speak, with 2.1 billion users — how would they do this? That's a soul-searching struggle, right? Clearly, you can see Mark wrestling with it every time he presents something. How does that work? I think they'll get around to it, but in the end, really, this is the hard part: technology can do both. And technology has no ethics. Why would it? Technology is a machine. It doesn't know what you're thinking. It doesn't know what you really like. It knows the numbers, right? But it cannot make moral judgments. A self-driving car could run over this person or that person; it wouldn't matter — it just follows the rules it's given, if it had to choose. We're going to have to put that in. We're going to have to add that. So everything in technology has good or bad uses. You can be addicted to television, right? And lots of people were, or are, addicted to television, or the internet, or social media, or the mobile phone — or, very soon, virtual reality. So we have to find a balance; that's important. Here's something interesting — experiments in ethics, right? Mattel had a doll called Hello Barbie.
I think it's been taken off the market by now. This doll connected to the internet, and the kid could speak to the doll in real time — like Siri, Cortana, Google Duplex maybe — as if it was a person. Think about that for a second. What does your kid learn — your four-year-old — by speaking to a doll that's connected to an AI? It learns that people are pretty stupid, right? That's a great thing to learn for a four-year-old. And then any real person would seem far too complicated if I've spoken to the doll for four years, right? Why would I need anything else? There's an app called Replika. Please don't try it, okay? I don't want to send business to them. They actually replicate you, so that in the case of your death, your loved ones can keep on talking to the app as if it was you. I mean, that's powerful, right? Straight out of Black Mirror. So now, if you're working for a brand, this is the topic for the next few years: data ethics. The ethics of technology. Why do we do something? What do we give back? How much do we respect the user? And public opinion is setting the course on this, you know, on applying this whole thing of ethics. There was a great article I saw yesterday: only ethical marketing will stand the test of time. And this is your job to figure out. And I can guarantee you, you'll get more results for it, not less. So this is our mission: to figure out where this goes. And Dov Seidman had a great piece in the New York Times the other day talking about this. He says: this is not an engineering problem, it's a business model problem. Fundamentally, it will take more of what he calls moral-ware, right? Like software — our context of how we do things. Not seeing people as a click. Now the reality is, of course: how do we combine those two worlds? Because we still have to deliver results. I advertise for what I do; I want results. I don't want just ephemeral talk about ethics.
So now we have to think about where we go with this, and this balance between efficiency and freedom, security and privacy, intelligence and humanity. That's a balance we have to strike in our technology and in what we do. And when we look at our reality, we have plenty of this already, right? We have plenty of technology that makes a copy of us in a social network like LinkedIn, which is good — that's why we use it. But it's essentially making a copy of what I pretend to be. Pretend. Amazon Echo, Google Home — same thing. Very useful, very convenient, but you have to think about: okay, what do I think about creating this asset that's kind of like me, on the other end, that I can speak to? How do you reach people without overreaching? And the answer is simple: when you have a relationship, you don't overreach. When you have a real relationship, it's not about efficiency. You don't love your husband or your wife because they're efficient. If they're inefficient, maybe you don't love them quite so much — but you have a relationship, right? That's complex. And this is what we have to understand — I'm working on a new book about this whole topic. The old philosophy of advertising is data mining on the internet. The new philosophy is data myning, with a Y: this is my data, and I'm going to allow you access. If I like Wolverine or whatever, I'll give you access; you can become my friend. If I like General Motors or the Hilton brand, maybe I'll do the same, right? But it's not mining in the sense of how we thought about it 20 years ago. So this is important about technology. Steve Jobs — rest in peace — talked about this a lot in his speeches. Every second sentence he said was about magic technology. And it was so true, and it still is true. Magic technology is just absolutely amazing. But then, when we use it too much, we get a little bit manic, you know — like with the smartphone.
We laugh about this, but on the other end of the spectrum it becomes toxic. It's poisoning our lives. Just go to a restaurant in Southeast Asia, like Malaysia — and there are people, I kid you not, entire families, where everybody at the dinner table has two devices going at the same time while the food sits in the middle, you know? I would say that's pretty toxic for us, right? I can't imagine that you could have actual relationships that way. When you love the screen more than your kids, I think that is an issue. When we talk about advertising, we have to focus on the magic. A little bit of manic is okay — I think all of us get into that. If you're a Spotify and Netflix user, you get kind of manic about your lists and all these things, right? But toxic, no. And we've gotten pretty close in some instances lately. We need to watch out; we barely survived, I think, this whole debate about what's going on in social media. So, a couple of magic examples. I love Spotify. This is a magic use of technology: the Discover Weekly playlist. And it's not done by people, it's done by an algorithm. You know, I used to be a musician, and I made 20 records, and some of them are on Spotify — but under different names, so don't look for my name there. Save yourself the trouble. But the other day they made a playlist where, out of the 100 songs they recommended, 20 of them were mine, right? There's no way Spotify could have known. But it says: you'll probably love those songs. And of course I do, right? They're my songs. This is smart technology that is of service to people, and there's no drawback there. The North Face is using IBM Watson to create a web-based interface that gives intelligent responses to people who want to buy a jacket, using essentially live data about their vacations, retreats, hiking and so on. Very powerful stuff. Airbnb, right?
The experiences that they're offering — that's not done by people, right? It's an algorithm that figures out what matches, how much you can charge, what the weather is going to be like. We were talking earlier about the experience and, you know, the idea of understanding how that all goes together. And of course there's this German bank called N26, which is cleaning up the German banking landscape right now. All the kids are signing up there, because it's a really cool experience: no real buildings, no real money, no transaction fees — it's in the cloud. So: powerful experiences, magic technology. Now, let's listen briefly. You've heard about Google Duplex — the idea of a machine talking to a human where you would not actually know the difference. Here's a short example. So it's interesting, right? Is that magic? Is it manic, or is it toxic? Are they going to say, "this is a machine calling you"? Then you'd probably hang up, right? So it's an interesting angle. It could be amazing, useful, convenient — or strange, creepy, confusing, dehumanizing. Hard to say; the jury is out on this, but I think we have to think about it. Everything that we do has to, in the end, serve a human purpose. So this is a typical example of how we push the boundaries, like with virtual reality. Maybe it'll be so good that we don't even want to go without it anymore, and we just kind of fall off the cliff there. So that's a key question: how much technology is too much technology? And this is not just because I'm 56. I think even a 15-year-old — one of my sons is 23, and he asks this question all the time — asks: how much of this is actually good for what I want to do, what I want to achieve? So here's the key future principle — I work a lot on these future principles: more machine intelligence must always be balanced by more humanity. More data must be balanced by more relationship.
I think that's the key to our future. That's also what some people call EQ, the emotional quotient — the counterpart of IQ. Machines don't have EQ. Machines don't give a damn about emotions, or the stuff that's in between the cracks, or understanding us. It's just about numbers. Because, you know, technology — you could describe it as something really powerful that lets us do many things — but psychologists use this term PERMA, and that's really what we want. Technology is not what we seek, but how we seek. And this is what we seek: positive emotion, engagement, relationships, meaning, accomplishment. It's very important to keep in mind which way we're going. And if you're looking at this sort of brave new world of artificial intelligence — it's in the news every single day, and it kind of makes you feel like the world is going to be pretty much run by machines. But fear not: we're far from it. There are lots of things these machines can do, but they're far away from what we as humans can do. This idea of artificial intelligence chiming in on our decisions — it's interesting, and it works, right? But it's still just in the neighborhood of being a tool. It's not something that replaces human relationships. So if we're going here, we're saying: okay, this is the key question underneath all of this. How computable are we? Is that the digital gist of us? Is it possible to express ourselves in data? Are we just algorithms? It's interesting: when you ask that question in Europe, a lot of people say, oh no, no, of course not. But in China or Silicon Valley, people say, yeah, maybe we are just technology. That is a question we have to answer eventually. Otherwise we can end up in these kinds of situations where machines give us counsel, right? Or where we just get a download of the latest skills. That could be quite useful, as envisioned in a lot of science fiction.
If we're just data, then this is our logical destination. But here's the thing: in the end, human intelligence means a lot more than just numbers. Scientists say we have between eight and ten different types of intelligence, and four of them are social, intellectual, kinesthetic (in the body), and emotional. Machines have just one of them, and I wouldn't even call it intellectual; it's computational. We're going to have a machine very soon that has the computing capacity of the human brain. Well, that already exists; it just fills a whole room. But in five or ten years we'll have one machine with the computing power of one human being. By 2050, the estimate goes, we'll have one machine with the computing power of all human brains. But does that mean it has this? Will the machine understand the things that are not explicit? And will it care? I think machine intelligence is not at all like human intelligence, and I think that's a good thing. We can use it for that reason; we can do many things. Really, most of these things are what I would call IA, not AI: intelligent assistants. And here's a list of really trivial things: risk management, financial analysis, investment management, trading, software jobs. If you're in the marketing business, this is primarily what's happening right now. It's not replacing people; it's replacing other software. So if we're looking in this direction, I don't think we should be afraid of this technology in the sense of Ex Machina or something like that. The real big driving force in your business is intelligent assistants, IA: machines that can do things like generate a travel itinerary, stuff that software used to do, but that now actually becomes possible. The other idea, of getting into our brain in a truly human way, is still pretty far away.
So when we think about intelligence, let's be sure we don't fall prey to what I call machine thinking. It's funny how many people think the machine always knows better, technology always does a better job, and we should think of people as algorithms. We have to be very careful with this, and I think AI is one of those key areas. When we look at movies like Ex Machina, and there are dozens of those now, it makes us think this is tangibly close. Yeah, why not? It's just a fake human. The reality, however, is pretty far away at this point. What's called the intelligence explosion, a machine that can do anything, is 50 to 100 years away. So the kids of our kids of our kids may have to worry about this. For the time being, I think we have to reduce the fear of this technology, not the caution, and focus on safety. That's going to be very important when we talk about artificial intelligence. Let's talk about work briefly, and then we'll have a quick question and answer session. So what is the future of work in all of this? Are we going to become extinct? Are we the horses of the digital age? Horses used to be everywhere, and now they're just toys, pets. Here's the thing: I think basically what you see happening with technology is that anything that can be digitized or automated will be. Anything that's routine, database work, filing, research, all the nuts-and-bolts, rational-thinking stuff, machines are learning. In roughly ten years: bookkeepers, pilots, drivers, financial advisors, but only at the level that doesn't require human expertise. And this is why I think it's not true that we're going to be useless humans. Do you become useless when the routine is done by a machine? The answer is: if you know anything about what you're doing, you're going to move above the level of the machine and perform above the routine.
Because the flip side is true: anything that is not automatable becomes extremely valuable. And that happens to be the things we can't define: compassion, intuition, emotions, creativity, imagination. Einstein once said imagination is more important than knowledge. That was before the internet. Because computers and machines will have knowledge; they already do. Look at Google Duplex, right? It's not going to be too long before you can ask it pretty much any question, even futuristic questions. It'll take my job. But here's the thing: in order for us to take advantage of this, we have to get out of the old fishbowl. We have to think of ourselves a little differently. And that's what's happening right now. I think we're at the point where we can say: using all that cool stuff, we have to realize a couple of really simple things. Machines can do lots of things, but machines don't do relationships. Have you seen the movie Her? The slight problem was that she didn't have a body, so she was making love to 3,500 other guys. Turns out, in the end, that's not a problem for a computer. But machines don't do relationships, and relationships are 95% of what matters to us. So combining those two things will be very important. It's a huge skill shift from the left brain, the mathematics, the calculation, to the right brain; a skill shift the World Economic Forum has pointed out. The skills are shifting from 2015 to 2020: critical thinking, creativity, emotional intelligence, cognitive flexibility. If you have kids, or you're about to have kids, this is what they have to learn, what makes them more human. If your kid can be a scientist, great; a great scientist, even better. But a programmer? Machines will program themselves in five years. Hard to believe, but that's already happening. So looking at what that means for marketing, quite clearly, these are the skills. Technology will cover that part of the brain, here.
And that's where we're going; our skills will be more on that side. I talked about efficiency before. It's important to realize that efficiency, or what you sometimes call performance, is crucial, but it's not the ultimate destination. Because the bottom line really is this: computers are for answers, humans are for questions. Humans are not about efficiency. We appreciate efficiency, but what really matters to us are the right questions, the purpose, the relationship, the trust. So when we look at the future, we shouldn't be too worried about the machines taking over, at least for the next 50 years. We can talk about that a little more, but for the time being our biggest problem is that we become too much like them. We become too lazy to do anything ourselves. We use machines as a shortcut. We build relationships with web pages and apps. In marketing, this is crucial: yes, you're going to use algorithms to do your job, but the job isn't the algorithm. The job is the human. So let me summarize the key takeaways and then we'll take some questions. Point number one: technology is exponential, but humans are not. Humans are not algorithms; humans are not machines. The old question was whether technology can do something; the new question is why we are doing this and what it does for people. Data is the new oil and AI is the new electricity. And clearly we're going to see a lot of regulation here. It doesn't really matter where, because this is a powerful driver of pretty much everything. So the questions of data mining, moral awareness, leadership, and digital ethics are not just an afterthought. Twenty years ago we talked about green energy and renewable energy as nice-to-haves, right? As Bertolt Brecht once said: food first, then morals. That is changing. This is just part of our job now.
So that's the number one challenge and opportunity. I think we have to focus on the magic, ban the toxic, take leadership in this regard, and make it work so people feel comfortable with it. The end of routine is very important for us to notice. Do an exercise: write down all the jobs that are your routine, and give them away to machines. Routine jobs are jobs that don't require human thinking; they're monkey work, essentially. So a financial advisor who does nothing but rebalance portfolios based on data: at that lowest level, a machine can do that. At a higher level, definitely not. I mean, a machine can write a non-disclosure agreement, but can it be a lawyer? That's unlikely. So rising above that is very important. If we have kids, teach them the EQ. That's the only thing we'll have left in the future, because computers will have an IQ of a million, just ten years away. Finally, machines don't do relationships. Efficiency is for robots; relationships are what we build. So I want to end with a key term from my book, which you can get later: embrace technology, but don't become it. I think that's a very good path for our future, because this is how we're going to find out what we will be. I want to thank you for listening, and I'm ready for some questions. Thanks very much. So fire away with your questions. The first question gets a free book. You can also make a comment; you don't have to agree with what I said. Maybe I've answered all the questions. Have I answered all the questions?

I'm here to claim my book. One of the human intelligence domains you had a few slides ago, that machines couldn't, let's say, participate in, was consciousness. How do you define consciousness?

Okay, good. Well, that would take about two hours to explain.
But it's like this, I think. Machines today are very good at very narrow jobs, like driving a car or analyzing data, but at a very vast scale. A machine can look at a trillion data feeds about the live energy grid in San Diego and say, okay, something is wrong here, we have to do something. Humans can't do that. But what humans have is a really wide angle of looking at things. We can take multiple unstructured inputs, put them together, and create some sort of sense out of them. We can detect things that haven't been said; when we talk to a person, what they don't say is often more important than what they do say. That's generally true for humans. We also have a way of deducing things, and we can increase our capacity to increase our capacity: we can learn things, apply them, and learn again. Machines can't do that. A machine can look at all this data and say, yes, I've figured out a new way of doing it, but it's still going to be based on the machine learning process. Consciousness, in my view, is the capacity to bring all of those things together in a way that can't be redefined, can't be reprogrammed, can't be restructured. This whole idea of what we are as humans, call it sentience or awareness, that is the fundamental difference from a machine. A machine does not exist; we exist. And existence is based on emotions, capacity, understanding, biology, the body. Psychologists say that humans don't think with the brain alone; we think with the body. It might come as a surprise, but we're actually a lot more than just this. This is a very deep conversation; a big part of my book addresses it, so you should read it. But ultimately it may be possible for a computer to simulate consciousness, to simulate human beings. A simulation, though, is not the same as a being, in my point of view.
So we may get there in 100 years: machines that can simulate humans. For the foreseeable future, however, there's a vast difference. And this is why computers and machines are tools, and humans are not tools, if I can put it simply. I could go on for a couple of hours, but let's have the next question.

Thank you. You're throwing an easy one at me right there. Okay.

Right here, I have a question regarding disruption. What do you think is going to be one of the next major disruptions, and how as marketers can we prepare for that?

Yeah, I think the major disruption is this. If you look at the history of digital marketing, the first iterations of the internet were primarily about tracking: looking at things, defining data, responding, all the performance-based things that we got to work pretty well. They're based on the concept of information flow and responding to it. The future of digital marketing is going to be entirely different, because we'll be able to actually connect in a meaningful way rather than in a tracking kind of way. That's what we're currently reinventing. Content marketing is the first step in this direction: we're creating value that people want to see rather than have to see. If you look at media, most of it was based on some form of interruption; advertising on television was an interruption. But how do we go from interruption to engagement? If there are no more traffic signs and traffic lights and people aren't stopping, will we still have outdoor billboards? Or is the billboard going to be in the car? And would it then tell me a story that I want to hear? That's a very big question. Fundamentally speaking, we have to be very careful about abusing technology to build better mousetraps. That has worked, amazingly enough.
It has worked, but it will not work in the future, because circumventing the mousetrap is now becoming completely easy: do-not-track browser buttons, all the apps that prevent tracking. It's about 34% in the US, I think; on mobile it's more like 50%. We have to reinvent this. Building a better mousetrap is not the answer. What will that look like? We're still at the very beginning. Basically it's like this: the old way of advertising and marketing goes like this, and if you're lucky it grows a little. Then there are new ways of doing things, and they grow in parallel. And that's going to be the hard thing for us to figure out. I think we're about ten years away from the point where this old way of marketing subsides. Other questions?

Thank you for your inspirational talk. I have a question for you about AI. I was at the Code Conference in Los Angeles last year, and Elon Musk came on quite late as a guest speaker. During his Q&A he expressed real disquiet about one technology company in particular and its advances in AI. And we weren't sure who he was talking about; a lot of people were discussing it. It could have been Google, Facebook, Amazon, Tencent, Alibaba. Do you have that same concern about AI being mishandled? Do you have any sense of disquiet? Because he certainly was very vociferous that, in the wrong hands, it is dangerous, and it is being developed in one technology company that he considers to be very much the wrong hands.

Yeah, well, of course, I wouldn't want to disagree with Elon Musk; he's like the Silicon Valley emperor now. But, you know, Stephen Hawking said the same thing. I think basically what's happening is that at this point, most artificial intelligence is not intelligent by human definition.
It provides assistance to us: scheduling a meeting on Slack, figuring out your schedule, getting a proposal for an airline trip. But these so-called AIs are as dumb as a toaster compared to what we do in an instant. However, they can replace things that are robotic, like driving a car, as long as they stay robotic. Decision-making, creativity, foresight, the things that make us not machines: in my view, that's very far away. The biggest problem is that we may look at technology like this and give it too much authority. There's now a piece of software, I think in Florida, that decides on probation. If you're in jail, a video camera watches you the whole time, and the software reads your face and whatever you're doing in the cell. When it's time for probation, the software says, well, I've observed you for 4,000 hours and you're weird, you shouldn't be going. Is that something we want software to do, or should that really be a judge? Or should the judge use the software? This whole idea of robots and AI dominating us is possible, but it's pretty far away, and it's only possible if we allow it. It's kind of like nuclear power, right? We invented the nuclear bomb, nuclear fission, and we had two bombs, unfortunately, but after that we managed to figure out how to keep it safe, so we can build power plants. Most countries are not allowed to have their own bombs; there's lots of discussion, but so far so good. We're going to have to do the same thing with this technology. There's no way to go back and say, let's undo it. We're inventing intelligent machines, and we can certainly use them to clean up our environmental problems, to take care of water supply, to do all kinds of things like cancer diagnostics, and so on.
But at a certain point we're going to have to put some regulation around them and keep them safe. So the primary answer is: technology is here, it's coming, and 90% of it will just be intelligent assistance, extremely useful for us. It will cost jobs, and we have to be prepared for that: all routine jobs will be taken by machines. That means moving above the routine and reinventing our jobs. There's great research showing that about 70% of the jobs that will exist in 15 years have not even been invented yet. They don't exist. Our kids are going to have to invent their own jobs. And 50% of that will be the gig economy, not fixed jobs like ours. Is that a bad thing? It could be, if we don't have the right social structures, but it's not the end of the world. So I think it's primarily positive. And imagine this: there's a lot of research saying that because of technology, we may end up working two or three hours a day and making the same money, because we can have agents doing the work for us while we do the work they can't. That would be kind of utopian, right? Let's make it 20 minutes a day. In any case, I'm much more positive on this. I do think we need governance on artificial general intelligence and on military use of it, clearly; I'm quite worried about that too. But to a very large degree it's like nuclear weapons: not a good thing, but we found a way to regulate them. That's something we have to work on. So I would say it's clearly a possible danger, but I'm not with Elon that we have to go to Mars because the robots will get us otherwise. Maybe that's not what he said. Anybody else? Another question, comment? Okay, back here to your left. Hi, I see you.
Okay, this may be kind of a random question, and I'm a self-proclaimed conspiracy theorist, but what are your thoughts on using AI and computers in a positive way for politics going forward? Obviously the Cambridge Analytica stuff was bad, we don't like that, but we also have a lot of issues with our politicians. Is there a good way they can poll us or gather data? Or, to use your analogy of the prison cameras, can we do that to our politicians and vote based on that? Is there some positive way we can start integrating that?

Yeah, that's a great question. Maybe we can replace the politicians with AI. That would be very cheap, and it couldn't tweet. Or maybe it would tweet, I don't know. The general answer is, first, technology can always be good or bad; that's not new. The way technology is used for manipulation, deception, fakeness, things that aren't good, that's also not new; it's just more advanced, more elaborate. So what we have to do is build in mechanisms that can prevent most of it. And here's the challenge: technology is going exponentially, into the sky. Basically, 30 doublings up the exponential scale is a billion. So to keep up with what technology can do, we're going to need pretty strong frameworks about what it's supposed to do. Can technology be used to enhance democracy? Absolutely. Information flow, electronic voting, digital money, e-government as they have in Estonia, for example. Lots and lots of really good things. The thing is, we come out of a school where technology was a tool that didn't really work so well, so we could use it in ways that were maybe not entirely right. Now it's going to work exceedingly well, which means the power level just goes up.
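(As an aside, the "30 steps up the exponential scale is a billion" figure is just the arithmetic of repeated doubling; here is a quick sketch of that calculation, my own illustration rather than anything shown in the talk:)

```python
# Each doubling multiplies capability by 2, so after n doublings
# the total growth factor is 2**n.
doublings = 30
factor = 2 ** doublings

print(factor)            # 1073741824
print(factor >= 10**9)   # True: 30 doublings already exceed a billion
```

So something improving exponentially needs only about thirty doubling periods to grow a billionfold, which is why linear-minded rules struggle to keep up.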
So companies that have this kind of technology are going to be held responsible. It's as simple as that. Think about it: if we hadn't regulated the oil companies, we'd have drilling all the way up and down the California coast. You may be for that, I don't know, but clearly that's an issue. When you have this much power, you have to have a counterweight, and that is government, people, regulation, politics, society, whatever you want to call it. Nobody likes regulation, but clearly we're going to have to figure out how to keep the balance. As far as technology is concerned, I'm not so worried; I'm an optimist here. I think 90% of what we're seeing right now is extremely positive and can solve very large problems. Imagine, for example, if we can figure out the human genome: we can solve cancer, we can solve diabetes. Can we use the same technology to build super-soldiers? Yes. So we have to figure out how to do the one but not the other. That's not necessarily new, but it is complex. In your terms, I would say we're looking at a complete reset of what marketing actually means in this context, and we've got five years to figure it out: away from building mousetraps and toward a vehicle of engagement. That's what technology enables us to do: building real engagement, not fake engagement. So I think we're pretty much out of time. The clock is ticking, but one more question, or what's the deal? Anybody? Okay, one more question.

I was relieved to hear that you feel humans aren't algorithms, because a lot of what I've been hearing and reading, from Yuval Harari and out of Silicon Valley, is that humans are algorithms. I think they've reached that conclusion because it makes the ethics around technology much easier. So my question is: in this sort of post-trust world where humans aren't algorithms, who should be responsible for establishing the ethics?
Is it the government, or who else should we look to to help establish that? And is there a role for brands in establishing those ethics?

Great question. I think brands are actually in a good position now, because many people have lost trust in government and in banks, of course, and also in technology to some degree. But that varies deeply from country to country and by age group. Generally speaking, for brands this is the number one issue: how do we generate trust and how do we keep it? Because trust isn't digital. You don't click a mouse and say, now I trust you. You trust somebody, and then you click the mouse, and your trust is confirmed, or increased, or violated. And that's the hard part, because trust is a human thing. Trust is not a download. I like to say, jokingly, happiness is not an app. The way we think about companies or people is not based on information flow. Of course, information flow can change it. But who's responsible for that? I think brands have a great opportunity to say: we are going to try to create human value, what you could call collective flourishing, not just sell as much as we can. Companies have tried: Unilever, Patagonia. I think every company is going to be like Patagonia in the future, as far as that's concerned; clearly that's a winning proposition. But on the other hand, the idea that we can do this without government is an illusion. We need government to balance the power of technology and science with the power of the people. That's what government is supposed to do. Without that, think about it for a second: what is your choice about Facebook? You can quit Facebook. You can stop advertising. Or you can say, I need Facebook because that's where my clients are. But what other choices do you have, when Facebook is like infrastructure, Google is like infrastructure?
It's like air, like breathing, right? So what choice do we have? This is the role of government: to say we have to find a framework that works for all of us. That is currently a tough story in the US, of course. In Europe we have the European Commission, and as much as you may hate them for bureaucracy, they're trying to figure that out. So, bottom line: I think exponential technological change will make the world a really, really different place in 20 years. And it could be heaven if we find the right context. I think it will be. But we have to make it so ourselves. We have to actually put the things in place: the mechanisms, the controls, and the trust. This is a huge opportunity for brands. I think it was the CEO of Walmart who said that the more things go digital, the more important trust becomes as a factor in decision-making. It's really important to keep that in mind. As we go more digital, it's not that humans matter less; it's that they matter more. And that is the hard thing to figure out: how do we actually find that balance? Thanks for the question. I think we do have to stop, right? Is that correct? Anybody? Who's the authority here? Okay, you're mission control. Finished? We're finished. Okay, so you can get my book in the break. I'll be out there signing them. Thanks very much for listening.