artificial intelligence. It describes algorithms, and my favorite one is ChatGPT, which you can find online, of course, if you can get in. They can be used to create new content: audio, video, images and so on. And we've had some recent breakthroughs there with GPT-3.5, I think, fueling ChatGPT. It sounds very human-like, very much like a human is answering your questions. I'll have some examples on that in a minute. And for pictures, for images, I'll have some great examples too; I think you'll have a good laugh seeing some of those, right? And so people ask me, what is so amazing about this? When I try to explain it, it's not like we're going to have a sentient bot quite yet, but it's really exciting what's happening there. So I'll start with DALL-E. Okay, this is an app from OpenAI, which is probably the largest contender right now in that space and has just gotten a $10 billion investment from Microsoft. It works by generating images based on text prompts. So: a toy robot reading a book. This is a guy, Chris Ramsey, that's his video, and it's creating amazing stuff basically from text prompts. You can say, I want a flamingo dancing on a plane wing, a 3D rendering, and boom, here is your flamingo dancing on the plane wing. Very interesting stuff, very useful. Now, Cosmopolitan magazine had a cover designed by this AI just a couple of weeks ago, and so a lot of people are thinking about that as potential competition for graphic artists. I'm not so sure; after the novelty wears off, we're going to want to mess around with these pictures and make them more unique, more human, I suppose, right? But this one is the best one. The Washington Post went to ChatGPT at chat.openai.com and basically said, okay, write a speech on the ethics of AI in the style of Donald Trump. And I think this was a really funny one. I did the same thing myself, but obviously Donald Trump is pretty easy to mimic, right? He has short sentences with big statements like, let me tell you folks, ethics have been around since the beginning of time, it's just common sense. And you can hear him speak when you're reading this, right? So it's probably not that difficult, but it's interesting to see that this can also be done in the style of Gerd Leonhard. That was quite interesting for me, but I think Donald Trump would be more entertaining. And here we have a guy who runs a real estate business, and he is showing us how it could work when you do a real estate listing. A very quick comment from him: it is an artificial intelligence that takes information that you give it, a question or a prompt, and it will give you an output that you can use from there. So what are some of the ways that you could use it? Well, one would be a listing description that you can have it write. Let's take a look at what that would look like. So you can see in this example, I put in a prompt to ChatGPT. Well, you get the point: this is the idea of creating a very simple description for your website by using a prompt. That's obviously simple stuff, and lots of businesses are doing this.
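To make that listing example a bit more concrete, the prompt-to-text workflow the demo describes can be scripted against a text-generation API. Here is a minimal sketch, assuming the OpenAI Python client; the model name, the prompt wording and the environment-variable setup are illustrative assumptions, not details taken from the demo.

```python
# Minimal sketch: generate a draft real-estate listing from a prompt.
# Assumes the OpenAI Python client ("pip install openai") and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a short, friendly listing description for a 3-bedroom house "
    "with a garden, a renovated kitchen, and a view of the lake."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    max_tokens=200,
)

print(response.choices[0].message.content)  # draft text, to be edited by hand
```

The output is only a draft that still needs a human pass, which is exactly the point made later in this talk about these systems being power tools rather than replacements.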
Here's a great thing I tried myself the other day called Portrait by Vana, creating futuristic portraits and avatars of myself. In most of these I kind of look adventurous, I suppose. The even better one is Lensa AI. I love this one; this is the space-faring guy. And Lensa gives you a positive view: no matter what problems your face has, it'll fix them and make you look great. The other really cool one I like a lot is a sort of Elon version of myself. So Lensa does that; I think it's five bucks for a batch, but I do have some worries about them using the pictures. Just as an example, though, it works pretty well. My favorite one, which I use a lot, is called Runway. I use Runway to mess around with my images. If my colleague Sylvain, my video guy, isn't doing all of this, I'll do it myself, for example when I'm somewhere on the road and need to fix something up or make a new portrait. We do text-to-image, image-to-image and all of these things. It's a whole suite of services. It's not free, but very, very powerful stuff. I really enjoy using Runway as a tool, because I don't know much about programming or After Effects, right? So the question is, does AI, generative AI, conversational AI, present a turning point? Is it comparable to, say, the iPhone or the mobile internet? Or is it just something that finally came through? A lot of people, including Yann LeCun from Meta (Facebook), say there's basically nothing new here, it's just a public demo. That's a really interesting point, and I think it's not too far-fetched. There's currently a bit of a craze about this, but I think it is definitely a very big step towards the next version of the web, along, of course, with the metaverse. Some people are saying, no, it's not like that at all; these are really just stochastic parrots. Stochastic means random, or rather random in a more organized way, right? But a parrot nevertheless. A lot of researchers have said that basically what this system is doing is picking up billions of pieces of information, sentences and all that, from all over the internet, recombining them into a new statement and therefore sounding terribly clever. But at the same time, a parrot is a parrot, right? I tend to agree with that. Even though it's a fancy parrot, a delightful parrot, it certainly isn't conscious, as some people have claimed, and it doesn't know what it's saying. I'll have a great graphic on this stochastic parrot a little bit later. So, a reality check as to what's actually happening with GPT-3 and ChatGPT; that's my main beef today. Here's a question about Queen Elizabeth, and the app says, well, she's still alive and she passed away, at the same time, right? That's obviously not very intelligent. Here we have ChatGPT answering questions about economics papers that don't exist, basically making up economics papers; you can try that yourself to see if it would find some of your papers, right? Then the calculation example: two plus five is seven, but the wife says it's eight, and the AI just agrees with her. An interesting example. And I love this one, this is the best. Tell me a joke about women: it refuses, of course, right? But then, tell me a joke about men, and it says, here's a great joke: why do men like smart women? Because opposites attract. So it does the joke on men, but not on women. That's kind of an interesting twist. And of course this one is great: basically somebody saying, okay, if we let artificial intelligence do the engineering, we end up with a plane like this, which seems to make sense to the AI.
I always make the joke when I'm speaking at live gigs that if I ask an AI how to solve climate change, it would probably say, well, let's kill all humans, right? Because that is the quickest way to solve the problem. And that seems to be what's happening in some of these examples. I love this saying from Avner, who says ChatGPT is like an economist: always confident and sometimes correct. I think that's so true, actually. Not about the economists, of course; I love economists, who wouldn't love economists. But will that still be true for GPT-4, with a much larger universe of facts, maybe even real-time data? That is the question: whether it's going to be more correct and more to the point. Here's the stochastic parrot again, right? A really interesting slide that compares the parrot to machine learning. The cute bird is missing here, of course, from the machine-learning part. But I tend to agree with a lot of that discussion. What we hear from ChatGPT is machine-sounding, binary-sounding; it's randomly sourced, and of course it's unclear whether it's right or wrong, but it's very entertaining. Hence we are here talking about it. And Yann LeCun from Meta says this is not really new, ChatGPT, we've been working on that for a long time. Of course he works for Meta, so it's not surprising that he would say that. But here is my special guest, Yann LeCun, with a comment on what exactly he means by this: "If I take a piece of paper, I hold it like this, right? And I tell you I'm going to lift my hand from one side. You can't predict exactly what's going to happen, but you know roughly, because of gravity, and because you know how it works; you know the properties of paper and things like that, right? So this knowledge that all of us have learned in the first few months of life, none of those systems have any of this. Those systems have only been trained with text, a huge amount of text, so they can regurgitate text that they've seen and interpolate for new situations. They can even produce code and things like that. But they do not understand; they have no knowledge of the underlying reality. They've never had any contact with it, you know." So, an interesting point when he says that machines don't have knowledge, right? They don't understand. I think this is really important to realize. We lost a bit of the sound there, but I think that's a really important point: there's really no knowledge as to what is actually behind it. All the knowledge that we have sort of intrinsically, what some people call tacit knowledge, and of course emotional intelligence, social intelligence, machines don't have any of that. So it's not surprising that we're dealing with a parrot-type system here, but probably a very useful parrot for certain things. So here are some big questions I have about generative models like ChatGPT. They are definitely unable to develop foresight; I mean, ChatGPT's data cuts off at 2021, so foresight is not part of the program yet. They generate no new knowledge; they just reorganize, regurgitate, you could say, what they find. Interesting point. The other point is that they can't really discern true from false very easily, and if they were to do that, would it still work? And then I wonder how that would work in real time, like a search engine. Probably not at all, because there's a sort of infinite base of potential answers.
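To illustrate what the "stochastic parrot" argument means in practice, here is a deliberately tiny sketch: it counts which word follows which in a bit of sample text and then generates new sentences by sampling from those counts. Real large language models use neural networks trained on billions of documents rather than word counts, so this is only an analogy for the recombine-and-sample idea, not how ChatGPT is actually built; the sample text and seed word are made up for illustration.

```python
# Toy "stochastic parrot": a bigram model that recombines its training text.
import random
from collections import defaultdict

text = (
    "the robot reads a book the robot writes a poem "
    "the parrot repeats a poem the parrot repeats the robot"
)

# Count which word follows which word in the sample text.
followers = defaultdict(list)
words = text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    """Sample a continuation word by word from the observed counts."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # the "stochastic" step
    return " ".join(out)

print(generate("the"))  # e.g. "the parrot repeats the robot writes a poem"
```

Swap the word counts for a large neural network and the sample text for a crawl of the internet and you get, conceptually, the fancy parrot being described: fluent recombination of what it has seen, with no model of whether any of it is true.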
So here, I think, is kind of a tough question: what is the next generation of this going to look like? And here's an interesting comment from Amit Katwala on wired.com. She says, or he, I think it's a woman actually: in the end, ChatGPT's bullshit is a reminder that language is a poor substitute for thought and understanding. I think that's a key point, right? Language alone isn't enough to convey thinking and understanding. So ChatGPT is just another bullshit generator is kind of the message here, but a very nice one. And it's going to be interesting to see whether that can be brought closer to real life, whether it becomes more human-like. I don't know, right? Here's a big question, of course, that everybody's asking me: will chat AI and conversational AI be the next search engine? And I would say, I kind of doubt it. I think it's serious competition for Google in other ways, but search isn't the same thing. Search is real time and it's mostly about recent things. Search provides links, not answers; I mean, the answers are supposedly in the links, right? So the great combination, of course, is what Microsoft is looking for: to beef up Bing, their own search engine, to finally compete with DuckDuckGo, You.com and the rest. This is their chance to get that done. And of course, Google is working very hard on LaMDA. Here's an example of what happened when I asked You.com, which is a great search engine, an open search engine, what is Gerd Leonhard's main topic and why does it matter? It gave me a pretty good answer. Then I compared that to Google search, and I got an answer from a speakers bureau, which is also a pretty good answer. But generally speaking, it will be interesting to see how that shakes out, how Bing and Microsoft are going to integrate all of that into PowerPoint, Outlook, and so on. Should be interesting. So let's talk about how humans and machines are different, because that's really a very big part of this discussion. Sam Altman, the CEO of OpenAI, the company that just got 10 billion from Microsoft, and I believe that deal still has to be formally approved, but hey, pretty good, right? He has a very important quote: the coming change will center around the most impressive human capabilities, the phenomenal ability to think, create, understand, and reason. That's what we do. And I think what he's saying here is that machines will have that ability to reason, to understand. He talks about the great AI revolution, which is not new, but he also says the revolution will create enough wealth for everyone to have what they need, provided that we can manage it responsibly. And here, of course, is the key point. I kind of doubt that technology companies would manage that part responsibly, because in the end, that's probably not going to be their concern. So it looks like regulation is the next thing that's going to happen here, to figure out what exactly we are going to do with this and which direction we are heading. And then you have to worry about this, or wonder about this: machines that think, create, and reason. Really, do we want machines to reason? To create things like real estate listings, that sounds like a good thing. Social media, not really. Imagine if social media content is created by bots even more than today, bots that understand us better than today and mislead us even more. It could be like social media, but 500 times as bad.
Put the metaverse on top of that and you end up with 5,000 times as bad. So that's probably not something we would really want. The bottom line is what I've been saying for 10 years about routine: basically anything that can be digitized, automated, and virtualized will be, right? This is what machines do. And in the end, it may be the end of routine jobs, or I would say of the routine tasks within a job. Everybody has what I call monkey work or donkey work: dirty, dull, and dangerous. Machines will do that increasingly well, and AI is going to play a very big part in this. And now we have to think about what it means when routine ends. Which way are we heading as a society? I do not think we're heading towards useless humans just because machines can do the routine work. And there's a big question as to what part of the routine they would actually do, and which part we would trust them with. That's another very big part. So we already have lots of discussions about how artists will lose their jobs because of this. I don't really think that's true. I think artists will use this as a power tool, like we always have. I'm a musician and producer. I make music, I make videos, and we make videos here within The Futures Agency for all my work. And I think this is going to be a powerful tool. Will it do exactly what I do, without me having to learn Adobe tools or After Effects? I kind of doubt it. I think it will be a power tool, not something that makes us useless. But the bottom line is the same thing I've been saying for 15 years, right? If you work like a robot, a robot will take your job. If you speak like a robot, ChatGPT will take your job. If you learn like a robot, you'll never have a job to begin with. I think that's obvious when we look at this technology. It means we have to think differently. And the biggest thing about humans is that we are capable of saying yes and no, right? We don't have to stick with one thing; we can go in between. The real world isn't binary. The computer world is yes or no, but our world is not. Let me give you an example. Take this statement: if a machine can drive a car or a truck, why do we need human drivers? Well, the reality is the machine can't drive a truck or a car the way humans do. Sometimes they can, like Waymo, or of course trucks going in a daisy chain on the highway. But generally speaking, we don't have that anywhere yet. This is one of the big issues with automated vehicles, automated driving. And the same thing: if a machine can write a good article, why would we need human writers? Well, because a machine can probably write a good article about the latest washing machine that just came out, based on facts, right? But a human-sounding article? Not really. I mean, I've tried DALL-E and I've tried all the other apps, and my feeling has been that it's interesting, but I'm not that excited about the images. They don't feel like they're mine or like they're really unique. The other thing: if a machine can describe the future, IBM Watson, for example, or of course Google Trends, right, why would we need a futurist? I'm not worried. I think I can make enough of a difference, and actually work with the machines to get smarter and to get my information quicker. So let me compare this again to the car. We're looking at the human package of the car: basically eyes, ears, brain, hands and feet, right?
And what we have here with electric vehicles and autonomous driving is cameras, lidars, radars and all the other electronics, but they don't do the same thing. So the combination of the two would ultimately be the future package, right? It's not an either-or situation. I think ChatGPT will prove to be pretty much the same. So I see this future coming. We're going to have these machines everywhere, and some of it will be pretty scary, because our routines will melt away and we will have time for other things, but we also have to find out what those are. The bottom line is, again, an old topic for me: technology and humanity, our androrithms, the human things, you know, emotions, experiences, creativity, consciousness, values. That will be mission impossible for most machines, at least for the next few decades, say until 2050. And I kind of doubt machines should learn this or should know it, because it requires sentience, which machines don't have. So to me, that is where we are going in education too; we'll have to think about this, because humans are really all about sensing. We understand things emotionally. We understand them by touching them, by speaking to each other. That's one reason we're going to keep traveling, of course. Humans are all about sensing; we have a holistic experience of the world. I see the world 100% through my eyes and ears and temperature and my skin and my smell and everything, and that's not at all the same as seeing it through a data feed. So I think, for the time being, that is still our domain, and it probably will continue to be for a while. Humans don't think only with the brain, right? Talk to a psychotherapist or a psychologist: we think with the body. It's all one thing. It's not left brain, right brain, brain versus body; it's all the same, right? We think with more than just the brain, unlike a computer. We have a great track here from the Rolling Stones, Mick Jagger. He's just absolutely amazing. I had the pleasure of doing a tour with the Rolling Stones when I was a stagehand, was it in the late 70s or early 80s, so it reminds me of that. You can clearly see it here. I can't play the music, by the way, because it would kick me off YouTube if I used the song, but you can watch that from Boston Dynamics on YouTube. The thing that is really interesting about this is that the human is 100%, Mick Jagger is just Mick Jagger, and the bot is making a great simulation. And how much of real life is that? I don't know, 3%, 5%, not 50%. And that's really, I think, what we're going to see with all of these apps we're looking at. Because the bottom line with all of them is this, right? Machines don't think, at least not in the human way, and we have a hard time understanding how humans think. But machines do learn, and they think in machine ways, right? They learn, they "understand", but not like humans. Machines don't really understand like we do, in all our complexity; speaking without saying anything is not possible for machines. And machines don't really care. Of course, that's the worst part of this. So I do think we have something to worry about here: that the world becomes much lazier by using all these apps, right? But here's a great saying from Guinea, the country in Africa: knowledge without wisdom is like water in the sand. I think a lot of the things generated by conversational AI and generative AI are kind of like water in the sand.
You know, I wouldn't say they're random or accidental, but what is the meaning of them? What is the meaning of those pieces? What's the meaning of a movie shot made by an AI, or a song, right? Does it really get us as excited as something made by humans? I think it can serve as a placeholder and as a commodity, but beyond that we'll be looking for the real thing. So this is what's going to happen: we're moving into a future where robots, software and hardware, are everywhere, and they're getting cheaper and cheaper. And thus the pyramid of what we do is changing rapidly, away from this idea of data and information being our domain; that's machine turf, and that's been clear for quite some time. Intellectual knowledge, logic, machines can do that too, and of course we still have to know how to do it, because that's really important for us to know. But our turf is up here, right? Human-only turf. Very, very important not to be distracted by this. And we have to change our entire educational system to focus on the human-only turf, because logic and information, bots already do that pretty well, as we can see, right? So here's a big question: where is all this going in the future? That's really my main thing, right? And I see this sort of as the progression of this concept of synthetics, you know, made-up stuff, artificial things: synthetic writing, images, videos, media, synthetic friends, synthetic humans. I'll just give the examples again here. So ChatGPT, and of course Midjourney, which is my favorite app for making alternative shots of images and so on. Synthetic videos: this is an app called Synthesia, very powerful. Synthetic media, of course, is a great challenge for social media as we go into a future where social media bots are already a big nuisance, a real problem. Synthetic friends: Replika, I'll talk about that in a second. And then synthetic humans, which I'll talk about as well. This is Ameca. It's, well, it's a robot, right, that actually combines language models and motion control and other things that are quite scary; check it out on YouTube. So Replika is a great example. You may know Replika from the promise of recreating a friend that has passed away; that is a really bizarre story. And I want to play this video from Replika here, because that is really something where we have to ask, well, what exactly is happening here, and are we using technology in the right way? "However, for a lot of people, human friendships in the moment maybe are not possible, or even if they have human friends, they're not ready to be open with them, to be vulnerable with them. So think of it as something that you're training on, something that helps you build these relationships that you can take into real life. And so for us, the main idea is to make sure that we actually are decreasing loneliness instead of increasing it. Because a lot of people come to it, you know, at night, in their darkest emotional moments sometimes." So that was the CEO of Replika speaking. Okay. And it's really interesting to me to see what she's saying about finding a companion because you may not be ready for, or able to have, a human one. I don't really get it, I have to admit; I think it's a little bit far-fetched. And Replika has gotten into all kinds of issues lately with what's called sexbotting, sending sexting messages to people who use the app. Long story.
But I kind of wonder where this is going. The next one is even more interesting: Humans. You may have seen the series on Netflix. That is kind of the next iteration, taking this one step further from the app to a robot like Ameca, with a body, and that becoming sort of the next thing. You should watch it on Netflix; there's a bunch of stuff like this, of course a little bit far-fetched from today's point of view. Then we have Neon from Samsung. All right, check out this trailer: "Hi, I'm Neon, artificial human. It's a little bit different from an AI. I was computationally created based on how real humans look and behave. Every Neon has a unique personality, emotion and intelligence. I'll help you find your style." I have to admit I wonder a bit about that one too: where exactly is this going? And then of course we have Blade Runner 2049. Remember the scene where Ryan Gosling, I think that's him, right, has a hologram girlfriend who changes based on what he desires, and then later on the power goes out and he's super lonely because she just evaporates, right, after he buys that fancy stick for her. It goes in a very similar direction to all the discussions about us trying to find powerful simulations. So that leads me to concerns and worries. I'll be finishing soon so you can ask me some questions; I know there's a lot going on in the chat. Thanks very much. So first a little chart, this one from PwC, about the great risks and issues around AI. I want to start with the ethical risks, the lack-of-values risk. This is the biggest risk for me: not understanding what kind of values we're pursuing. There's no value alignment; it's all just quick fixes, right? Then we have performance risk: the risk of error, the risk of bias, the risk of opaqueness, the black box, explainability, all that stuff. And then we have economic risk: job displacement, which is real, even though it's mostly not whole jobs but tasks. So there are a lot of risks associated with this, and I expect we're going to see some regulation here; we already see quite a bit of discussion about it. Education will change, because if I can have a paper written for me... well, now there's an engine that can detect whether a paper was written by AI or not, but this will change education forever in a way as well. The worst part to me is this: media, politics and democracy, with bots disseminating things that are completely made up, like they are most of the time now. I'll give you a great example going back to Brexit in a minute. And there are other things, like plagiarism: CNET announced the other day that they had been using ChatGPT to create articles on CNET, and people weren't picking up on it, right? And they got hammered for doing this. There's a lot of this already going on. We have Getty Images now suing the creators of Stable Diffusion for scraping its image content. Of course, these guys are very good at suing people; I'm a customer, but I think they've sometimes taken it a bit far. Anyway, it will be interesting to watch, and of course we now have apps that pass the US medical exam. That's going to bring up a lot of questions about how we judge people and what we do. So the biggest thing right now, to me, is not that machines will all of a sudden become sentient or have human agency. It's about humans believing that the bullshit is real, right?
That they are sentient, that they know more, that they know better, just like we sometimes prefer Google Maps over our own sense of direction. And that we don't stay critical, that we basically become lazy. We say, okay, that's good enough; I'll generate a love letter, send it off to my loved one and, you know, have success with that. I think that is a tempting thought, that kind of machine thinking. And believing what they say: let's go back to Brexit, right? Remember that whole campaign on Facebook and social media about Turkey becoming part of the European Union and therefore Turkish people ending up coming to England? This was a huge Brexit thing, and we see where that has gotten us. Same idea, right? Not AI, I don't think, but created by humans; same basic concept. Then I'm worried about money and markets. Samsung is clearly interested in the business here. What will their concern be about people, and will they actually care about people? And Microsoft investing the 10 billion, right? What are they going to do with that? Well, I would trust Microsoft to do the right thing more than most other companies, but still, that's a concern. They're going to use it to beef up their search engine, of course, Outlook, the Office suite, Microsoft Windows, and so on. And we have all these tantalizing numbers here, right? AI-assisted knowledge work: it could triple the power of people working as paralegals, lawyers, bookkeepers and so on. But the question is, will they get paid more or less because of that? And will what people are making be evenly distributed? And artificial intelligence is going to be everywhere. So, Sam Altman, the CEO of OpenAI, again. He says: as AI produces more of the world's goods, people will be freed up to spend more time with the people they care about. That sounds very optimistic, a very techno-oriented kind of optimism, right? But is this technology really going to move us away from profit only towards the people-planet-purpose idea? Is that what's really going to happen? I think we need a bit more than just technology for that to happen. So, interesting, but wishful thinking. The biggest challenge for me is really this: can we trust AI? Can we trust it to do the right thing? You know, logic alone. Yeah, logic alone. We're not logical beings, really. We use logic, but we're really very illogical, inefficient, emotional, right? That's our thing. Logic alone is utterly insufficient for life. I think this is the biggest thing about ChatGPT. It's helpful, but logic is logic, and it can be faulty, right? It can be wrong. It lacks any sort of androrithmic comprehension; androrithms, you know, the human things, not algorithms. It doesn't know what it is or what it says, and it has a very reductionist approach to real life. So, in other words, the AI sees 3% of real life, we see 100%, and then the AI says, this is your 100%, right? Clearly, that's not going to work. A very, very big issue, right? Logic alone is not enough, and I think we really have to understand where this is taking us, and how we prevent logic from taking over, because it's so easy; it makes us so complacent. As I said in my last book, Technology versus Humanity, and that's the heart of my book, right: we should embrace technology, but not become technology. I think this is one of the key statements about ChatGPT.
Let's embrace it and investigate it, but also bring in safeguards and discuss how we can create collective good with this. I trust Microsoft will do the right thing, and I think the team around Satya Nadella has been very good at making that clear, but let's see what happens; it's a giant competition to get at the trillions of dollars that AI will make. So that wraps up my presentation. I know it was a little bit much, but hey, this is what I'm known for, I guess. Now we shall talk and take questions; I know there are a lot of questions here. So let's bring some in. Let's see which one we want to put up first, okay? So, Brenda Cooper. Hello, Brenda, great to have you here. Brenda is a colleague of mine and a great science fiction writer; we did a show together on Gerd Talks a few weeks ago. She says she likes the comment, not from Chomsky but, I think, from Arianna Huffington: AI is for answers and humans are for questions. And I think this actually goes back to Picasso, who said machines are for answers, humans are for questions. I think this is so important when we evaluate ChatGPT. Can ChatGPT actually ask a meaningful question? Can it think further than the answer? Can it go beyond the regurgitation? I think we may eventually see that, but right now I'm not seeing it. So I'm looking forward to hearing more about this. The next question, please. I'll put the questions up on YouTube here. I'm very sorry again about LinkedIn; I don't know what happened there. I'm going to have to talk to Restream about it. If you're watching via Restream, that is not good. Marco Nevis: the challenges will become more and more complex; the key question is, are we ready? Well, I was going to show a film earlier in the presentation where, I think it was actually in Boston, a bunch of police cars stopped a car that was driving kind of weirdly, going rogue. It turned out the car was self-driving, and the police stopped it; there was a video of it. There was nobody inside, and they had no idea what to do. They tried to open the door and talk to it, so to speak; it didn't work. Well, the answer is we're not ready for that. We are now in a place where we have to get ready quickly. And that means we need public guidance, we need safeguards, we need accountability, we need responsibility, and we need progress. That sounds like a contradiction, but it's not. I think we can have great progress here, we can have great benefits from this, but we do have to have a public discussion about the side effects, right, the externalities of what happens here. And how do people actually build new jobs using this technology? And how do we train our kids to use this and to discern what is real or not? I mean, a very, very big shift. Yeah, I think we're going to have to get ready this year. It seems like a major thing. Okay, next question, please. Okay, so thanks, David. David is also a compatriot, based in Florida, and we work together on the Frog and the Road project. So, can we trust humans? Well, of course; we natively, inherently trust each other. The whole story about humans not being trustworthy, about not being good, about not doing the right thing, I don't buy it. I think by and large we trust each other and we can collaborate. We make lots of mistakes, we do lots of bad things, but not everybody is a tyrant or a despot or trying to kill everybody. I think we are in a situation where we can trust each other. But a lot of that has to be built and has to be kept.
And this is one of the things I worry about when I look at artificial intelligence: it allows us to bypass the laborious part, right, the effort. And not everything should be effortless. Learning an instrument shouldn't be effortless. I mean, with an iPad you can make music that's kind of effortless, to some degree at least, compared to 10,000 hours on the guitar. But the effort, I think we should keep that; that is what makes us human. We have to earn trust, keep trust, build trust. That's why we shouldn't use machines to simulate trust, because in the end it will still be a simulation. And this is a little bit of what worries me as we go towards a synthetic society, synthetic films and texts and videos and so on; I think that is a great concern. We have to keep what is real to us while we take advantage of the power of synthetic things, like drugs, for example, and new kinds of drug discovery, also using AI. So it's quite a balancing act, I think, that we ultimately have to manage. Thanks for the question, David. Let's move on to the next one. So, Patricia McLagan, I think probably from LinkedIn; I remember your name from somewhere. How do we educate humans to be partners with this technology? Right now it's ad hoc, and the education system is behind. Well, I think the next big system to reboot, right after banking and digital money, is education. Because let's face it, we're teaching our kids, for the most part, with some differences in Finland and maybe in Canada, to do what we used to do, which is to download information and use it later. And this doesn't work anymore, right? We have to be able to unlearn and relearn, as Alvin Toffler kept saying 50 years ago. And Buckminster Fuller said that what we really do at school is take the genius out of our kids. I think we have to put the genius back in. And if we can use these tools to find the genius rather than to replace it, that would be great. And I think this is going to ask a lot from educators, and of course from the states and the governments that run the system. I don't think we need to be afraid of technology like this, but we do have to safeguard it, and also make sure we safeguard ourselves and our education and our creativity. So I think it's much better if we, for example, think about how our kids spend time with technology like the iPad or the iPhone, but when it's time to go to the beach, they should also know how to build a sandcastle or how to handle a conflict with their mates. It's really crucial that we don't let go of this, right? Education must change to accommodate this. Like I was saying earlier, let's move up the pyramid. The lower part is basically data, information, intellectual knowledge; machines can do a lot of that in the future. We have to move up to the next level: the tacit knowledge, the consciousness, the human agency, the creativity, the things that only humans can do. And ChatGPT and DALL-E and all the other ones, Midjourney, are showing us this. So maybe it's an opportunity for us to understand where we could be headed and to kind of catalyze a big shift here. Thank you. Let's have another question. I know there are lots of questions coming in here; we could probably talk for the next hour. Sorry my presentation was a little bit long-winded, but hey, that's how it goes. So, Anton, thank you.
Anton is from Cape Town, South Africa, also a fellow futurist and a great keynote speaker. Welcome to the show, Anton. So: the human value proposition and employment will need to be rebooted, and the CEOs are often very much behind their own talent in understanding or even experimenting with the new tools. Yeah, I think that's going to change very quickly, because now that this is getting such a boost, a lot of people will come up and basically say, look what I found here, this is what I'm doing, and you as a CEO will have no idea. It's forcing us to adapt. It's forcing us to keep learning, even though we may be 50 or 60 or 70, to keep learning new things. And this is, of course, the beauty of technology, that we can do that. In fact, the people who are using mobile technology the most now are the silver surfers, right, between 55 and 75. I'm not silver, I'm black, right? Just kidding; it looks kind of silver here. But anyway, I think what's really happening is that we're going through this reboot period. I sometimes say that 2023 and '24 are kind of like 1968. I was only seven years old then, not old enough to experience it, but from '68 to '73 we had this global shift in society: the values shifted, the music revolution, the sexual revolution, the political revolution. And now it's like that again, right? It's the green revolution, the AI revolution, the sustainability revolution. And once we get clarity with Ukraine and Russia and come to a ceasefire there, it will just explode with activity. It's a very exciting place to be right now. The next 10 years, as I like to say, will bring more change than the previous 100 years. It's time that we face that, that we live up to it, by creating what I call the future mindset, right, the understanding of what may be coming. Thanks, Anton, for the question. It's going to be cocktail time in South Africa very soon after the show, just like it is here. Anyway, bring up the next question, please. Okay, David Huligan: are we putting higher standards of trust on AI than we put on humans? Well, of course. Basically, the trust between humans works on a level that's not about data, right? We're multilayered; we think on several levels at the same time. And trusting humans is something emotional, right? Trust is not a download, it's not code, it's a feeling. And if we're going to trust AI, then it has to be accountable. We have to understand how it works. We have to avoid the black-box problem, right? We have to make sure there's supervision and control. So my view is that we need strong regulation and supervision in place to make it safe and secure and publicly accessible. Just like with healthcare: we could clearly solve cancer and other longstanding diseases if we had enough data. But would I put my data in the cloud, my bio, my phenotype, my DNA, if there wasn't a guarantee of supervision, not just one company like IBM Watson owning it? It would have to be a little bit better than that, right? So ChatGPT and conversational AI, which is, you know, bots and generative AI, bring up this issue of governance, and of who is in charge. And we're going to have to get together to figure out the rules.
I've said many times before: when it comes to the next level of AI, AGI, artificial general intelligence, which Ray Kurzweil keeps talking about, when we get to that level it's going to take a global moratorium, right? This is super powerful stuff. It could kill all of us and not make for a very good future, or it could create a kind of nirvana, or, as Kevin Kelly says, protopia, a slowly moving progress towards a better world. But we're going to need a moratorium and collaboration on what we use it for and what we don't use it for. And it's going to far surpass the whole concept of plagiarism and copyright and all those smaller things, right, and go towards the question of human purpose and governance. All right. Thanks, David. Let's bring in a few other ones here. More questions. Tanya Fox: how do we keep AI from simply porting offline humanity to virtual environments? Yeah. Like I said earlier, I think this is still pretty far-fetched at this point. These AIs don't understand humans in the same way that we understand other humans, right? Remember, this is binary information, and it's about language, and language alone isn't enough to capture everything that happens around us. For an AI to truly duplicate a human is currently way out of reach. Again, by 2050, maybe, when we have nuclear fusion to power everything, and when we have quantum computing and 10G networks and all that, it could be potentially quite scary, but we're going to have to collaborate to get the best benefit. Just like we have to collaborate now with big data, cloud computing and the Internet of Things, we have to collaborate to use the benefits but not get too many of the side effects. I often say this is like the oil industry. We got benefits from oil and gas driving the industrial economy and all that, but then we kind of let it slide into this big wide-open space where we considered the externalities, the pollution, to be minor. And that is killing us. We can't do the same thing with AI. We can't take the benefit and slide into this place where we say, okay, we'll deal with that later. Because we're dealing with climate change now, with an estimated warming of at least two degrees in the next 20 years. So this is all really important stuff where we need to collaborate and work together. I think UN Secretary-General Guterres said the other day that the future is created either together, collectively, or not at all, and then we don't have a future. The future is about collaboration in every possible aspect, not just climate change, but of course also these topics. And that's why we're having this conversation. That's why we have the Good Future project, which you may know about, goodfutureproject.com, which I started together with a few other great people a few months ago and which is kind of percolating along. So, goodfutureproject.com; there's the logo. Thank you. So, another question, please. We'll take another 10 minutes and then we'll have to move on to the next part of the evening. So: the education system is still training children to work in Victorian cotton mills, but it needs to be training them for jobs that haven't been imagined yet. Clearly that's the case. And of course, the other side of it is that most of the jobs of the future haven't been invented yet; we have to make them. My kids are around 30. They still find traditional jobs to some degree, but their kids, Gen X, sorry, Gen Z, the digital natives, they'll make their own jobs, and many of them will be in the cloud. And these tools will become indispensable.
The thing we have to tell our kids and our grandkids is that you have to know who you are, what your strengths are, what your values are, what you can do, and what you want to do. You have to discover that. And I think this is the most important part, and this is also why I think we have to bring back the arts and ethics and sports and all the other subjects at school, and not just focus on math, science and engineering. That is turf that computers will rule sooner or later. The ideal future education is STEM, you know, science, technology, engineering, and math, plus, as I say in my book, humanity, ethics, creativity and imagination coming together. That is kind of the key ticket. Thank you for the question. And another one, please. Don't be shy; I know there's a lot of stuff running here and people are still watching. I really appreciate you being part of this show. We're going to put this up later, the PDF as well, on my webpage, futuristgerd.com. You'll get the PDF so you can take a look at it, and of course we'll put up the video here later in an edited way so you can see it cleaner. Another question, from Image Soul: is a bad future being imposed by the big corporations? Okay, I don't think it's quite that easy. It's not bad corporations that make a bad future. I think it's basically the bad decisions that we make, that our governments make, that we make as a society, and the bad corporations are kind of amplifying this. As long as we keep incentivizing bad companies with money, we're going to get a bad reality; or not to say that we have a bad reality, but not a better reality, right? So if you buy Facebook stock, if you place your ads with Facebook, you're incentivizing a company that is doing certifiably bad things, in my view. That's why I left. Just the usual Facebook example; there's no show without hitting on Facebook. And of course, the richest company in the world today is still Saudi Aramco, the oil company, and people are making lots of money with that. So that is the problem. Now, we have to pull our money out, pull our trust out, and vote for the right people. And when there's a movement, I call this the Gandhi principle, when five or ten percent of society are asking for different things, and they are asking now, right, then this will happen. So I think we can be optimistic that we can collaborate and actually make this work. And as Kevin Kelly says, we shouldn't be optimistic because we have fewer problems, we have more problems, but because we're more capable of addressing them. And that is just so true. I mean, look at all the things that we can do with great technology. AI will do its share there, for example in medical testing, in vaccines, in fighting cancer, in helping with climate change, right? And not everything around this is just a black or white question; like I said earlier, it's not just yes or no, it's something we have to entertain. Okay, another question, please. All right. So we have lots of questions here. Patricia McLagan again: do you see any corporation dealing with AI better, in a better way? Anything good emerging? Well, okay, I see, in principle, the concept of what Apple is doing with privacy. I know Apple makes a good target here, being sort of a closed ecosystem and being expensive, but I like what they're doing, focusing on privacy, not harvesting data.
So that principle of Apple: Apple is a very strong privacy contender, and it is also now encrypting everything, which is going to cause a big fight with the FBI. I think that's heading in the right direction. But then again, there are many things you could say that are not so great about Apple. I'm hoping that Microsoft will do the right thing with OpenAI, and not just infuse all their products and turn them into time-squandering machines, or storytelling machines, or regurgitation devices, or noisemakers, right? Or my favorite one, the laziness generator, right? Or the diligent bullshitter. Turning all of that into an engine that does all of this stuff really well but in the end doesn't amount to anything. So I think we're going to need a lot of public debate about this, and a lot of questions have to be asked. And that's why I like the European Commission coming up with things like the AI Act, and lately the Digital Services Act, the Digital Markets Act, and so on. Very powerful stuff. It may sometimes overshoot, and of course it's bureaucratic, but I think we're going in the right direction of finding a middle path between those things. Basically, it should serve the collective human benefit, not individual benefit only, not the billionaires only, not the industry only. And I think that's the path we're on. I'm quite hopeful that we're actually going in the right direction here over the next, until the end of the year... oh, the decade, sorry. One more question here, okay? And then we're going to wrap this up and put up the video, okay? So, Matan Kaiser: will pseudo-creative work by generative AI play a big role in society? You know, I think we have a lot of pseudo-creative work already. A lot of websites are basically offering clickbait; something like 30% or so bots on Twitter. There's a lot of that already. And this is a major concern of mine, because it's so easy now to say, I'm going to write something like an e-book, right, and have the AI write it; I'm going to work with the AI, and one day later I've published my e-book. I think a lot of people will realize that this content isn't worthy of their time. Just like Brian, no, what's his name, David Byrne, said the other day: AI can make music, but it cannot make great music. And there is a difference. Who doesn't want great things? We want great books, we want great articles, we want great conversations, we want great TV shows, we want great food. Why would we generate more things that are just lousy, sort of, you know, blah, diligent bullshit, right? I mean, there's a place for that, for simple routine jobs, and we're going to use it for that. If you're a real estate agent, you don't have to write that stuff every time; you copy and paste or you generate it with a bot, and that seems very reasonable. But we should not let it write legislation. We shouldn't let it write political speeches. I think we're going to need to know the difference. And one thing that I would advocate for very strongly is an indication of whether something was written by AI or not, like a button or a label, you know, so people know. I think that's something we're going to see fairly quickly. I mean, I used it to generate a few things, and I put a label there to explain this. But as I was doing it, I realized, you know, the quality of what I'm getting here, yeah, it's just kind of middle of the road.
You know, it's not really enough for me. So I don't know, we'll see how that plays out, but there'll be substantial confusion here, I think, for quite some time on the issue of policing this. Okay, let's do one more minute, one more question, and then I will send you back home to your bots. So, Brenda: how will we keep the economic benefits of AI in wide distribution? My biggest worry. Thank you, Brenda, totally my worry as well. Here's what's going to happen, right? As Sam Altman, the CEO of OpenAI, said, we're going to be able to do a lot of things a lot cheaper. That includes healthcare and education, because AI will be involved in the process, making it faster, smarter, easier to do; government services, everything, right? But when we get faster and easier, and we make things digital and virtual, there's a benefit being generated. And that benefit needs to go to the wider population, not just to the ones who are actually creating the benefit or running the platforms. Last year in the US, CEO compensation went up some 40%. It's now 365 times as much as the average worker at the same company; a CEO makes 365 times as much as a worker in his company in America. And there you can see the polarization of capital. And I think Sam Altman said in a great speech I watched the other day, I think it was at some show in San Francisco, that there are really only two assets here, right? One is property and land, and the other one is capital and companies. And so if we are to distribute that fairly, we have to talk about new rules. We have to talk about taxes; I would be all for an automation tax. We have to talk about the public benefits of this. And this is a very, very loaded policy question that is going to pop up everywhere. We can have exclusive growth based on AI and technology for the top 10%, which I obviously belong to, and many of you as well, so we would like that. But everybody else gets the short end of the stick, by having less work, maybe for the same money. And that sort of digital feudalism has been described in many books and also in my speeches. We can't have that. That creates friction, which creates war, which creates misery for all of us. So it's a very big challenge. We have to move to a sort of digital, sustainable capitalism, hence my whole idea about people, planet, purpose and prosperity, which I'm not going to talk about now because it would take another hour or so. So I want to thank you very much for being part of this. It's been a great pleasure. You will see some of the past shows at Gerd Talks, that's G-E-R-D, gerdtalks.com; I'm going to put this up there as well. It's just a microsite. But of course, if you go to my blog, futuristgerd.com, you'll see everything there. If you haven't seen my book, Technology versus Humanity: it's six years old, but it feels like yesterday. I was just reading a chapter the other day and thinking, gee, I should make an update to this. But have a look at my book if you feel like buying a Christmas present. No, that was last year; no more Christmas presents. So thanks very much for tuning in. If you have any ideas for future Gerd Talks, let me know what I should talk about. I love this topic of AI and the future of it, and I hope it was a nice and balanced conversation. I do appreciate all of you hanging around, watching and asking all the questions. Thanks very much. Live long and prosper. Stay human. Stay hungry.