Greetings and namaste, everybody. I'm Gerd Leonhard, futurist, based in Zurich, Switzerland. I was in New Delhi, India last week for an amazing event that the Times of India was putting on with the Times Network: the India Digital Fest. I was invited as a keynote speaker along with my good friend Calum Chace, the Singularity science-fiction writer, and we had a great time there. I gave a talk on ChatGPT, future scenarios, and the future of AI. I think it was a great talk, but unfortunately the recording wasn't very good, and we had various issues in the live show. As you can see, it was a super-wide format, which was great; we call this the Futures Show format. So I figured I should record it again, in a sort of offline green-screen version, as you can see right now, so that we can discuss this really important topic. Also because the Future of Life Institute's open letter came out just the day after this event, I think on March 29th, calling for a pause on AI and a reflection on a potential moratorium on AGI, which caused lots and lots of discussion. I made another video on this; I'll link it up here so you can see it. Because of that context, I figured it would be good to lay out again why I think we really need to think about what we want with AI: not so much what we can have, but what we want. I position myself, and think of myself, as a futurist: somebody who's keenly interested in technology and all these things, of course. But I'm also a humanist. I talk a lot about what it means to be human. Not that I'm a total expert in this, but clearly imagination, mystery, empathy, consciousness, spirituality: those are things that technology cannot do. And there is a very, very big difference between those two things. One is the technology view of the world, which is Silicon Valley, China, and to some degree now India wants to look in that direction as well. But here's the humanist point of view.
What do we do as humans? What makes us human? What is the difference between humans and machines? My view is that machines can handle roughly three, five, maybe ten percent of real life at this very moment. That will, of course, increase. But it's really important to protect what makes us human, especially in an age where AI is potentially limitless and generally intelligent. That is really the key question, in my view. The question is not about what's happening right at this moment, but about what could happen when artificial general intelligence explodes and becomes this re-iteration of itself, which I'll talk about a little later. I think it's a key point. Right now we are experiencing a Sputnik moment: a moment like Russia putting the Sputnik satellite into orbit, where everybody says, oh my god, it's out there, it's possible, and now everybody wants to go to the moon. Bing, Microsoft, of course Google, Baidu: everybody wants to reach the moon with AI, especially generative AI and large language models, LLMs, which have been around for a long time. But OpenAI brought it out into the common universe of conversation, and that's both good and bad. I think the key question, when we think about AI in this context, is: what do we believe in? Do we believe that technology is our ultimate destination? That is to say, in the end we'll be merging with technology: a singularity, transhumanism. Merging into a world where AI is basically moving inside of us and vice versa, and we can't exist without it. Where, in order to be any good or to survive, we need to actually become like AI and integrate it into our own body and our own being. I believe the opposite. I really believe it's so important that we protect our humanity here. And in the end, clearly, it's about this: we really need to think about collaboration on this topic.
Looking at the Future of Life letter, one of the initiatives is to say: well, if we can't agree that we're going to pause, wait, and develop standards on this, then we need to collaborate on a moratorium. Comparing that to nuclear weapons is, of course, a bit of a stretch, because nuclear weapons are very, very hard to roll out, maintain, and actually use, and AI will not be. So, very big question: how will we collaborate, and what kind of global body will we use for this? There are lots of open questions on the governance issue. I made a film last year about the good future; you can watch it at TheGoodFutureFilm.com. That film talks a lot about the same issues. How do we define a good future? What does it actually mean? What is good and what is not good? And who's going to help us decide what a good future is? I think it's important that we start to think about how we orchestrate this when it's about general intelligence. How do we deal with automation, the threat to work, the changes to education, our societal glue, everything that keeps us together and locks us together as humans? As Buckminster Fuller, a famous futurist and one of my idols, said: we are called to be architects of the future, not its victims. If we're going to structure the future toward what is best for us, we're going to have to be much more proactive on artificial intelligence and its impact from the beginning, from IA to AI all the way to AGI. And we have to think about how we map out our future and how we will actually be able to control it. I don't agree with the many people who say it can't be controlled. I think that is just not true, and it's a very bad idea. Every technology we invent can be, and should be, controlled to the benefit of humanity. Otherwise, what's the point? Why invent something that's going to be larger than our ability to control it?
And for many people there's this idea, looking at the future, especially now in Europe: roughly 70% of kids and young adults, 20 to 40, say the future will be bad. So how do we create a better sense of the future? How do we rebrand the future as not bad? I think that's one of the key questions, because as we come into this morphing of humans and machines, we have to think about how we can keep it good for us, and how we can use technology as a tool but not as a purpose. That's one of the key points I made in my speech. We should not use artificial intelligence as a purpose of life, or as a religion, or as a sort of ideology. It is a tool just like any other technology, and the tool shouldn't become the purpose. I'm worried that, as we look at the current ChatGPT and generative AI, it becomes a purpose that's dished up by those that have invented it, and the tech companies are going to gain even more power. I think there's something inherently wrong with that, in my view. This is a tool just like any other tool, just like a hammer; a very fancy hammer. So we have four exponential technology waves happening around us, and I talked about this quite a bit at the conference. Information technology, of course; climate and energy technology, which is by far the biggest change in our society; biotechnology, meaning synthetic biology, 3D printing, cultured meat, vertical farming; and now the leap that has just occurred in AI technology. All four of those are coming together, and all of them are going to fundamentally change our society, every business, all of our work. I think it could be 98% good if we are successful in regulating it, creating new social contracts, and having the wisdom to deal with it. And of course, we should not use those four big technology transformations just to monetize more, to make more money and generate more profit and growth.
It has to go toward a global agenda of people, planet, purpose, and prosperity, not just a global agenda of making more money for a select 10%. It's like social media: it used to be something that's good for everybody, and now it's good primarily for those that run it, not those that use it. Something like $150 million in profit is made every single day just by Facebook. Where is that money going? Is it going to the creators? Do we have a digital dividend? No, we have new monopolies. Big story; I won't get into further details here. This is the question: a handshake between humans and machines. How do we make that equal? How do we orchestrate it? How does the European Commission orchestrate it here in Europe when we think about the AI Act and, of course, the privacy rules and the Digital Markets Act and all these things, balancing safety, security, ethics, and control? We need to make sure we have control, we maintain trust, and we keep the human in the loop. And that is not an illusion, as many people have said about my support of the Future of Life Institute's proposition of pausing AI. It's not an illusion that we can do this, and it's not like these things aren't important. They are a lot more important than financial growth. To argue that we're going to have financial growth, and that is the only thing we look for, is kind of ridiculous in my view. That's like saying capitalism and profit and growth beat everything else as an objective, which clearly they do not. We have seen it in the striving of the millennials to come and change the world with purpose. I think this is the key word: purpose. What is the purpose of all of these things? The purpose of our lives, of course, is to reach happiness, self-realization, to not die, to live well, to have kids, to have rights. That is the good future.
And only part of that has to do with technology and being able to react faster or work more efficiently and all of these things. I think this kind of obsession with technological progress makes us vulnerable to overthinking the possible and being led astray into a rabbit hole, in many ways as social media has become. I think it's a very good comparison. In fact, it could be heaven or it could be hell. It could be heaven if we can use it mostly for good things, and dial down the bad things and control them. It could be hell if we are just led into a world where AI is writing our emails, AI is giving our answers, AI is writing our newspaper articles and running our news shows with virtual avatars as news anchors, and on and on. I don't think that is a very worthy future. There are quite a few things I don't want to forget; they're important. On the negative side, we may also have unpredictable bias and errors. Basically, large language models are error machines. They are hallucination boxes. They're super-fancy parrots, and they could automate the human narrative. Imagine a technology that describes everything, that makes our pictures, that writes our blog posts, all these things. We'd be stuck with something that regurgitates. That doesn't strike me as a very good future. In fact, it would create an alternate reality, a simulation, and it would lead to a sort of laziness effect that de-skills us, and to reductionism and abdication: all the things I described in my book Technology vs. Humanity six years ago. So, on the negative side, all these possibilities; and how do we deal with them? Who's responsible? Who do we go to? Who can we rely on? And of course, the flip side is the solid efficiency and productivity gains from offloading commodity work: basically the dull, dirty, and dangerous tasks. If machines can do that, if we can accelerate basic human knowledge work, that could be amazing.
My work is accelerated by ChatGPT. I love it. I think it's great. It's not replacing my humanity; it's helping my discovery. That's good purpose and good use. And I think we have to see if we can find a compromise between those two different things. If it goes the way of social media, then we get the effect shown in this research here from the Atlantic Council: 53% of people think it's probably true that by 2023 social media will be a net negative. That's a scary number. So are we going to follow that trajectory, or will we ask how AI can be a net positive, not mostly negative? We can't just say that's going to be handled by the technology itself, because it won't, and it hasn't. It needs a framework. It needs a context, a social context. It needs wisdom. And how do we get that if we don't steer the conversation about what a good future would entail and how we can use AI to actually deliver it to us? Just to define AI, which I think is also an important step here: the CEO of DeepMind, now owned by Google, says AI is computer systems that turn information and data into knowledge. That is a great definition, and it should scare us, because knowledge is kind of what we are all about. But we're not talking about human knowledge, which entails social knowledge, intellectual and kinesthetic knowledge, and all the different types of consciousness and spiritual knowledge, all that stuff that makes us human. Machines don't have that. It's basically about logical knowledge: being able to drive a car if no other car is there, looking at analytical data from healthcare. That's the binary knowledge machines have. And this is primarily about deep learning and machine learning, which is, of course, everything behind the large language model and GPT, the Generative Pre-trained Transformer. The transformer architecture was actually invented by Google and then open-sourced.
I think they're utterly insufficient to actually become a big deal in terms of providing true and honest answers. They're basically a little bit better than a search engine, or at least different from a search engine, but we may get really confused using them. I think this is the real danger: that we think of them too much as having human abilities. So, this model manipulates and generates language; it creates synthetic, human-like text and images. Very powerful. I've used it a lot. I think it's really interesting. It does help me to rewrite my book; I'll have some examples of that a little bit later. OpenAI has a great trailer on YouTube. I'm not going to play the whole thing here because you've probably seen it. GPT-4 is a breakthrough in problem-solving capabilities. For example, you can ask it how you would... yeah, this is, of course, a daily occurrence, having to clean up my tank with the piranhas. It's kind of tongue-in-cheek from those guys, I suppose. You can find that trailer on the OpenAI channel on YouTube. It's interesting to watch, because in the end what's really happening is that these machines are bots that do autocomplete, and they do it in such a way that we don't think of them as bots anymore. They're creating something that looks and sounds very human, and that is confusing to us, because we start to think of them as human. Marques Brownlee: what does a quick brown fox do? The quick brown fox jumps over the lazy dog, the obvious answer. That's what it does. It does it very well and very fast, but it has no meaning, context, consciousness, or agency associated with that whatsoever. That still doesn't mean it's useless. I think it could be amazing for looking up medical information if I had a good database to work off. It could be really, really helpful. The joke about the stochastic parrot is getting old, but it's still very valuable. It's not a cute bird of a parrot, but everything else is kind of the same. Well, that would be a great simplification.
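To make the "fancy autocomplete" point concrete, here is a deliberately tiny sketch of my own (not how GPT actually works at scale): a bigram counter that always emits the statistically most likely next word. Real LLMs use transformers with billions of parameters, but the core move, predicting the next token from patterns in past text with no understanding attached, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in some text,
# then always emit the most frequent continuation. No meaning, no
# context beyond the single previous word; pure statistical parroting.

corpus = "the quick brown fox jumps over the lazy dog . the quick brown fox runs ."

def train_bigrams(text):
    """Build a table mapping each word to a Counter of its successors."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def autocomplete(follows, prompt, n=6):
    """Greedily extend the prompt by n words, picking the most likely next word each time."""
    out = prompt.split()
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # never seen this word: the parrot has nothing to say
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams(corpus)
print(autocomplete(model, "the quick"))  # regurgitates the training text
```

The point of the sketch is the failure mode, not the success: the model fluently reproduces what it has seen, and given long enough it loops back and repeats itself, because there is no idea behind the words, only frequency counts.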
But still, I looked at it and said: okay, let me see what it means for me as I'm trying to talk to an Indian audience. So I looked up a definition: what's so great about the Digital India initiative? You can see it scroll here; it's probably too small for you to read, so I'll skip that part. It gave me an obvious answer. Then I asked: what is the role of AI in the future of India? That answer was really quite thoughtful. Healthcare; agriculture with AI-driven supply-chain solutions; infrastructure and urban planning; education, which I don't agree with; manufacturing; job creation and skill development. And then it gave me a great answer on whether I am the right guy to speak to Indians about this topic, and it was very conclusive. I love that. And I love in general that it flatters me a little, makes me look great; compare the two of us here. Anyway, I use it for translation like this, translating examples very quickly. It does a really good job at that, as long as the text isn't very long. I use it for other things too, and this is a good one: this whole clip is made by AI. The text is by OpenAI, the character design by Midjourney, the animation by D-ID, and the voice by ElevenLabs. It's really kind of interesting that we can do that today. This, for me, is my best application: it's called Sudowrite, and I use it a lot. I'm trying to rewrite my book Technology vs. Humanity: rephrasing, making it shorter, making it more descriptive. This engine does it really well. I just paste in the text, and it gives me suggestions to fix up my Germanized English and make it snappier or more interesting. It's a great tool, but it's not a purpose. I still have to find the purpose of doing this myself. But yeah, it's a great tool; I love using it. There are also things like this Runway clip, which makes me look like a bot, which I really appreciate.
And of course this: these Disney animators are quite awestruck by this conversation. Yes, voice control is coming. We're going to speak to the AI. Right now we're typing, but imagine if we speak to it; then we are truly at the science-fiction-becomes-science-fact level, where an AI is capable of completely answering every phone call, so to speak, for us or with us. Imagine what will happen when three or four billion people are able to speak to it and get a spoken answer in their favorite voice. That's scary and worrisome, because we may be on our way to creating artificial humans, humanoids. This is Ameca. That's interesting, because imagine that combined with ChatGPT functionality. And this is Neon, the "artificial human" by Samsung; they actually called it that. It's supposed to be used in shopping malls and such. Well, good luck with that. If that's going to be our future, then I'm not entirely sure this is really what we want to see. We can't just argue that AI is inevitable because it makes lots of money and there are a lot of powerful companies behind it. I think that's just not a good idea. If we give in to that as the primary rule, then we follow the rule of the biggest power, not the biggest need of humanity. We have to have a more realistic view, and maybe it's idealistic, I'll grant you that, to say that humans have the ability to actually make that happen. Yes, I think we do; read a book like Rutger Bregman's Humankind on how humans are capable of doing that, how we've always been capable, and how we've proven it. The view that humans are not capable of anything good apart from making more money or grabbing more power is faulty, I think. I don't believe it, but I'm ready for a discussion with those that do. So, as we go into that future: Sam Altman had a great quote that I used in the show in Delhi, which basically says that the once uniquely human ability to think, create, understand, and reason is becoming possible for machines.
I don't think that's really entirely true. I think that is a vision we may have. It's the AI revolution: yes, we may create enough wealth for everyone to have what they need. Well, that is typical Silicon Valley thinking, in my view. Technology doesn't do that by itself. It makes better tools, and if you have more power, you can use the tools better. If we want that to be the case for everyone, we're going to need a little bit of policy. "If we as a society manage it responsibly": that was the last sentence of his post on this topic, and I think this is the key. We're not going to manage it responsibly if it's about money, if it's about profit and growth and power, whether it's China versus Europe versus India versus the US. If it's about to be militarized into an arms race, that is not responsible. And that is what the open letter from the Future of Life Institute is asking about. That's why I think it's so important, and why I really think we need to get behind this, on the same agenda, and start a global discussion. I would love to see an event that focuses on this debate of how we're going to solve this dilemma: purely driven by money, profit, growth, and greed (and of course we need some of those things), or driven by people, planet, purpose, and prosperity, which is also a growth agenda, but a slightly different one? So, a machine that can actually picture reality, that can become capable of thinking, creating, understanding, and reasoning, and can understand reality: I think that is a scary thought. I don't think a generally intelligent machine would look very kindly on us. It's kind of like when I walk into the forest here and step on a few ants: I don't notice what I'm killing, but that's how it is for them. And maybe that's how an AI or AGI would think about us. That makes me shudder for a moment.
So I think it's really important that we think about what kind of lens we will have when we see the world. Will it be the lens of a machine, the machine telling us how to understand the world? I think that's also a very bad idea. A machine creating most text, images, videos, and people, as some have put forth, is not a good idea, because who will be in charge of this machine, and who will be accountable? Is it Microsoft? Is it Google? Is it Baidu? Is it OpenAI? Well, yes, they're trying, but they're accountable to their shareholders, to the stock market. That's not a good objective. We need something larger than that. If most content is synthetic, if you didn't know it was me and I just had an AI do my talk, how would you still know me? How will we still know reality? That is my biggest concern. We won't know reality, because we'll have all this synthetic stuff, and most of it is fake, or made up, or inaccurate, or put together with a backdoor purpose. That is something I deeply worry about when it comes to democracy. Also, and I made this point in India and make the same point here in Europe: most of the content currently in the training data, in the, what is it, 580 billion parameters of ChatGPT and GPT-4, is US content. It certainly isn't Indian content, and it isn't in any of the 386 languages of India, or, for that matter, in any of the languages of Europe in a deeper way. Most of that content is limited to what's on the internet, which to a large degree, with the exception of China of course, has been US-based or English content; but that is enlarging, and it will learn those things. It's something to think about: all the bias and all the information in that material is just in there, unfiltered. I always say that the way machines see the world ignores 90% of real life. They see the world through data, through a data feed, a binary feed, zeros and ones. Humans are not like this. Humans are all-sensing.
In all of our conversations, whether it's here online or there in meatspace, in real life, we see with our eyes, with our skin, with our heart, with our ears: we are all-sensing. And that is 100% of reality. So that is something I am concerned about: the scenario where AI tries to become all-sensing, too. I think it's doubtful that this is a good idea. A holistic view would entail things that AI knows nothing about: our spirituality, our thinking, our emotions, our feelings for each other, all those parts of the real world that machines know nothing about. I brought an example; I think this is a great trailer, and you can check it out on YouTube because I can't play the actual song. It's, of course, Start Me Up by the Rolling Stones. Most amazing. Mick Jagger is just amazing. And what is actually interesting in the comparison is not the machine; it is Mick Jagger. It is the humanity of what he does. The machine is just one of those things. And learning is not just memorizing. If we take a bottom line from this: learning is not memorizing; it's not downloading data. Intelligence is not only data processing. Humans don't think with just the brain; we think with the body. Speaking is not the same as thinking, and large language models are about speaking, about words, and that's not all of reality. Humans aren't binary. We don't think in zeros and ones, yes, no, yes, no, if-this-then-that; we are a lot more complicated. Real life transcends data. Logic alone is not enough. It is so crucial to understand that logic alone will not get us there; it will only get us part of the way. And logic alone is also very dangerous, because it can lead us to places where logic rules everything. It has been said many times that knowledge without wisdom is like water in the sand. I think that's a Guinean proverb.
It's really important, because I think wisdom without spirituality is also a dangerous thing. All of these things together: human life is much more complex than what these machines would have us believe. I spoke about this quite a bit at the conference, and in all the conversations around it, daily, at the India Digital Fest: how we compare to machines. You are infinitely more than this kind of super-brain that may memorize Wikipedia in two minutes and memorize all the data. The world isn't just data. There is social, intellectual, and kinesthetic intelligence; about eight different intelligences, according to Gardner. And Indian philosophy, of course, is a big part of this; I'll get back to it in a second. Arthur C. Clarke said: let me remind you that information is not knowledge and knowledge is not wisdom. Each grows out of the other, and we need them all. Back to Indian philosophy: the four aspects of mind in yogic philosophy, which it was very interesting to see the uptake on when I was speaking in India. The Manas; the Chitta, which is the storehouse of memory (that's kind of like the machine, right?); the Ahamkara, the sense of self; and the Buddhi. All of these things, which I don't know very much about but am reading about right now, are a big part of that intelligence. So at this point I brought in a special guest, somebody I hadn't really taken note of before, but I was told by many people it would be good to bring in Sadhguru, who is very much revered in India and around the world, for a comment on this: many of you will be out of your occupation unless you do something that a damn machine cannot do. All of you should gear yourselves up for this now. You must be able to do something beyond your intellect. Basically he talks about the same thing: that routine is something machines can do.
Machines can do this job of bringing in routine information, whether it's about agriculture, or financial advice, or filing-clerk work, or any of those things. And routine jobs like building a house with a 3D printer: machines can definitely do that. Machines can figure out how that all comes together very easily. Machines can drive. This is Zoox here, Zoox with an X; in San Francisco the street is blocked off for it. And here, automation of unloading a truck: this is Amazon. And here, of course, is a website being programmed with ChatGPT. I think this is a demo from OpenAI again, one of the OpenAI guys showing how you can build a web page using the app. It's my joke website. It's not a great website, but it's programming, right? So, the bottom line of this, and attention back to Sadhguru: if you work like a robot, a robot will take your job. That is kind of what we're looking at. In large economies like India, lots of people work in routine commodity jobs: 20 million people around the call-center economy, and 90% of that can and will be automated. We've got to think about that: what does it mean for people when that happens, and which way do we go? And then, of course: if you learn like a robot, you will never have a job; or you'll end up working for the robot. So here we see clearly what's happening, with all the details on this list: data-entry clerks, customer-service representatives, bank tellers, retail cashiers, translators, all these jobs on the list. And the numbers are pretty astounding. Machines and robots will be everywhere in the future. We have to learn what that means, and we have to prepare for it. The future isn't about prediction, as I like to say; it's about being better prepared.
And so I often use this kind of Maslow pyramid, where it's quite easy to see what's happening right now: machines are learning the lower part of the pyramid, data, information, logic, intellectual knowledge. Machines can do that, and they're developing it; by 2030 that will be in full swing. We have to move up the pyramid toward deeper knowledge: tacit knowledge, understanding, wisdom, purpose. The human-only things. That's our job. That's where we're going to go. And this chart here on the side, from Statista, shows quite clearly how India is high on that list, with China even higher, followed by the US, Japan, Mexico, and Germany. We're going to have to really figure out how to create a new job economy based on the fact that routines are being automated. All these things we see here, human agency, consciousness, imagination, intuition: that is really what distinguishes us from machines, and that is what we have to study. Not just STEM, science, technology, engineering, mathematics, but also humanity, ethics, creativity, imagination. That's how our education has to change, to actually bring that to fruition and make sure we can have a good future all together in this economy. Technology doesn't have ethics; that's been clear for a long time, but it's very clear now. And if technology doesn't have ethics, and AI obviously can't have ethics in the sense of a human having ethics and values and meaning and understanding, then how do we govern it? How do we make sure it's being used ethically? That is a huge global question. I think Europe and India and Brazil would be well poised to lead that conversation, because it's not about technological leadership; this is about framework leadership, and Europe has great initiatives on AI and on digital markets which protect humanity. And I'm not of the opinion that we should stop protecting humanity so we can become machines. I think that's a ridiculous idea.
To quote a famous Supreme Court justice here: ethics is knowing the difference between what you have a right to do and what is the right thing to do. Of course, I shouldn't be the one saying what the right thing to do is; that's not going to be up to one person. But at a bottom-line level, I think we can say what the right thing to do with technology is: if it doesn't result in human flourishing, we should think about it again. We should make sure it does, and that it's not just an economic variable we think about. And when we think about AI, or AGI: clearly, if we don't think about ethics and values, this may be the last thing we ever do. So, and I've pledged this many times: we need a Humanity Futures Council. I used to call it the Digital Ethics Council, and I'm going to lay this out more clearly in another presentation, but we need to think about it. A global council of people who understand what the issues are and collaborate to provide recommendations. Because now we're moving into a future where we're all of a sudden realizing that this idea of bringing it all together could be amazing, or it could be really terrible. Assisted intelligence, automation, augmented intelligence, and then a kind of superintelligence: we're going to need to collaborate on this at a global level. And I think it's not an illusion that we can. We should make an effort and not say, well, it's all too late. It's not too late, and it certainly isn't too late for us to control technology. We must use the incredible power of technology for the collective, holistic, and inclusive benefit of society, and minimize its potential harms. That is a dual job of government and of business. We can't just ignore the potential harms, like we did in the oil industry. Otherwise we end up with an AI arms race, something we would never want, because that's probably one we can't survive, unless we collaborate and figure out how to create mutual benefits.
We may end up in a world like the one social media gave us, which went from magic to manic to toxic, poisoning our society and making it dysfunctional, ripping us apart, disconnecting us, creating canyons of understanding between us. So I said at the event that, as a digital leader, India must look beyond short-term business gains, not just growth and power; when the foundations of society are falling apart, all that stuff doesn't matter. And this is also why I believe, and it's probably true for most countries, that we must look beyond the old definitions of capitalism, communism, socialism. Those are all useless. The only question is: is our policy and our economic understanding fit for the future, for the good future of humanity? That is the ultimate question, not which camp we used to belong to. So I think we should have a technocratic oath that technology companies pledge: I hereby pledge to place humanity over technology in every decision. That's something we need from the tech companies, and from every company associated with technology: the technocratic oath. I wrapped up the show in India with: embrace technology, but don't become it. So I hope this was a good way to clarify what I said back in Delhi, and I hope you enjoyed the show. There were some technical issues with doing a widescreen thing like this, but anyway: give me some feedback, have a look at my book, and check me out on YouTube. Thanks very much for tuning in.