So maybe I will just talk until we figure this out. So here's the key point of technology versus humanity and artificial intelligence. If you look at the newspapers today, every day you see headlines about how machines will replace people: lawyers, doctors, drivers, cooks, call centers. And in some ways I think it's true, because computers are learning routines. I'd like to say that basically anything that can be looked at as a routine, a computer can learn. Because here's the big thing, and if you are in technology you probably know this: what we're seeing around us is that machines are starting to learn. That's a very big difference from machines that are merely programmed. So in this world we're looking at a future that is going to be quite... oh, that's fantastic, I think I recognize myself there. So let's start here; I'll pick up on the thought about the machines in a second. This is my job: I observe. I don't predict. Predicting the future is very difficult. There were some really great people, Alvin Toffler, Arthur C. Clarke, who could do this. But the future comes so fast now that the speed is mind-boggling. And there's no place like India where you can see the speed going like this, just mind-boggling. So that's my job, and sometimes I like to say, quoting Einstein, that imagination is more important than knowledge in this future. Knowledge is something we need to have, of course, but now machines are getting to have knowledge, knowledge in terms of data. IBM Watson can read 1.2 million books a minute. 1.2 million books. IBM Watson reads the 4,250 new oncology research reports that come out every week. There is not a single doctor treating cancer who has time to read even one of those research reports. Does that make IBM Watson a doctor? Hardly. It gives it information. And if you stake your future on information, you're in deep trouble.
Information is good to have, but in five years, maybe seven or eight, we will be able to speak to any device and it will give us any information. The other day I was at the office of a big AI company and I asked her, the computer (it's always a woman, for some reason): what is the future of Europe? And the machine gave me a ten-minute talk, like my own talks. Okay, my job is over; the machine can answer the question "what is the future of Europe?" But then I made a test and asked the machine about a concept I'm working on in my next book, called the United States of Europe. You know what the machine said? "Command not understood." That shows you where machines are today. It's fantastic if we can use machines that have knowledge; we shouldn't be afraid of that. We should be afraid if all we do is take old knowledge and package it. That is not going to be a job for the future. Machines can do that, not yet, but in five or seven years. In ten years we're going to have a computer with the capacity of an IQ of 200,000. How are your children going to compete with a machine like that? In 2050 we'll have the first machine that has the capacity of all human brains combined, all human brains, roughly 10 billion people by then. So it's really about imagination; that's something we have to have. And it's quite clear we're living in a world of total disruption. I was in the media business, I used to be a musician and producer: the music business disrupted, the media business disrupted, advertising disrupted, telecom disrupted, and there was big news just this morning about Tata merging with Airtel. So disruption is everywhere, and basically what this means is that the future is no longer a time frame, it's a mindset. The future is not about tomorrow; it's here. The biggest challenges of the future are already here, we just haven't noticed. We haven't had time.
If you look too much at what is today, or in the rear-view mirror, you forget that the future is coming blindingly fast; I'll show you some examples of this. So the world, and of course particularly India, and this is true for women as well (I couldn't find an image with a woman), is essentially going into a digital tunnel. Everything around us is becoming connected: our money (especially in India, digital money), healthcare, media, government, everything in the cloud. India has the second largest number of STEM graduates, science, technology, engineering and math, and of course India is pretty well known for this. Is that the future? There's nothing wrong with having a STEM degree, that's a very good thing, but the future really holds this: machines will acquire the capability of understanding technology. In fact, machines will program themselves. If you are an HTML programmer, or a Java programmer, or if you make apps: in just a few years, apps will make themselves. They already do; it just takes too long, and it's not good enough yet. So what does that mean for our future? What do we really need to be successful? What is it that machines can't do? Or are we also just machines? There are people arguing exactly that, mostly in Silicon Valley, for their own good reasons. That's the question. So I think we have to understand that the future is no longer just an extension of the present. It's very important for every single business, every single company, country, even continent, to understand: what we did in the past to be successful was a great thing, and that's good, but it's very unlikely the same will work for the future. Take the German car industry. I live in Switzerland, but I work a lot with the German car companies, and they are facing this right now: the future is not selling cars, the future is selling mobility. And that's a big difference, a painful change.
If you're not prepared for the future to be different from the present, you will sometimes find yourself in a position where what you do is no longer needed. If you're lucky it takes longer, so you can prepare. By the way, I will share my slides later on my website, futuristgerd.com; that's G-E-R-D, like gastroesophageal reflux disease, same spelling: futuristgerd.com. Look at this chart: how many people feel threatened by the march of artificial intelligence and intelligent machines? India is number three; 61% of Indians responded that they feel worried about AI taking their jobs, and the Chinese, rightly so, as I'll explain in a second. You know what happened to horses, right? Horses used to be the prime mode of transportation in Europe, and when cars came along, they did away with the horses. There are still horses now, but nobody rides a horse to work, right? So is that going to happen to us? Are we humans the horses of the digital age, nice to have but not needed? I don't think that's going to happen to us. There are lots of arguments about this. I don't think we have to worry about it on that level; we have to worry about the jobs we used to do that were robot kinds of jobs, tasks: they will be taken by a machine. If you work like a robot, the machine will learn how to do it. And it's interesting to notice there are things, for example fixing the plumbing or the pool, that are very hard for a robot: routine, but hard to do. But when you make a hamburger at Burger King or McDonald's, a robot can do that, and they will; it's already happening. So the question is: in this future, will this be our work scenario, working right next to the robots? The Economist says (very hard to see on the screen, but) telemarketers, accountants, retail people, technical writers: up to 90% likelihood of automation. The Economist also says that the future is in non-routine work.
So basically non-routine cognitive and non-routine manual work. The biggest enemy of your future job, and the jobs of your children, is routine, because when computers get smart they can observe our routines, and we can teach them, and eventually they will no longer be stupid. Machines are pretty stupid still; they're getting smart, but in five years machines will be so intelligent, and I wouldn't say human by any stretch of the imagination, but smart machines will take our routines and jobs (tasks, sorry), but not our occupations, our chosen work and our purpose. We have to notice this, because the other thing is, when people think about the future: 70% of the new jobs in 10 years don't even exist today. How will your children know how to take these jobs? You have to teach them how to invent a job, not how to take a job. And that becomes important for all of us. I mean, if you're 65, maybe it doesn't matter, but you will see this, you will see that future where your work is changing rapidly. So that basically outlines where things are going. There are two things that matter in this world today: technology and the algorithms, and then what I call in my book the androrhythms, the human things. We now spend trillions of dollars on technology. Technology is the leading force of society: not oil, not banking, not the military, not religion; it's technology. In fact, you could say technology is the new religion. I'm not saying this is bad, it's just an analysis. But what really matters to us as humans is not technology. It's relationships, trust, emotions, understanding, things that basically come out of being human. This is the fight about the Aadhaar card, right? The privacy issue, which I've been tracking, is very interesting.
It's exactly this problem: on the one hand we love the algorithms, and the technology makes our lives easier, and on the other hand we're forgetting about the human things: how to keep secrets, how to keep things private, how to stay disconnected. And that's a balance we have to find, because clearly in this world (you know about Moore's law and Metcalfe's law) this is the curve that matters most today: the exponential curve. And it matters mostly because we are at the takeoff point of this curve. We're not at the beginning, we're not at the end. When I first started doing internet stuff, I used to be a musician and producer, and then I went onto the internet and started a company like Spotify, stupidly, in 1999. And after we lost 20 million dollars, we found out we were too early on the curve. It wasn't ready. But today we're at the takeoff point, the point where basically everything we have dreamt of in past science fiction is becoming science fact: thinking machines, autonomous cars, automatic language translation, graphical user interfaces that are expanding like crazy, cloud computing, genetic engineering. We're truly at the pivot point. That's a very exciting time to live in. But think about this for a second. If we're at four today, doubling every 18 months, probably more like 12 months for everything other than computer chips (Moore's law is kind of ending for chips), we'll be at eight, 16, 32; in roughly five years we'll be at 128, that's about 30 times as far as today. Go 30 doublings up the scale and you're at one billion, one billion times as far as today. That's roughly 40 years. That's the kids of your kids. They'll live in a world that is one billion times as technology-empowered as ours today. It's hard to imagine. The kids of my kids will not know how to drive a car. They won't know what a CD looks like. They may not know what a book looks like. They may be always connected to the internet.
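The doubling arithmetic sketched above can be checked in a few lines of Python. The starting value of four and the one-doubling-per-year pace are the talk's own illustrative assumptions, not measured figures:

```python
# Exponential doubling as sketched in the talk: start at 4 "units"
# of capability and double roughly once a year.
start = 4

# Five doublings: 4 -> 8 -> 16 -> 32 -> 64 -> 128 (32x the start,
# which the speaker rounds to "about 30 times as far").
after_five_years = start * 2 ** 5
print(after_five_years)  # 128

# Thirty doublings is roughly a billion-fold increase; at one to
# one-and-a-half years per doubling, that is a few decades out.
growth_factor = 2 ** 30
print(growth_factor)  # 1073741824, about 1.07 billion
```

This is the whole point of "we can't see exponentials": the first few doublings look unremarkable, and the last few dwarf everything before them.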
I mean, basically we're moving into a world that's hard to imagine; especially in India, when you think about the infrastructure issues, the exponential scale is mind-boggling. The speed of progress is absolutely insane. So: hyperconnectivity, the Internet of Things, smart everything, that's the key word. Smart cities, smart farming, smart government, maybe? There's a chance for that too, right? Smart education, quantum computing (that's the next big thing), intelligent machines, machine learning, and finally a very big point: the end of oil. If you invest in the oil economy today, you have to think again. As science says quite clearly, 20 years from now we can cover 100% of the world's energy needs from renewable energy, primarily solar. And that is the exponential scale. We've talked about this for what, 50 years, 100 years? We will have to pay the price for global warming, as we can see everywhere now, over the next 20 years. But the future is pretty bright on this. So part of that is what I call the smart-conversion machine: everything is becoming smart. You stick in an old business, make it smart, and out comes a new business. McKinsey says it's a 52 trillion dollar opportunity. Mind-boggling. But the thing is, we don't want to make everything smart and no longer be human. Because when everything is smart, it could be because we're not so smart; we can't keep up. What happens to us? What happens to us when medicine is smart? We can have remote diagnosis. Do we never see the doctor? Do we lose human relationships? This was just announced a couple of weeks ago. The bottom line is really this: business as usual is dead or dying. You're lucky if your business is like it was five or ten years ago. Congratulations, but it's quite unlikely it'll be the same in five or ten years. Very unlikely. Basically not a chance.
Unless you're totally protected, or you do something that doesn't change because of technology. I call this "gradually, then suddenly." I've worked in so many industries where we sit around for 10 years and then, look at banking, right? It didn't change: regulation, protection, habits. Then all of a sudden, in one go, the blockchain is coming in. It's either very slow, not much happening, or it takes off like a hockey stick. What you don't want to miss is the point where it changes, the pivot point. And for that it's quite clear: data is the new oil and AI is the new electricity. There's no way we can deal with the data we already have, and it's now doubling roughly every eight months on the internet. The amount of data is mind-boggling, with sensor networks and the Internet of Things. If we don't have our intelligent assistants, IA or AI, we can't deal with the data. We can't humanly deal with the data. So that is really going to change our world at a fast pace. We've thought about this for 15 years: going inside our brains and finding information. It's funny, Google used to say that it wants to organize the world's information and make it available, right? You know what Google is today, right? They search us, not the world's information. We are the world's information. I'm not saying this is bad; I think it's a fact. Look at this quote from Larry Page from October 2000, where he says artificial intelligence would be the ultimate version of Google. That's pretty astounding, right? In 2000. That's 17 years ago. "We can get incrementally closer to that." And today Google is saying: we want to build this, the global brain. In fact, Google has a brain project; well, there are already several brain projects. Today Sundar Pichai, the CEO of Google, says we will move from a mobile-first to an AI-first world.
It's the biggest technology company, the biggest platform, one of the most powerful companies; I think they're number two after Apple in the world now, right? So if you're working on mobile, great. Don't stop that. But think of the next iteration. Look at the world, what's happening here: the most powerful companies are the companies doing this. They're not the others, the banks. They're Apple, Google and Alphabet, Microsoft, Amazon. And here's a list of the leading companies in the world from Mary Meeker's slide deck from 2017. You probably know Mary Meeker's report; she does it once a year, 350 slides on the future of the internet. You should download it: Meeker, with two E's. And basically, the most powerful companies are Chinese or American. There are lots of Indians in those companies, obviously, as we know. But where is India? That needs to be fixed. And of course the same goes for Europe; we're having trouble holding a big market together right now, with Catalonia trying to become its own state, right? But what is going to happen with regulation? These companies are like the oil companies used to be; they're much more powerful. Many of them are my clients. They will have to find a way to create self-regulation, a way of balance, counterbalances. Google was fined 2.4 billion euros by the European Commission for abusing its market position. Is that justified or not? I'm not so sure; it's an interesting debate. But we will see unprecedented opportunities. I'm going to zoom in a little so you can see them better. There are basically 100 or so segments where the world is changing because of technology. Now, before we get all excited about making $100 trillion, which is a distinct possibility, we have to think about responsibility and wisdom. We should not build things that don't really help people. And if technology is going to be everywhere, we need to figure out: how do we spread it? How do we make it less unequal?
How do we give access to those who can't afford it? If every European city is going to be a smart city, that's fantastic, but what about all the small cities in India? Can they afford that? Should the German companies license their technology? Yes, they should, whatever the quality of the technology. One thing is certain: humanity will change more in the next 20 years than in the previous 300 years. Humanity, sorry, not technology; I'm starting to confuse the two myself. We're going into a future where we're always connected; technology is like breathing. It's hard to imagine here, because so many times the internet doesn't work so well when you're out in the country trying to get 4G or so, but that's clearly being fixed. It's a huge challenge. But when you see this with augmented reality, virtual reality, cloud computing, we have to wonder where this is going, because we are getting into new relationships. We can talk to our machines: Siri, Cortana. Robots will be absolutely everywhere; the price of a good robot has gone from half a million to roughly $20,000, like Baxter. Warehouses are being revolutionized. Cars are driving themselves. Robots are helping us with our daily lives. Virtual reality is letting us see the universe like Tom Cruise in Minority Report, just by going inside. And maybe we can end death. It's interesting: there are about 50 companies in Silicon Valley whose stated mission is to bring about the end of dying. Now, that is a tall order. Is that a good idea? I'm not sure. I'd like to live to 150, but 550? I'm not so sure. But that is a big question: what will happen when we do this? When these megashifts go into effect (I call them the megashifts in my book; there's a small website, megashifts.digital, where you can read more), they impact everything around us: digitization, cognification, robotization. Here are the top four. Data.
What I call datafication: everything is becoming data. Make no mistake about this: whoever owns or has access to the most data, with the largest machines and the biggest intelligence, will rule the place. That's what we see now; that is clearly the future. The second is machines that can think, that can do this. The third is the Internet of Things, and the last big one is human genome editing. These things have enormous benefits. We can solve most of our problems with them: global warming, energy, food, medicine. But they can also be used as weapons. They can be used for evil things, like all technologies, obviously. So we have to keep in mind that we need some sort of supervision. When we build the Internet of Things, connecting everything, we can save up to 40% of energy. We can create new business models; that's happening in all the ports in India, which are becoming smart ports. We're creating a new meta-intelligence, the intelligence on top of the intelligence. That is creating really, really powerful benefits. But we do have to wonder: what's going to happen with this? Who is in control? Who decides what the data does? Who is accountable? Are the companies that make the Internet of Things accountable? Or do they want to be like the gun makers? The gun makers say: we're not responsible for deaths; people kill people, not guns. That is the cheapest excuse you can possibly think of. You are responsible for what you invent. Facebook has invented the largest artificial intelligence machine that governs media. Are they responsible for what people do on Facebook? Absolutely. Are they fully responsible? Of course, people are also responsible; it's not black or white. But they should stop saying that they're not responsible, that they don't do media. It's clearly going to be a huge issue. Look at how much money is going into artificial intelligence. This is where everybody is looking to get rich.
I won't guarantee it, but it's quite likely that if you're a wise investor, you should look at this: artificial intelligence in the enterprise. Again, we're at 2017; we're at almost zero. I can see the dollar signs emerging from this: $30 trillion of revenues. Putin said that whoever leads in AI will lead the world. Now, that's not an unlikely quote from Putin. But China, of course, says the same thing: they want to lead in artificial intelligence. And China also says they want to lead in human genome editing, to be able to defeat cancer and diabetes. Now, that's quite a bit further away, much more than AI. But we don't want an arms race here. We have a nuclear arms race, and so far it's, I would say, not solved, but we manage it, right? There has not been an incident since Hiroshima and Nagasaki. We don't need an arms race for artificial intelligence. We need to agree on what it should do for us. We need to think about this. If you work in the tech business, you would hopefully agree: technology has no ethics. And how would it? If I tell an AI to make as many paper clips as it can, and I give it all the authority and access, it'll make paper clips out of us. That's just the job; it's zeros and ones. We cannot expect machines to understand our concerns, our values, our ethics, our goals. What is the goal of human life? A simple question. In a nutshell: happiness. The Buddhists say contentment, but let's stick with happiness. If that's the goal, how do we get to be happy? Clearly, ethics is the difference between what you have a right and the power to do, and what is the right thing to do. And here's the challenge for you: we're going to have unlimited power based on technology. We're 10 years away from that, and India is a major driver there; India is also putting billions of dollars into AI. So technology will enable us to do pretty much anything.
It's hard to believe, because much of it doesn't really work so well today. But in 10 years we'll be at the point where we have to decide what we want. And then there's this power shift, which I'm sure you're aware of. In 2014, the U.S. and China were the biggest economies. Look at this chart: by 2050, India has moved from spot 9 to spot 3 in global GDP. China, U.S.; I'm much more pessimistic on the U.S. I think it's going to move further down with the current "really smart" leadership we have there, destroying so much of what was built; staying on top is quite unlikely. But India and Indonesia, that's where I'd put my money. China, that's a difficult question. Now we're moving into a world comprised of what's called the E7, not the G7: the emerging countries, India, Russia, Brazil, Indonesia and the others, with technology as the driving force. Clearly that's not new to you, but we should think about what's happening with technology. We should not just use technology to get efficient: business process management, outsourcing, efficiency. That's a worthy goal, but efficiency is really a robotic idea; it optimizes things. We have to be clear on this: sustainable success will not just be about efficiency and productivity and optimization and GDP. We have to take a holistic view. What is the future of humanity when we use technology? What do we want it to be? Not what we can do; we can do anything. We're literally a decade away from doing anything we want. The key question will be this one: who will be mission control for humanity? You know who currently is mission control for humanity, right? You know where that is: the epicenter, Palo Alto, California. Silicon Valley is mission control because they are the fastest, the quickest, the most courageous, and of course many Indians are helping Silicon Valley do exactly that, right? So that's where mission control is located. The question is: do we need something like this?
Do we need a global organization that deals with what I call digital ethics, the future of what we should be doing and why? The privacy debate in India is just one piece of that question. If you look at the chart (impossible to see here, unfortunately, but I will share it later), the slide basically says that on the right-hand side you have benefits and on the left you have dangers, and the three most powerful things, the ones with the most benefits and the ones with the most dangers, are the same: AI, sensors and biotechnology. So here's a simple question for you: do you agree that we have an ethical imperative to turn this into a collective good, or is it just about turning this into a stock market success? That is a key question. If it's just about making more money, we can easily do that, right? But will it result in what we want? Because we're going, quickly, in my lifetime, towards what is called the singularity: the point in time where machines have the same capability as a human, and beyond. That's only roughly 10 years away. The question is no longer going to be whether we can do something, or what it costs, but why, and who controls it. We're going from feasibility and efficiency to reason. So if you have studied engineering, or your kids are going to engineering school: it will not be about whether we can do stuff. We can. We will have the money, we will have the technology. The question is why we are doing it, and what the purpose is. And I think this is a human decision: all great technology should result in human flourishing. That's the purpose of technology. Technology should not result in its own flourishing. This is what happened to Facebook, right?
Facebook has turned from something for friends into a giant engine, a pleasure trap, a manipulation engine, and we're still using it because we can't do anything about it. So we're heading into a future that will have to be based on a new economic paradigm, which I don't have time to go into, but basically: people, planet, profit. It's been talked about for a long time; technology now makes it possible. ETA: 20 years. A new logic for how we define what is good. The tech companies are already getting on board with this. There's the Partnership on AI, and Google's DeepMind just launched a new ethics division. They're thinking about the purpose of what they're doing. And this is driven by the realization that computing, all of a sudden, is essentially a cognitive process. Computers can now hear us, see us, talk to us. Currently that's not working entirely well yet, but you can see where this is going two years from now: we're going to be able to talk to machines as if they were friends. In fact, just the other day I saw a demo of a really great voice-control system. I could speak to it in mixed languages, a little German, a little English, and it worked just fine. And then it was able to speak back to me in my own voice. Now that was scary, I tell you. It made the same jokes, too. Mind-boggling. Speaking to machines like speaking to a friend. Here's Amazon Echo: "Alexa, dim the lights." "Alexa, play Happy Birthday." 14 million Americans have bought one of these, Amazon Echo with Alexa, or Google Home. And very soon we're going to sit down and say to this box: "Alexa, vote for me." Or: "I'm looking for a new partner, just find somebody." They'll basically do anything for us. So AI is really about this: computer systems that can learn to behave a little bit like a human. But here's the reality: we don't even know how humans learn and how we feel. We actually know very little about this. We know increasingly more.
But the bottom line is that what we see in AI today is not like this. You've seen the movie Ex Machina. Hollywood is really great at amplifying the fear of technology; that's their main business. As much as I love movies, we have to understand this is entertainment, based on fear and other primal reflexes. So let's forget about that for a second. This is our reality for AI, right? Self-driving cars. And they are not intelligent; these cars are not intelligent. They're intelligent assistants: huge, narrow artificial intelligences that do one thing. The same computer that wins at Go (DeepMind's AlphaGo against the world champion) could not drive a car, and vice versa. They do one thing. This thing that schedules meetings is called x.ai; you should try it. It's $100 a month and it aims to replace your assistant completely. Right now it's kind of working; you have to be very patient. But you can see where this is going. This is Gmail's automatic responses: reading your emails, suggesting replies. You've seen that, I think, right? Just last week: Google Lens. You snap a pic, go to your photos, tap the Google Lens icon right here, and bam. Mind-boggling, right? You can take a photo and Google will tell you what it is, who it is, where it is. Of course, it only works in the Google ecosystem; too bad for other users. And then there's the AI flying people in a drone in Dubai. It's mind-boggling, the thought that we can be flown around by a machine. I would recommend you don't try this; it's not safe yet, but it looks good on video. And this is my favorite: it's called DoNotPay. It's a bot that you can use in the UK and in New York to contest your parking ticket. You just give it all the details and it goes out and does the work for you. It files your appeal against the parking ticket. It files a claim against an airline for being late. It's called DoNotPay.
It does the whole thing automatically. It's like a lawyer in a box. Any lawyers in the room? It has defeated 250,000 parking tickets in London in four months. That's not work a lawyer should be doing, at least not in the future. A colleague of mine says algorithms outperform humans when it is not about understanding, not about deep language, emotions, and so on. You could basically say: when it's not about the 95% of things that matter. And this is why I'm not worried about the future. Let the algorithms outperform us on the monkey work. Why not? There will be a problem with call centers, if we call that monkey work; it's not entirely the same, obviously, but it's routine work. There will be quite a few problems, people losing their jobs because of this. But technology will also create new jobs. And we're looking at things like Google Maps, or Airbnb, where an algorithm defines how much you should ask per night; those are the sorts of jobs that don't require humans. So this is what I think about AI. First, we have three kinds of intelligence. We have social intelligence: if I meet the Prime Minister, I know he's not the same as a taxi driver; I understand social connections. We have emotional intelligence, some of us: we talk to our kids differently than we talk to our boss. It's hard to define what that is. Then we have intellectual intelligence: if I read a million books, or a million pages of philosophy, I acquire knowledge that goes beyond the print. The machine is on a whole different plane. It's a different kind of intelligence. We should be worried about these intelligences when they become so powerful that we cannot live without them, when we cannot get out of bed without connecting to the internet. Yes, that's a long time away. I know people are worried about AI taking over the world. I am too, but we're far away from that, because most of what we do has very little to do with data.
I like to say that anything that can be digitized or automated will be. This is the reality that you have to face in all big countries. I live in Switzerland; we have 7 million people. We don't have a lot of people in call centers or driving trucks or doing fast food. They will be replaced, but I'm sure Switzerland will take care of it. But look at the total call center population around the world: 20 million people work in the call center business. 19 million of those will probably not have a job in 10 years, because computers will not be stupid anymore in 10 years. We have to address this problem. It is an interim problem, because we will also generate new jobs from this, but in the meantime, of course, these jobs are going away.

So anything that cannot be digitized or automated will become much more valuable. That's what we have to keep in mind when we think about our education. That is the true human nature of work. We're not going to do work that can be automated. All of us do routine work, and I'm trying to get rid of all routines. Use software for the stupid stuff. It doesn't require a human to figure out how to best book my flights; the software can do that, and I don't lose anything human. I cannot drive myself, and I'm fine with not driving. But this, I don't want to lose.

I mean, keep in mind that humans are the opposite of technology. We are inefficient. We're ambiguous. We say yes, no, maybe, hard to say. We make stuff up. We invent things. We do weird things. We do the unexpected. We have to sleep; we get tired. We are the most inefficient machine you can imagine, as far as efficiency goes. We should not take that away just because technology is saying we should be more efficient, right? Then we'll turn into machines. As I like to say about myself: do I want to be a smart bot? I think smart is great, but human is better. To be both smart and human, that's what I'm striving for. I'll leave you to be the judge.

But there's a new era coming up for education, and it's also really crucial for India.
It's about the EQ, the emotional quotient, the human part, not just about the IQ. I would wager to say that understanding technology is a fact of life that we clearly cannot do without. But understanding humanity is the thing that makes us different. I mean, that's the thing we're going to do in the future, when the machines will do absolutely everything else. A doctor that's using an artificial intelligence to look at cancer treatments will be a super doctor. But will the computer, the AI, be responsible for the health of a patient? Will it be accountable? I don't think so. And it won't care; it's just zeros and ones. We need the doctor for this.

So that's where we're going in our future. The biggest challenge is not that machines will kill us or take over the world or even take our jobs, even though that is a challenge. The biggest challenge is that we become like the machines. That we stop caring. That we do away with inefficiency. That we put technology rights over civil rights. That we invent things we can't control.

So the important plan for the future of India, in my view, having done some studying of this exponential curve, is to put the human back inside: to think about what that means for the future, how technology is moving, and to decide whether we're going to be on team robot or team human. I use this phrase "team human" together with a few other writers who are a big part of this discussion. I think being on team human is a key differentiator for countries that are looking to be a big player in the future. Even when you make robots, you can be on team human. Team human means that you value what humans are and where you can go with that.

So I have to cut this a little short; I'll just go through this briefly. The Future of Life Institute, funded by Elon Musk, has come up with four basic principles. First, everything is about human values.
Second, everything is about shared benefit and prosperity. Third, we have to think of building ecosystems. And fourth, those that build things are responsible for them. That is very important for our future when it comes to AI, because AI is such a powerful tool that can be used for so many good things, but it has more power than nuclear energy. The issues of dealing with North Korea and nuclear power and nuclear bombs will pale in comparison to artificial intelligence and human genome editing. So we have to address these issues. We have to use technology for human purposes.

You know, what I loved about Steve Jobs, rest in peace, in his early speeches at Apple is how often he said the word magic. It's kind of an empty thing now; everything is "magic." When we use technology, we must focus on providing magic. We must reduce the manic, the obsession. And we should ban the toxic: toxic uses of technology, weapons that can kill without human supervision. Who sells them? The UK and the US. These are weapons that can autonomously decide that this person is a terrorist. I would say that's a toxic use of technology that we should ban. So we have to really keep an eye on this and figure out where to go. We have to invest as much in humanity, in human intelligence, as we invest in technology. That goes across the board: for our schools, for our companies, for our brands. And finally, as I say in the book, we have to embrace technology but not become it. I'm going to close here. You can download this elaborate deck on my website, and of course, let's see if you can get a book later. Thanks very much for listening.

Thank you. Thank you so much. Good. Embrace technology but not be it. Amazing. I'd like to call upon on stage Mr. Sandeep Bell, Senior Director of NASSCOM, to present a token of our appreciation to Gerd. Ladies and gentlemen, this is Gerd's first visit to Bangalore and I really hope that you enjoy Bangalore. Congratulations. Congratulations. Thank you so much.