That's me. It's easy to recognize the guy with the shirt, right? This is an aloha shirt, so it's very easy to spot me anywhere in the world. I always wear such things. So yes, today I'm going to talk about the fact that there is no artificial intelligence. I mean, I'm going to moderate a little bit what I'm saying right now: the artificial intelligence that doesn't exist is the one you have been hearing about for the past five, six, seven years in the media. So if you are doing any kind of AI, the one that you are doing is most likely still there. But the one that doesn't exist is the one that, unfortunately, we are hearing a lot about. And this is the one that made me write a book called There Is No Such Thing as Artificial Intelligence. First of all, in order to explain why there isn't the thing they are talking about, we need to go through the history of AI. It's going to be quick, but artificial intelligence really started in 1956. 1956 was when the actual name was used for the first time, at Dartmouth College. And some people decided that they could model a neuron with mathematics, with a simple mathematical function, actually. So they said: if I have a neuron that I can model, then I can have a neural network. And if I have a neural network, I have a brain. If I have a brain, I have an intelligence. So obviously, this is totally stupid. And it didn't work, and I'm going to show you that later. But it's an issue that we started by calling this thing artificial intelligence when it had nothing to do with intelligence. So this is my main issue. Anyway, I'm going to come back to that. But first, I'm going to talk about what it is that we really wanted to do. What is it, this artificial intelligence that we want to create? It's, after all, some tools that are going to augment us, to do something for us.
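That Dartmouth-era idea of a neuron as a simple mathematical function can be sketched in a few lines. This is an illustrative sketch, not anyone's historical code: a weighted sum plus a hard threshold (the classic perceptron), with the weights picked by hand, which already shows how far this is from a brain.

```python
def neuron(inputs, weights, bias):
    """A 1950s-style artificial 'neuron': weighted sum plus a hard threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Weights chosen by hand so the "neuron" computes a logical AND.
def and_gate(a, b):
    return neuron([a, b], [1.0, 1.0], bias=-1.5)
```

Chain enough of these together and you get a neural network; the 1956 bet was that this was the road to intelligence.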
So we could go very, very far back in history to find some tools that are interesting. But because I'm French, and because I'm very, very French, I'm going to start artificial intelligence in 1642. What was 1642? 1642 was the first computer, the first calculator, basically: the Pascaline, built by Pascal. And Pascal was a very smart guy, as you know. He was French, of course. So Pascal created this first thing that was very, very simple, actually, but very, very useful, because it did just simple additions and subtractions. But it's not that easy to do addition and subtraction. If I ask you right now what is 1792 plus 499... I'm pretty disappointed. It doesn't mean that you are not intelligent. You could think about that. The sure thing is that the machine there, in 1642, would have given the result in less than three seconds. You, all the smart guys around the room, nobody got me the result. I'm very, very disappointed. But anyway, the fact is that it might not mean that you are stupid. It might mean that this is not intelligence. We can think about that. Anyway, let's go back to my friends in 1956. Not only did they start with the wrong definition, let's say, but they also started by trying to solve something that is very, very, very difficult. The most difficult thing, certainly, that you could want to solve: natural language. So they said, we are going to build this computer that is going to understand natural language. And natural language is, again, certainly the most difficult thing. It is something unique to us. Descartes said that language is something unique to the human being. So this is something that is very, very complex. And guess what happened? They failed, of course. Because it's way too complicated. It's still very, very complicated today, but at the time it was impossible to do.
And what happened at the time, after four years, basically, is that we entered something we call the first winter of AI. What is a winter of AI? It was decided that we shouldn't fund this thing, because they had promised a lot of stuff and they didn't deliver. And this is exactly what could happen today if we continue to say bullshit about AI. Because today, a lot of people are saying a lot of bullshit. And this is exactly why I'm saying that we shouldn't call it AI: we are going to disappoint a lot of people if we continue to promise the stuff that we are promising. I don't want to enter a new winter of AI, because our AI, my AI, the one I have been working on for 30 years, exists. And it's very interesting. It's going to create some incredible systems. We shouldn't stop because some stupid people are saying bullshit. OK? That said, in the 70s, it continued. Some people continued to try to model something that could look like a brain. And that was what we were calling at the time expert systems. So it's basically logic systems: rules, rule-based systems. Expert systems went through the 60s, 70s, 80s, 90s. And we had a lot of very, very good programs, actually, that were working pretty well, right? Rule-based, pretty simple. And the best program ever was certainly something in 1997 called Deep Blue. Deep Blue was this IBM machine that actually defeated Kasparov at chess. Chess, very, very smart. You know chess, right? Chess is smart. Chess is about 10 to the power of 49 positions on the board. 10 to the power of 49 positions. It's a lot of rules, but simple rules when you think about it. And it's very easy, after all, to model those expert system things.
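To make "rule-based" concrete, here is a minimal sketch in the spirit of those expert systems. The rules are made up for the example, and a real system like Deep Blue was of course far more elaborate; the point is only that everything is explicit if-then logic.

```python
# A toy forward-chaining rule engine: facts are strings, and each rule
# is a (premises, conclusion) pair. We keep firing rules until nothing
# new can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
]
derived = forward_chain({"has_fur", "says_meow"}, rules)
```

Every conclusion the system reaches can be traced back to the rules a human wrote, which is why these systems were simple to build and, as we'll see, why calling them intelligent is a stretch.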
And also, at the time, in the mid-90s, it was the very first time we had computers that actually had enough power to calculate from a position to victory, basically, because we were able to model every single one of the 10 to the power of 49 positions on the board. So it wasn't that intelligent, actually. This is what I'm claiming. Anyway, it was pretty impressive, and it's something that we need to recognize. But I don't think there was any intelligence there. In the 90s, the thing that continued, you know, was people quietly still doing neural networks. I was one of them. It wasn't very popular to do neural networks in the 90s, but we were still trying, because we were thinking that maybe there might be something. And it's called, now, machine learning. But something happened in the mid-90s. Something very, very interesting. It's called the internet. And the internet is very interesting. Why? Because those neural networks, what are they? They are based on data. And the internet is certainly the biggest database in the world. And of course, when you think about deep learning now, this is something that needs even more data. And in 2007, thanks to a lot of things that were happening on the internet and a lot of data that was on the internet, we were able to prove some of the theories that we had had for the past, let's say, 20 years. Here, I'm going to lose about half of you, but I'm going to bring you back right after. I'm going to say that on the internet, there were a lot of cats. Images of cats, right? Personally, I hate cats, so I don't really care. But the thing is that it was very interesting that there were cats on the internet, because people were taking pictures of cats and saying, oh, this is my kitty, my kitty, my cat. And the good thing is that they built an incredibly large database of cats.
And it was a database of cats that was actually annotated. So we had a ground truth, right? Images of cats. Very good. Now I'm bringing you back. Those cats allowed us to verify those methods that we had in our minds, because we now had those large databases. And we succeeded in creating the very first cat recognition system with about 100,000 images of cats. Impressive. So with 100,000 images of cats, we were able to recognize cats in an image at about 98%. Pretty good. Is it that good? Do you have any idea how many we need, us humans? How many cats do we need to recognize cats forever? One. One. According to the psychologists, it's two. Not far off. Pretty good. But when you compare one or two to 100,000, there is a difference, right? And what I'm saying here is basically that those systems that need 100,000 images are totally different from our system. We need two. Again, when you think about it, we don't need two images of cats. We need two instances of cats, which is very, very different. Because an image of a cat is something that is just flat, something that comes from a picture. An instance of a cat is something that you see moving around; it's not just flat like that, right? It's something that has many more parameters that you, as a human being, can integrate in a lot of different dimensions, in a lot of different contexts, that we don't give to those machines when we do just image recognition. We might do it one day, but we don't do it today, because multimodality is very, very complex. But think about that. So, bottom line: those very smart AI systems are pretty stupid, because they need that many images, right? Another example of that is even more interesting, I think. And it happened a few years later. It's this beautiful machine here. You can admire my graphics, right? Thank you very much. So this machine here is DeepMind, OK?
DeepMind running AlphaGo in 2016. In 2016, AlphaGo beat the world champion at Go. Go. That's an intelligent game, right? You have to be very, very intelligent for Go. What is interesting in Go is that this time, it's not 10 to the power of 49 positions that you have on the board. It's a little bit more, OK? If you ask mathematicians, they are going to tell you that in Go you have between 10 to the power of 172 and 10 to the power of 542 positions. So what does that mean, what I just said? It means that we have no fucking idea, OK? We don't know. We just don't know. But let's say that it's 10 to the power of 200. 10 to the power of 200. There are some people who want to give you confidence, you know; they are saying, oh, 10 to the power of 200, it's only four times more than 10 to the power of 50. I'm happy you are laughing, because sometimes people don't. Because when they say that, I think, OK, so let's go back to the beginning, right? So anyway, 10 to the power of 200. Let's say that it's that. And basically, it's infinite. You see what it is, right? Do you know how many atoms there are in our world? 10 to the power of 136, OK? So 10 to the power of 200: let's say it's infinite. It's a lot of things. So I'm not going to go into the way the machine actually defeated this guy, the Korean world champion. The machine was good not because of the memory size or because of the calculation. It was good because we used some of those deep learning techniques: we gave, basically, 30,000 games of Go to the machine, plus the rules. And then it did defeat the champion. But we don't care about the techniques. What is interesting is to understand this guy here. This guy, it's 1,500 CPUs. CPUs, the chips in your computer, right? 1,500 CPUs, 300 GPUs. GPUs, same thing, chips, but at that time they were for the screen, for the displays. And they allow you to do mathematical calculations, right?
So a little bit stronger. And 30 TPUs, for the ones who are doing AI. TPU is basically a Tensor Processing Unit. This is a chip made by Google specifically to run TensorFlow, right? A technique for deep learning. OK, so basically, it's 2,000 computers. It's a small data center. It's 440 kilowatts. 440 kilowatts to play Go. 440 kilowatts to play Go. Interesting. But what is much more interesting is this guy: you, and most of you. Do you know how much power we have in this thing? Do you have an idea? You know? It depends, again, but it's between 20 and 25 watts. What does that mean? 20,000 times less. 20,000 times less. And this guy does something else than playing Go, hopefully. He does a lot of stuff. This thing only plays Go. That's an issue. It's a big issue. I'll come back to that. So anyway, in 2016 again, we are playing with all this data, a lot, a lot of data. And everything now, all of AI, is about data, right? And it's a big issue, because we can have data bias. What is that? You can choose the wrong data sometimes when you program. Now you program with data instead of programming rules, right? And if you choose the wrong data, you can have some very bad systems. And this is an issue when you are going to create one, and I have an example of one of them. This one. This is a chatbot created by Microsoft in 2016. You heard about it. It was Tay, T-A-Y, Tay, a chatbot that was going on Twitter to promote Microsoft products. Very good idea. In the history of chatbots, its history is pretty short: it became, almost instantaneously, the most racist and sexist chatbot in the world. It insulted basically everybody it was talking to, especially if they were Black or female. Why? Why did that happen? It happened because of this data bias thing that I was talking about. But there were actually two issues. First issue, pretty simple. It's only a classic programming issue, so it's a bug.
The bug was basically this: when you create a chatbot, you create something that is going to integrate well into its audience. So it has to talk like its audience. And it happens, I heard (because I'm not on Twitter), that on Twitter, after two or three interactions, you insult each other. So here, the adaptability factor for the conversation was set pretty high. Pretty easy fix: you lower the adaptability factor, and then it's going to talk like you want it to talk. OK, easy. The other issue is much more difficult. It's the data. I was talking about the cats earlier, that there are a lot of cats on the internet, easy to find and so on. It's not easy to find conversations, annotated conversations, on the internet. Very complicated. And there are actually none that you can really rely upon. But it happens that there is a database that is very well known to people who do speech recognition and natural language, called Switchboard. This is a database that has existed for years in the States. It's an annotated database of conversations of real people talking over the phone to call centers. So a lot, a lot of data. So of course, the Microsoft guys, they needed some conversations like those. They took a subset of it. And the subset they took, potentially, was conversations from the 1950s in the southern United States. I'm happy that you are laughing, too, because sometimes when I say that, especially in the States... yeah. Anyway, so you got it. The thing, by default, was racist. It started racist. It was trained on a racist database, basically. So be very, very, very, very careful. OK, I'm going to talk about this one here. This one, that's not supposed to look like that, but that's OK. So imagine here, you have a black box. OK, it's weird. Black box: artificial intelligence, you cannot explain it. It's a black box. OK, so I have one scoop. I have two scoops for you today, and this is the first one.
First one: there is no inexplicability of AI. We can explain AI. It's complicated, but you can explain AI. So it means that AI is made by us, for us, and we have the control. I will repeat that several times today. When you create a system, whoever creates it can basically explain everything mathematically, because it's only math, as I said before, right? It's only rules, logic. Or it's only statistics that are based on the data, and the algorithms that work on those data. And we can actually explain it step by step. The only issue is that it is very difficult to explain if you want to do it practically. Because at every single step, the computer we are talking about is doing something like millions of calculations a second. So practically, you cannot explain it, because you cannot follow that. You would need a few hundred years, or a thousand years, to explain the actual thing. But mathematically, you can explain it. I'm going to give you an example, actually, of another case that is easier to understand now, because these inexplicability things happen throughout history all the time. 1914. 1914, it's not that long ago. There was a very good mathematician. His name was Gaston Julia, a very good guy. Gaston Julia invented, or discovered, fractals. For the people here who know what a fractal is, you immediately have an image forming in your head. So you know what a fractal is. The others who don't know what a fractal is, go to Wikipedia. Gaston Julia wrote a very simple equation, actually, that shows that every iteration is going to give the same result as the previous iteration. That's basically what it is. So it's not very easy to explain if you are not a mathematician. It was mathematically explainable, but practically unexplainable. Forty years later, 1955, Julia is a professor at Polytechnique, not too far from here. And he has a student called Mandelbrot. Mandelbrot is a good student. He listens well. He goes to the States.
He goes to IBM. He's given a machine that can plot any kind of equation. And he plots the equation, and out appears this image that you had in your head when I talked about fractals the first time. And this image is obvious, because you see what it is. You see that when you take a magnifying glass and zoom in, at every single level of iteration, you see the same drawing that you had before. Very impressive, very interesting, very nice actually, very, very cute. But what was unexplainable before became, 40 years later, explainable very, very simply through a drawing. I'm not saying that in 40 years we'll be able to explain AI through a single simple drawing. But I'm saying that people are working on that, and I'm sure that at one point we'll be able to explain what we think today is unexplainable, OK? OK, so the next step is that I'm going to talk about one of my favorite subjects: the autonomous car. Autonomous driving. So this is the second scoop of the day, right? The first scoop of the day was that there is no inexplicability. Second scoop of the day: the autonomous car will never exist. Never. Not in 200 years, not in 2,000 years. Never. You spell never like never, OK? And I'm going to prove it. And of course, I'm going to explain what I mean by the autonomous car. I'm not talking about level four. You may know that there are five levels today: one, two, three, four, five. What I'm saying is that level five will never exist. Not with the methods that we are talking about, which are methods that are just mathematics, right? One, two, three, four, five. Five is full autonomy. The thing will handle everything, anything, at any point, with anybody, with nobody touching anything in the car. We just say go. This one will never exist.
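Coming back to the fractal story for a second: the kind of picture Mandelbrot got from IBM's plotter takes only a few lines to reproduce today. A minimal sketch: iterate z → z*z + c and mark the points c whose orbit stays bounded.

```python
# Iterate z -> z*z + c; points whose orbit never escapes radius 2
# belong to the set (the black region of the famous picture).
def in_set(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # provably escapes to infinity
            return False
    return True

# A small ASCII rendering: re in [-2, 1], im in [-1.2, 1.2].
rows = []
for j in range(11):
    im = 1.2 - j * 0.24
    rows.append("".join(
        "#" if in_set(complex(-2.0 + i * 0.1, im)) else "."
        for i in range(31)
    ))
picture = "\n".join(rows)
print(picture)
```

Narrow the window onto any part of the boundary and the same shapes reappear at every scale, which is exactly the self-similarity that turned an unexplainable equation into an obvious drawing.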
Level four, which is going to be very, very, very good, much better than us, is going to exist. And it's going to be great, because it's going to save a lot, a lot of lives. So we shouldn't lie about level five, because level four will be very, very good. But why is level five never going to exist? Two examples. The first one is maybe a little bit more difficult here; it's easier when I'm in Paris. The first one is called Place de l'Étoile at 6 PM. Place de l'Étoile at 6 PM, you know, you have a lot of cars everywhere. You have 10 or 12 avenues coming into this thing, you know? And it's deadlock. Yeah, it's a deadlock. So it's not what you learn when you go to learn how to drive. It's not that. It's sociology, you know? It's negotiation. It's everything you can think of except knowing how to drive. OK? So you're going to tell me: yeah, of course, but I could program the thing to handle that thanks to V2X. V2X is vehicular communication. Thanks to V2X, it's going to negotiate, and it's going to do all that. So yes, be my guest. Do it. You're going to create, in two years, a very nice car that is going to be able to go through the Place de l'Étoile in two hours. Congratulations. Now, take this nice car to Bangalore. I don't know if you ever saw how people drive in Bangalore. If you thought it was difficult at Place de l'Étoile, Bangalore is the Olympic Games. OK? It's another level. So the same car won't work there. And what I'm saying is that there will always be an exception. There will always be something that makes your car stop or not work correctly. And I actually have an example that is even better than this one. It's something I found from Waymo. Waymo is certainly the most advanced company today in autonomous driving, right? They've been doing it for 10 years. They've driven more than 11 million miles with their cars. 11 million miles. It's a lot of miles, right? And they recorded everything.
And recently, in March this year, the CEO of Waymo said that level five will never exist. 10 years. Good. It took him 10 years, but that's OK. At least he said it. But another thing he did: he released every single video of the 11 million miles on YouTube. So you can go watch those 11 million miles yourself. It's boring. A little bit boring. It's basically everything that is inside Palo Alto and Mountain View and places like that. And you see the cars going around, collecting data and collecting data. Very good. But of course, I'm going to watch it, because I love it. My son tells me that I'm a little bit stupid, but I still love it. I'm like that, looking for the gem, right? And I found one of the gems. I'm sure there are many, but I found one that is just incredible. So you see this car; there is a camera behind the windshield. And the car is driving through the roads of Mountain View or Palo Alto, I don't know where. Totally boring. Nothing happens. And in the middle of nowhere, for no reason, the car stops. Then, after two seconds, it goes forward two meters and stops. And then again, two meters, stop. And then we feel that the operator (because there is a guy behind the wheel) takes back control and drives away, because the guys behind him were getting a little bit nervous, most likely. What happened? So now you look at the video a little more closely, and you see that there are two guys walking on the sidewalk. Two guys walking on the sidewalk: not an issue. But then you look a little closer again. And one of the two guys on the sidewalk has, on his shoulder, a stop sign. So for this case, of course, today there is a rule in the thing that says: if there is a guy on the side with a stop sign on his shoulder, don't stop. So it's there. It's handled. But you, if you see someone on the sidewalk walking with a stop sign, you don't stop, right?
But the reality is that there is always a case somewhere that is not going to be in the database. Because all these things that we are talking about are based on rules or data. The rules, we build them; the data, it exists. And frankly, the data of a guy with a stop sign on his shoulder is difficult to imagine, or to capture somewhere, so it is not going to be in the database. Next, I usually do a little chapter on Siri, right? Because everybody is saying, you know, this guy is the grandfather of Siri or whatever. So, OK, I need to explain a little bit. And somewhere, you know, I take the blame if people think that computers can be smart, because the movie Her, which you most likely saw, is something that comes directly from Siri. And this is something that makes people believe that potentially you can fall in love with a speech assistant, which I hope you won't do, and that this thing can be anything for you, right? It's not true, OK? It is actually a stupid system, OK? And because we realized that it was a stupid system, we actually made it more stupid than it was, in order to make it more human. When it didn't succeed at doing something, it showed that it was human, or stupid. So what is it that I'm talking about here? Basically, we knew that the speech recognition was bad: it was only about 80% accurate. Siri, the very first version, was in 1997, OK, the very first patent. So it was a long time ago. It went to the public only in 2010, 2011, but it started in 1997. And in 1997, speech recognition was about 80% accurate. It's not a lot, right? 80% accurate.
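The arithmetic behind that 80% figure is worth spelling out:

```python
words_per_page = 300        # a typical book page, as in the talk
accuracy = 0.80             # speech recognition accuracy circa 1997

# The words the recognizer gets wrong are, in effect, missing.
missing = round(words_per_page * (1 - accuracy))
# 60 of the 300 words on every page come out wrong or missing
```

One word in five lost, on every page of the book: that is what "80% accurate" means in practice.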
Imagine a book where you have 300 words on a page, and 60 of them are missing. It's not easy to understand the page, right? So that's exactly what it was. So we came up with an idea. We said, we are not better than the others. The others were saying they were much better, that 80% is pretty good and so on. We said: 80% is shit. But what we are going to do is make people believe that it's not that shit, that it's actually pretty good. How are you going to do that? You theorize the thing by saying you are going to create artificial stupidity. So what is artificial stupidity? Artificial stupidity is based on a theory that I worked on for years, which I call the nightclub paradigm. This is the second time I usually lose people, OK? So, the nightclub paradigm. What is the nightclub paradigm? A nightclub: a lot of noise, you're a little bit drunk, OK? And then there is a guy talking to you, you know, interesting. There is something to do here, right? So we are talking, and it's interesting. The guy is talking, talking, talking. And you don't understand anything. I mean, about 80% of what he says. But you want to be socially nice and so on. And after a while, the guy really insists, so you need to do something. So what is the very first step of artificial stupidity? You nod along. Easy. OK. Second step. The guy continues, because he feels that you are engaged, right? So he continues to talk to you. Second step: after a while, you have your self-esteem, right? Somewhere. So you need to do something. And what you're going to do is talk. And you're not going to be able to talk about anything that he talked about, because you have no fucking idea what it was, right? So you are going to talk about something else. And you are going to tell him a joke, in order to move on to something else.
And the guy, you know, it doesn't matter; he's drunk as well. But at the end of the day, you have a conversation, or something that looks like a conversation. This is exactly what we did with Siri. We added those little jokes, those little things, that made Siri kind of engaging when she didn't understand anything. OK. So that was one of the things. But that said, this technology wasn't that stupid, in the sense that we actually saved lives with Siri. People won't believe it, right? But I'm going to tell you one story; there are many of them. One day, one guy was in the middle of the States, in the southern states somewhere, and he was working on his car. He was behind the car, it was hot, nobody around, right? He was working on the car, and the car falls on the guy. His leg is severed, right? Blood is pouring out. He's dead in half an hour. He can't move. The car is on him. Done. Except that it's good to have a brain sometimes. So he thinks. What does he think? He thinks: oh, in my back pocket, I have Siri. Hey, Siri, call 911. Done. OK. He actually told the story himself, so he's still alive. OK. So anyway, sometimes technology is good, and AI can save lives. OK. So AI: I'm talking about AI, AI, AI all the time. So I want to keep the letters AI. I don't want to keep the name artificial intelligence. OK? So I'm going to call it augmented intelligence. It's not the domain that I'm renaming; it's the fact that this AI I'm talking about is augmenting us. This is our AI. This is us. This is us controlling it. This is us deciding what to do with it. And it's a tool, and it's only a tool, that we fully control. Like every other tool, we can decide to do something bad with it. This is our choice. This is not the choice of the AI. This is our choice. Any tool. Let's say, you know, a hammer. A hammer is a good tool, right? It's a pretty old tool.
It's better than trying to put a nail into wood with your finger or with your hand, right? So a hammer is good. A hammer is good until you use it to hit a guy on the head. Not good. That's why you sometimes need regulation. So you are going to need regulation for every single technology. AI will need regulation as well, OK? To understand the power of the thing, and to understand what you cannot do with it. But this is you not doing something with it, OK? We are in charge. And to sum this up, I'm going to draw. If you liked this image, the next one is even better, OK? I like the teasing part, right? Beautiful. I'm going to draw intelligence. OK? So here, on this axis, we have the level of intelligence: zero, Trump; 100, genius. OK? Here we have the domains of intelligence: mathematics, you know, Go, chess, OK? Domains of intelligence. And here, this is us. We are pretty good. We are pretty good. We're OK. In good shape, right? We're not geniuses, but we are better than Trump. So we are pretty good, right? And what is interesting here is that not only are we pretty good, but we are pretty good at everything. If I look at every single pixel there, you know, we are good at everything. At everything, let's say, we have an opinion. We can say something; even if we don't know anything, we can say something, right? So we are good at everything. And what is even more interesting, if you take it mathematically, is that it's continuous and infinite. It means that today, right now, if you invent something, collectively, we are all going to be able to say something about it, because we can. And this is our brain. We invent stuff all the time, right, over the course of our history. So we are continuous and infinite. So let's look now at the AIs, because we say AIs, plural, OK?
The one at chess. So this is me at chess. I drew the thing, so I do whatever I want, right? So I'm pretty good, right? But where is the AI at chess? Of course, it's there. It does every single position on the thing. So it knows everything. It's a genius, right? Pretty good. Where is Go? Go is interesting, because it's more than a genius: there are so many positions possible, so many that humanity hasn't even seen yet, that this thing is superhuman, right? And that's fine, but it's only at Go. And it's 440 kilowatts. OK? Unbelievable. Driving: level four, you know, not level five. Level five, remember? It will never exist, OK? Level four, pretty good. This is us, you know: we are driving, but we are texting while driving, we are a little bit drunk, you know? So it's better than us. So let's build it, OK? We should definitely build it. This one is something else; I will talk about it a little later. And for anything, we are going to be able to build something that is better than us in every single domain. But what is it that I just drew here? I just drew, for every single domain, a very specialized AI, right? So for every one of them, potentially, you have to build something that is going to be 440 kilowatts, which is way too much. And this is what is called discrete, in mathematics, right? So instead of continuous, like it was for us, it's discrete. So you need an infinity of discrete points like that in order to cover the infinity. It's going to be a lot, right? In relation to that, if I drew some of those domains very close to each other, it's to say that some of the domains are going to be very, very close. And some people say: yeah, so we are going to be able to do some transfer learning from one domain to another. I say: yeah, very, very good. So we are going to be able to learn some of those things.
Unfortunately, between two domains that are very close, there is actually, again, the infinite in between. So there will be cases that are okay in one domain that you cannot translate into the other domain. So it's not going to work that well. But even more interesting is to look here. What is it here? Where are the AIs there? Where do you think they are? Nowhere. Why? There are no rules. There's no data. There's nothing yet. We didn't invent it yet. The day we invent it, we will create an AI that will be better than us at it. But we have to wait a little bit before that. Anyway, with the current methods, which are mathematics (logic and statistics), we are in full control of the AI. This is our game to lose. We decide. We can give the keys to the AI and say, you know, do whatever you want. But we will decide.

There used to be all this kind of talk about logic-based AI. I mean, is there a hope that it will be mixed with data-based AI and go much further than where we are?

So I think that, I mean, we started actually with the statistical one. We stopped. Then we went to the logic one, which was the expert systems, basically. Then we went back to something that seems to be very interesting, which is those deep learning, machine learning things based on data, right? And now there is a new school that says, okay, we need to start to combine those two things again, because we are going to get very good results. So there are a lot of people working on that right now, combining them again, I would say, because some people tried it in the past as well. But now the computing power is at such a level that we can really try to do it.
We are going to get some better results, I'm sure, because as soon as you combine things, usually the combination is better than each thing by itself, right? So I'm sure we'll get some better results. But I'm sure that we will never go higher than those things; I mean, we will never get to perfect on everything. If there is a way to create a new AI, an AI that will look like us one day, maybe, like the thing we have up there, first of all we need to be sure that we're not going to use that much data, okay? Because using big data is not the solution. With big data, as we saw, we go directly into the wall when we think about the energy, right? So we need to be very careful about that. So we need another paradigm, one that is not going to be math. And what is it? I don't know, but I feel that it's going to be something closer to what is up there. It's going to be closer to biology, maybe; some people are thinking about quantum physics, maybe something like that, something much closer to this thing here. So if you want to be optimistic: one day, maybe, we'll have something that looks like us, but it will have to use a totally different paradigm than mathematics.

Staying with old-style data-based AI, and looking at the right side of your diagram: economists are talking a lot, not thinking a lot, but talking a lot, about data barriers to entry. And there is this big debate between those people who say, you know, you don't need that much data anyway because of the law of large numbers, that if you have 20 or 100 data points, that's enough, and other people who say, no, no, because it's more complex, it might be exponential: the challenging problems require even more data. So what's your view on that? And how will you solve it if there actually is a data barrier to entry?
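The "law of large numbers" intuition in the question can be made concrete: the standard error of a statistical estimate shrinks only as 1/sqrt(n), so each tenfold increase in data buys roughly a threefold reduction in error. A minimal sketch (the proportion and sample sizes here are illustrative, not the speaker's figures):

```python
import math

def standard_error(p, n):
    """Standard error of an estimated proportion p from n samples."""
    return math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5: more data keeps helping, but less and less.
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: standard error ~ {standard_error(0.5, n):.4f}")
# n =       100: standard error ~ 0.0500
# n =    10,000: standard error ~ 0.0050
# n = 1,000,000: standard error ~ 0.0005
```

Each factor of 100 in data buys only a factor of 10 in precision, which is one way to see why the accuracy returns plateau while the storage and energy costs keep growing linearly with the data.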
So my view on that is: today, the easy way is to do big, big, and bigger data. This is the easy way, but we all know, because we need to know a little bit of stats, that after a while it plateaus anyway. So when I was talking about the 100,000 cat pictures that we need in order to recognize cats, if we take 1 million cats, it's going to be pretty much the same. It doesn't matter. It's going to plateau. But for some very complex problems with a lot of parameters, much more than a 2D image, it's easy to add a lot of data, and so to solve those problems you actually need a lot, a lot of data. And this is the current race that is happening right now, the race to data. My issue is what I just said before: that race to big data is going to go directly into the wall because of the energy required to store and compute, okay? That said, there are people who are now thinking about small data, or smaller data. This is where we are thinking: okay, we don't need that much data in order to get pretty much the same result as with the 100,000 cats, but with only 100 cats. Statistically, that is not something we understand very well yet, because by definition statistics need a certain number of samples to be relevant, right? So there are people trying to go back to this small, simple thing, to do another kind of statistics that is going to make it work. This is something people are working on right now. And I think this is a very interesting area where we should go, because if we don't go there, we are going to exhaust the planet with the energy that we need.

So this is a question about regulation, which is something you've talked about.
So again, with AI, in many cases there are a lot of complex parameters that come into the calculation that transforms inputs into outputs. Sometimes we observe the inputs, sometimes we don't; we always observe the outputs, and we have to regulate the outputs. So what do you think is the best way: is it to go into the algorithm and try to understand what it does? Or is it to try to understand the training process, or something like that?

Right, so there are many aspects to how we could do it. As I said before, we will need some algorithms that are going to help us explain, for the explainability, because explaining is halfway to adopting, right? So we are going to need that. And it won't be a simple image, you know, the fractal image; it will be something else. And there are some AIs that can actually help to explain, because they are going to be able to keep up with the speed and the size of those things, right? So there will be that: the thing that is going to inspect the algorithms, basically. But that might be too complex even for the regulator, because the regulator doesn't have the brain power to understand everything all the time, right? So sometimes we need to simply explain to the public, to the people who are going to be using those things, what it is that we are trying to do. In simple terms. And education, in this case, is what could be the best regulation. And vice versa: regulation is going to be the best education, okay? So what I really believe we should do at some point is set the expectations. For instance, today when we talk about autonomous cars, we, the public, expect that there will be pretty much zero accidents, which is a totally stupid expectation, right?
So, I mean, there will be accidents in any case, right? I believe, and we know it for sure, that there will be far fewer accidents with autonomous cars than with what we have today. But people don't expect that, because, as I said, it's a robot, and the robot has to be perfect. We have to explain that the robot cannot be perfect, and that even if we still have 10% of the accidents, you know, that's life.

Going back on Dan's question here: what is it you think that social scientists in general, and economists in particular, should focus on? What do you feel are the big gaps, from your perspective?

I love them.

I'm not asking for love; I'm asking for directions.

So what I believe, strongly, is in a multidisciplinary approach to any problem in science in general; I mean, any problem in general. What I like about economists and social scientists and whoever it is, is that they are going to bring another point of view than the one the scientists are going to bring, and another way of thinking, right? They are going to have this other perspective that helps to see the world in another way and to see the problem in another way. I love it when I work with people who are not like me. So this is the very first thing.

Why do we never talk about the impact of technology on the climate and environment? You talked about it a little bit.

Because the digital economy, by definition, you don't feel it. It's not tangible, right? You have the feeling that the services you have on your phone are just on this little thing, and that it is nothing. Actually, people who know (I mean, most of you here, but this is only the elite, right?) know that in the back end there are big servers doing a lot of things, performing whatever you are doing there. But we don't see it.
Because it seems all transparent, because by definition this is digital, right? This is something that we don't feel. And this is why we don't talk about it. Because it's easy as well, right? It's there, we use it. There is one example that I like to talk about all the time: selfies. I hate selfies. So don't ask me for selfies at the end of this thing, because the answer will be no, okay? Why? Take a single selfie, okay? Of course, you are not taking it for yourself, hopefully. You are going to share it. It's going to go up there to the servers, it's going to be shared with a lot of people, and you are going to brag about it, blah, blah, blah. A single selfie equals a 60-watt bulb burning for 24 hours (60 watts for 24 hours is 1.44 kilowatt-hours). And a billion selfies are taken every day. This is ridiculous. So think about it. Stop the selfies.

Yeah, I have one question. You said mistakes will happen, accidents will happen. So do you think accountability is a problem?

Yeah, so now we can talk about some very interesting things. Who is going to be accountable? Is it the car? Is it the guy who owns the car? Is it the carmaker? Is it the guy who just crossed the road? Of course, there we are going to have, somehow, somewhere, regulations. Accountability is going to go with regulation somehow. So we are going to have to say something about it. I'm not the regulator; I don't know. I have my feeling, you know; I feel that somehow life is random. So I don't want the car to say, I'm the strongest one and I'm going to protect the guys in my car, and I don't care about the rest of the world. I don't think that's the right thing to do. I think random is the best thing to do. But the regulator will have to regulate and say, you know, carmakers, this is the way it is.
But you, the public, know there will be accidents; be ready for that. Don't expect that the car is going to protect you 100% of the time.

Just, what is it you're doing at Samsung on AI? Is it just trying to build as many of these colored bars that are on the graph there, adding stuff, or is it something different?

So I never talk about Samsung, but I'm going to tell you what I'm doing at Samsung. I'm doing something that I feel, me personally, with my ethical view of life, is the good thing to do. And the good thing to do is not to create yet another 10,000 of those bars, because that goes directly into the wall. What I'm trying to do is actually to go in another direction of AI: not the biology one, because I don't understand anything about biology, but with the mathematics, as I said before. Trying to shrink the models, trying to make those models need less data, having those things run at the edge instead of on servers that are all centralized. So trying to decentralize. All those problems that I'm trying to solve are basically about trying to save the planet through AI, while keeping the promise of AI, because we have a lot of things to do that are going to be incredible. But we need to be careful while we're doing it. Creating yet another Go machine at 440 kilowatts would be a very, very stupid thing to do. And this is unfortunately what people just did with yet another game recently.

Yes. I was wondering, what is your view on the use of AI in things like art or music? Do you think AI could be better than humans at creating this kind of stuff? Would it be possible for AI to exhaust all the possibilities of music composition one day?

Yeah, random is going to be nice, huh? It's going to sound very good. Yeah. Okay, here is what I think. I think that AI doesn't create.
AI doesn't invent. AI is not creative. AI is not interesting if you ask it to do something that was never done before, okay? This is what I think. Now, people say, yeah, but look, this is unbelievable: it created a song like the Beatles. Great. Perfect. Why? Okay, why? Because we gave it all the Beatles songs, you know, from the '60s or whatever, and it created a song like the Beatles. Great. Good.
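One concrete instance of the model-shrinking direction mentioned in the Samsung answer above is weight quantization: storing learned parameters as 8-bit integers instead of 32-bit floats. This is a generic textbook sketch, not Samsung's actual approach; the weights here are random stand-ins for a trained model:

```python
import array
import random

# Hypothetical stand-in for trained model weights.
random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# Map the float range [-max, +max] onto signed 8-bit steps [-127, 127].
scale = max(abs(w) for w in weights) / 127
quantized = array.array('b', (round(w / scale) for w in weights))
restored = [q * scale for q in quantized]

full_bytes = len(weights) * 4                      # float32 storage
small_bytes = len(quantized) * quantized.itemsize  # int8 storage
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"{full_bytes} bytes -> {small_bytes} bytes, max error {max_err:.4f}")
```

The model takes a quarter of the memory, at the cost of a small bounded error per weight; that kind of trade-off is what makes it feasible to run models at the edge instead of on centralized servers.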