So, without further ado, Daniel, set the scene.

Thank you. Thank you, Patrick. I'm very happy to be here. Some people have asked me how I could have written a 400-page book on AI when the topic is so new, right? It's been around for a year or so. As a matter of fact, AI wasn't born with ChatGPT. The basic idea, in rudimentary form, has been around for at least two centuries, if we go back to Charles Babbage and Lady Lovelace, and even longer if we go back to Jacquard and Pascal, Hobbes and Leibniz. In its modern form, AI was launched by Alan Turing in 1950 and was baptized in 1956. So it's about 70 years old.

How can a little history help grasp the present situation? Well, first it dispels the notion that present-day AI systems came out of the blue, the outcome of a revelation that overnight changed the fate of mankind. Rather, they are the result of a long and winding process during which AI ran into limits and was forced to abandon its initial assumptions and undergo a radical rethinking. Instead of taking mental processes to be a kind of logic, it started seeing them as a kind of perception. Instead of trying to mimic the kind of thoughts that we entertain consciously, AI aimed for the sort of information that neurons can process, information to which we have no direct access. We don't know how we achieve such feats as recognizing our mother's face, for example, or how I can produce intelligible text that you seem to be able to understand. We just don't know how it happens. So instead of trying to directly turn the von Neumann architecture into a thinking machine, AI chose to educate what's known as neural nets.

Now, another reason for remembering the birth of AI is the name it chose for itself, which masked an ambiguity. Was it aiming for intelligence or something else? Do you put a hyphen between artificial and intelligence, or don't you? From day one there were two projects behind the project. One was to create a computational system that would think like humans, a thinking machine, and be intelligent in the sense in which humans are intelligent. The other project was to find ways to automatize the solution to as many kinds of problems as possible, from chess to translation, from pattern recognition to robot navigation, and what we've seen in the video. On the face of it, these are two different things, two distinct goals. Yet the basic insight was that thinking is really nothing more than the ability to solve problems. A fully intelligent system would be one that could solve all kinds of problems. And conversely, the more problems a system could solve, the closer it would come to full intelligence.

So AI set out to automatize one problem after the next. It turned out to be more difficult than expected. AI systems could not figure things out from scratch. They needed rich input, too rich to be spoon-fed by the human programmer. So AI turned to neural nets that could learn by themselves from examples. And after a slow start, neural nets met with smashing success. But here's the thing: the systems that AI built, whether old-style reasoners or new-wave perceivers, were special-purpose problem solvers, a population of specialized algorithms that did not add up to anything remotely resembling human intelligence. So it seemed that one of the two goals that AI had set for itself at the beginning had been dropped. The mainstream of the profession took that as a fact of life and still does.
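[To make concrete what "learning by itself from examples" means in the simplest possible case, here is a minimal sketch, illustrative only and not drawn from the talk: a single artificial neuron, in the spirit of the 1950s-era perceptron, that acquires the logical AND function from labeled examples rather than from hand-written rules. The task, learning rate, and number of passes are arbitrary choices for illustration.]

```python
# Minimal sketch (illustrative, not from the talk): one artificial neuron
# that learns from examples instead of being handed explicit rules.
# Toy task: learn the logical AND function from its labeled truth table.

examples = [  # (inputs, desired output) pairs
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

w = [0.0, 0.0]  # connection weights, initially uninformed
b = 0.0         # bias term
lr = 0.1        # learning rate

for epoch in range(20):                      # repeated passes over the examples
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output              # no rule is written by hand:
        w[0] += lr * error * x1              # the weights are simply nudged
        w[1] += lr * error * x2              # in whatever direction makes
        b += lr * error                      # the errors shrink

for (x1, x2), target in examples:
    output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print(f"AND({x1}, {x2}) -> {output} (expected {target})")
```

[The point of the sketch is the contrast Daniel draws: nothing in the code encodes what AND means; the behavior emerges from the examples, which is why such systems can absorb input far too rich to be spoon-fed by a programmer.]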
There are enough problems or tasks waiting to be automatized, or to be automatized more efficiently, to keep AI engineers busy. But the dream of a machine that would be genuinely intelligent, a true thinking machine, one that would possess what's known as artificial general intelligence, or AGI, or again human-level intelligence, is alive again. The advent of large language models and of generative AI has tipped the balance. The ability to compose, on command, coherent and often relevant text and images of any kind and on any topic is not only, as everyone was quick to realize, a true game changer in terms of applications in countless domains. It also makes it more plausible that AGI, artificial general intelligence, might be within reach in just a few years.

But now I get to be a little bit controversial. The idea that AGI is around the corner is based on two assumptions that are implausible. The first assumption is that the current victorious trend is bound to continue until the entire repertory of kinds of problems which the human mind can solve has been conquered by AI. The second assumption is that once that happens, human-level intelligence will have been reached.

As for the first, and least implausible, assumption, there are two grounds for caution. First, the current spectacular systems are far from perfect and far from fully understood. They are too fragile a basis for predicting future success. The second problem is that even if the present successes do herald further progress, which I grant, they don't support the idea that problems of all kinds are within reach. In fact, it's pretty clear that those which are within reach obey some severe constraints.

As for the promise that human-level intelligence is within reach, that's the second assumption which I think is implausible, I claim that it is in fact completely idle. I can only offer two arguments in the time remaining. The first is that the most visible scientific leaders of AI today all agree on the need for some new insight, in the absence of which AI will plateau. AI today may in fact be on the eve of a turning point similar to the neural net revolution, but it doesn't know yet where to turn. And the second reason I could advance is the observation that human intelligence, as Patrick was actually saying in his introduction, is only very partly a matter of problem solving. And I can't see how AI, as presently conceived, can do anything but solve problems.

These two assumptions are not only implausible, they're also potentially harmful. They send the profession on a wild goose chase, that of artificial, fully autonomous thinkers, instead of sticking to what I take to be AI's major calling, which is to provide humankind with powerful, trustworthy auxiliaries that can help us overcome some of the present technical, scientific, social and political challenges, as well as facilitate daily tasks for which help is really needed. And these assumptions have also facilitated a major falsification: passing off mechanical systems as genuine, silicon-based human beings. The irony is that some people worry about the so-called existential risk posed by human-level and, in short order, superhuman-level intelligence. As I see it, the worry is misplaced. What does worry me, though, is the combination of the unfounded belief that AGI is around the corner with a misplaced priority given to the goal of having AI implanted in as many contexts as possible for the sake of making use of such a wonderful tool, regardless of the broader consequences.
In my view, the central challenge today is to turn AI into a regular engineering discipline, one which produces, in a well-understood fashion, trustworthy artifacts with built-in guardrails against improper use. Thank you.

Thank you, Daniel, for trying to summarize. So, what I understand, and I think we agree on the panel, is that today what we see is specific AI. We don't see the path to general AI, which remains a possibility, but not today. So why do we talk so much about artificial intelligence? Because, as you described, there is this breakthrough: before, the input to artificial intelligence could be simple or complex, but the output was always simple. And in fact, with ChatGPT we have complex output. What complex input and output means is that you can take text, images, videos, sounds, and you can produce the same, which we couldn't do before. So that's the breakthrough. It has impact, not only when you play with your kids, but in the enterprise, and, as we heard on the panel with Virginie Robert yesterday, it can interfere in democratic processes. So it requires policies to accompany this development.