Welcome to the AI for Good Global Summit. I'm joined by a regular whom many of you will know, Stuart Russell, professor of computer science at Berkeley. So we're back after four years, in real, physical presence. What's changed?

Goodness me, I mean, an enormous amount has changed. Hemingway has a nice phrase about bankruptcy: that it happens gradually, and then suddenly. So, gradually, various processes of governance have started to shift into gear. Global bodies have formed. Countries have talked to each other. Organizations like the United Nations, the OECD, and the World Economic Forum have geared up to talk about AI. There have been lots of enunciated principles of how we should manage AI, and so on. And most recently, the European Union has started to write legislation, the European Union AI Act, which should become law later this year.

The "suddenly" part is generative AI. So I think, first of all, it was the text-to-image generators, like DALL-E 2 and Midjourney and so on, where you could describe something in words and it would generate an image. You could ask for photorealistic or cartoon or anime or whatever you wanted, and it would produce something, often with quite impressive results.

I feel there's a "but" here.

Well, the level of understanding that these images show... it does show that the systems are able to understand the visual world in terms of objects and so on, and then reassemble them into new configurations. Probably the most famous example is the astronaut riding a horse. There probably isn't an image of an astronaut riding a horse anywhere in the training data, but the system is able to understand what an astronaut is, what a horse is, and how to put them together to make an image. The astronaut's leg is on the proper side of the horse, obscuring the part of the horse that you wouldn't be able to see, and so on. So it's really remarkable. But we have noticed a few problems. People in the images often have six or seven fingers.
If you ask for a horse riding an astronaut, you get an astronaut riding a horse. So it's not perfectly understanding the text. But then what happened was the large language models. ChatGPT and all of its relatives came pretty soon after that and had a much bigger impact. It didn't feel like a toy. It didn't feel cute. It felt real. People interacting with it felt that there was some real intellect on the other side of the screen.

What are your thoughts?

It's hard to tell. Put it this way: if I pick up a piece of paper and read from it, "What kind of guidelines on AI are needed, and how can we ensure that they are adopted internationally?", that sounds very intelligent, but it's a piece of paper. Nobody thinks the piece of paper is intelligent. They think, oh, a human wrote that; the piece of paper is just displaying it. The large language models are trained on vast amounts of human text, and to some extent they're acting as a piece of paper: they're conveying what intelligent humans wrote. But clearly they do more than that. So they're somewhere on the spectrum from a piece of paper to a somewhat human-like intellect. We just don't know how far along they are. They seem to be able to do creative things. You can ask them to explain the European Union AI Act in the form of a Shakespeare sonnet, and they'll do a pretty good job of that.

So is all this good or bad? That's what I'm trying to get at.

Well, they're good when used responsibly. OpenAI, which produced ChatGPT, has a very long list of things you're not supposed to use it for. You're not supposed to have it give people legal advice or financial advice, because you need to be a qualified human to do those things. And they've also tried to stop it from doing things it's not supposed to do: using bad words, appearing to be racist, and so on. They've done a fairly good job of that, but it's still possible to have a conversation where it behaves badly.

So we've gone through seven years of these summits.
So where do you see things seven years from now?

I personally believe that the route the large language models have taken, which has been to make them bigger and bigger and train them on more and more data, is coming to an end. It's starting to hit a brick wall. We are literally running out of text in the universe to train these systems on. The amount of text, and we don't know exactly, but the estimates I've seen, is comparable to all the books the human race has ever written. And the publicly available text on the internet had to be supplemented with lots of other private archive sources to produce GPT-4, which is the next generation. And I think there are capabilities that these systems don't seem likely to exhibit. They don't seem to build a consistent internal model of the world. They don't seem to be able to reason in an extended way to build complex plans, and so on.

So, to put it very simply, as a soundbite: they're not going to replace humans. But they are going to replace humans in a lot of economic roles. An awful lot of jobs involve language in, language out. My job, your job, the job of almost everybody in this building, is language in, language out. There are still some weaknesses. They hallucinate. You can't trust what they say. They make stuff up with a straight face, so to speak. But everybody is working on ways to constrain their output to be consistent with a set of underlying facts, so that they can be used in commercial settings and correctly reflect the price of products and the policies for insurance and all the rest of it. So that's going to move forward, and it's going to work. It's already threatening employment right now. The screenwriters are on strike in the US, demanding that AI not be allowed to replace them in their work. So this is going to have a serious impact. But they are not human-like minds. The way I think of them is sort of human-sized.
They're sort of big, but shallow, and weirdly different from humans. But they're human-sized, and we just added billions of human-sized intellectual objects to the world in a few months.

Fascinating. Professor Stuart Russell, thanks very much. We'll talk about this again, I'm sure, about what has changed, very soon. Thanks again. And we'll have much more on the AI for Good Global Summit right here, so do stay tuned.