Great, so thanks for that introduction, that makes me feel much more important and influential than I really am, but I'll try and live up to it. I was asked to come and talk about AI and everything else and do it in 25 minutes and not talk too fast, and I'll probably manage two out of those three, but hopefully I'll say something interesting along the way. I thought a good place to start might be Bill Gates, which may be a familiar name to some of the older people in the room. Bill Gates said this spring that he's seen two things that blew his mind as demos of technology. One was the graphical user interface when he went to Xerox PARC in the late 70s, and the other was ChatGPT, and I think that's a pretty good way of summing up how excited the tech industry has become around generative AI. I probably don't need to tell that to most people in this room, but this AI thing is quite a big deal. This is the excitement on Hacker News, measured by how many stories engineers and geeks are posting. I think it's symbolic that AI is now getting much more traffic, much more excitement, than the iPhone, the Jesus phone, did back in 2008. It's a good indication that this might be a new cycle, a new shift in what people are interested in. If you are building a startup now and you're not working on generative AI, you'll get slightly pitying expressions in Silicon Valley, like there's probably something wrong with you. Almost all of the companies in the new Y Combinator batch are now working on generative AI. In fact, it's interesting that we've got this surge in investment and company creation in AI just as the rest of the venture industry goes through a slump as we come out of the pandemic and the bubble in investment that happened during the lockdowns. Of course, that extends outside startup land.
A little company called NVIDIA is having a lot of trouble just keeping up with demand as their business explodes, as everyone tries to buy GPUs to train these models. And the hyperscalers are spending a lot of money. It looks like this year Google, AWS and Microsoft will spend something over $100 billion building new data center capacity. So there's quite a lot of money, quite a lot of excitement going into this. And this also spreads outside the tech industry. This is a survey from McKinsey of corporate management: have you actually tried ChatGPT? We're getting to the point that you don't want to admit in public that you haven't tried generative AI. People will look at you and shame you and maybe fire you. And so everyone's investing, everyone's looking at this, everyone's excited. OpenAI report that they have 100 million weekly active users and something over a billion dollars of run-rate revenue. And people in the other big growth industry, tech regulation, are rushing to regulate this stuff as well. So we've got executive orders and laws in the EU, summits in the UK, new rules coming out of China. Everyone is very excited and thinks this is very important. Everyone in the tech industry has spent the last 12 months walking around holding onto the tops of their heads with both hands saying, oh my God, this is very exciting. But also saying, well, yes, but what is this? And I think the challenge we have, or one of many challenges, is to work out: how should we understand this? What level of a change is this? How do we conceptualize it? What level of generalization do we apply to it? There's a base case that says this is just another platform shift, just as big as the iPhone or cloud or SaaS or machine learning, which is already quite a big thing to say it's 'only' that. But that's the base case.
And then you have something alluded to in that quote from Bill Gates, which I'll come back to: that this is actually more like a once-every-30-or-40-year shift. This is more like a fundamental change in the nature of software and how generalized software can be. And of course, you have the still probably minority view that says this is going to take us all the way to AGI and from there to all sorts of science fiction scenarios, and something that might try and kill us. But it's still a thing that serious people talk seriously about as well. And I want to dig into each of these and think about not what the answers are, but what questions we would ask, how we would think about what we might think. So if we talk first about platform shifts: I think most people here will be familiar with the concept of a platform shift, that every 10 or 15 years the tech industry changes what it builds on. And so we went from mainframes to PCs to web to smartphones. Smartphones happened a bit over 15 years ago, and now we wonder what the next platform might be, and that's now probably going to be generative AI. That's what every new company gets built on. And new platforms tend to go through three stages. There's a stage at the beginning where people say, what is this useful for? If it's a consumer service, people say it's a toy, a stupid toy for rich people. If it's enterprise, people will say that won't scale. Then there's a period when it's exciting and you want to get a job in it, and then there's a period when it becomes boring. And that's where smartphones are now. The new iPhone is amazing, but it's not as exciting as the first iPhone. It's become kind of a boring, mature industry. It's gone through that cycle. I think you could say that the machine learning we got excited about 10 years ago is now most of the way through that cycle.
It's still a bit exciting, but it's well on the way to being a mature, well understood technology that just goes into the deployment phase. And generative machine learning is at the beginning of that cycle. We are wondering how it's useful, but we're also very excited about it. And I think to understand how that cycle progresses, it's really useful to remember how we were talking about machine learning in 2013, when this really started working. Some people in the room will know that machine learning is really something that starts in the 1980s. In fact, it was kind of a dumb idea from the 80s that had never worked, kind of like VR. And then in 2013, with ImageNet, this starts working, and I would show demos like this to big companies and they would say, well done, that's very clever, we're happy for you. But why is this useful? Why do we care? What would we do with this? You can recognize a dog. And the challenge was to work out the right level of abstraction. It wasn't just that this was image recognition, or even that it solved all sorts of other problems like natural language processing or translation or speech recognition. It was to see that this is pattern recognition, that that was the right level of abstraction to understand it. And we spent the last 10 years working out, well, what could you do with pattern recognition? What problems might turn into pattern recognition? How could you use that to solve some problem inside a big company or a big industry that maybe you hadn't realized could be solved with pattern recognition? And that's really what all machine learning companies are doing now. They're finding some place that you can apply pattern recognition, find patterns, in order to solve that problem. Now, as you go through this process, of course it gets mature and it starts becoming boring. And so we invoke this classic quote from the early 70s: AI is whatever hasn't been done yet.
AI is anything that doesn't work. Because once it's been done, people say, well, that's just software. Larry Tesler there was really talking about databases. In the 60s and 70s, databases were AI. Now you look at a database and you think, well, that's just software. That's not AI. We've got to the same point now with image recognition or translation or natural language processing. That's not AI anymore. That's just software. We're now going through that process again with generative machine learning. So again, I can do the cool demo. I can make cat pictures instead of recognizing cat pictures. And again, you show this to the big company and they say, OK, that seems cool, but what do we do with this? You can do the cool demo. You can make the song about Space Karen, Tesla cars, space rockets. It rhymes. It scans. It's a song. OK, why is this useful? Well, what happens if you're going on a holiday next week and you want to know what to do? Instead of opening 30 tabs in Chrome, you can just ask an LLM to give you the answer. Now this starts getting a little bit more interesting. What happens if you can actually use this to do things in fundamentally new ways and change how something might work? If we think about what we were doing back in 2013 with machine learning, you're solving an AI question: how is it that you would tell a computer to recognize a cat? And the answer is not that you actually tell it how to recognize a cat. The answer is that you give it a million pictures of cats. And what we're doing now with LLMs is saying, well, how would you tell a computer how to reason or how to understand, which is obviously a computer science problem going back to the 1960s or even the 1950s? And OpenAI's suggestion is: just give the thing the entire output of human intelligence and let it work it out. And so we get some sort of reasoning or some sort of understanding engine. Maybe.
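That "give it a million pictures of cats" point can be sketched in a few lines of code. This is a deliberately toy illustration, not anything from the talk: the two-number "features" and the nearest-centroid method are made up for the example, standing in for a real vision model. The point is only that no rule for "cat" is ever written down; the system learns the pattern from labeled examples.

```python
def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Classify by distance to the nearest learned centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Made-up two-number "features" (say, ear pointiness and snout length):
training_data = [
    ([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
    ([0.3, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.25]))  # prints "cat"
```

Nothing in the code says what a cat is; the label falls out of which examples the new input most resembles, which is the whole shift from writing rules to showing data.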
Well, maybe, because you have to be careful about what you're looking at. So imagine for some strange reason you wanted to know an awful lot about Benedict Evans. You could go to ChatGPT and ask, and it would tell you that I'm hugely important and influential, which is obviously true. And then it would say what university I went to and some of the jobs that I've had and so on. This is all roughly right. If I hit reload, however, you get a different university, a different degree, different jobs. Hit reload again. Oops, no, back to the same university. Wrong birth date. Apparently I worked for The Guardian, which I won't hold against it. And I worked for Atlas Ventures. I think I applied for a job at Atlas Ventures, but I definitely didn't work there. They turned me down. And so people look at things like this and say, it's lying, it's making things up, it's bullshitting. This is the overconfident student who answers the question when they don't know the answer. It's a bullshitting undergraduate. I'm not sure that's quite right. A better way to think about this is to say it's matching a pattern. After all, it never says I went to MIT or the Royal College of Art or the Sorbonne. It always has the right kind of degree and the right kind of job, but it's not answering the question. It's matching a pattern. Now, this can be very misleading. Most of us have probably played with this and discovered this experience. If you're not a Benedict Evans superfan, you would not know that this was wrong. It would look great. It would look very convincing. There's a fascinating study from Deloitte this summer. Look at the column on the left: people who have played with ChatGPT are more likely to think that it's always accurate. Most people in this room, I think, would know that no, it's not always accurate. That's not what it's doing. But it always looks accurate. It's always very persuasive.
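The "matching a pattern, not answering the question" behavior, including the hit-reload effect, can be caricatured in a few lines. This is purely an illustrative toy, nothing like how an LLM actually works internally: the lists of plausible universities and jobs are invented for the example.

```python
import random

# Plausible values of the right *kind*, never checked against reality.
universities = ["Cambridge", "Oxford", "LSE"]
jobs = ["analyst at a bank", "columnist at a newspaper", "partner at a VC firm"]

def plausible_bio(name, seed):
    """Fill a biography-shaped template with plausible but unverified facts."""
    rng = random.Random(seed)
    return (f"{name} studied at {rng.choice(universities)} "
            f"and worked as a {rng.choice(jobs)}.")

print(plausible_bio("Benedict Evans", seed=1))
print(plausible_bio("Benedict Evans", seed=2))  # "hit reload": re-sample the same template
```

Every output is biography-shaped and entirely convincing; none of it was ever looked up. That is the sense in which the output matches a pattern rather than answering the question.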
And so we're trying to puzzle out, well, what are the right ways to think about what this stuff is and how it works? It's not a database. It's not predictable. It doesn't actually understand in the sense that we mean 'understand'. And it tends to be bad at things that computers are good at, that we expect computers to be good at. But on the other hand, it's good at things that computers tend to be bad at. So how is it that we think about what that's for and how that's useful? Should we call this pattern creation? That's sort of an engineering description, but it doesn't seem like a sufficient answer. Is this synthesis, summary, explanation, going a bit further? Is this some sort of a reasoning engine? And whatever the answer to that is, again, you have this question: what is it that you can do with this? The way I always like to describe the last wave of machine learning was that it gave you infinite interns. So you've got a call center, and you want to listen to every call coming into the call center and tell me if the customer's angry or the service agent is rude. You want to look at every single X-ray. You want to check every single credit card transaction and see if it looks weird. You could get a 15-year-old to do that. In fact, you could probably get a dog to do that. But you don't have enough 15-year-olds or enough dogs. Well, AI let you automate that. That's basically what we've been doing for the last 15 years. And it seems like one useful way of thinking about what a lot of people are doing with LLMs at the moment is, again, that it gives you a lot of interns. So it's interesting to look at the two things that seem to have got a lot of traction this summer with ChatGPT. On the one hand, things that are very logical, where there's a very clear right and wrong answer and it's very easy to see when it's wrong.
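The "infinite interns" idea, checking every single call rather than a sample, is easy to sketch. A trivial keyword screen stands in here for a real sentiment model; the marker words and the transcripts are made up for the example.

```python
# Words that might signal an angry customer (an invented, illustrative list).
ANGRY_MARKERS = {"refund", "unacceptable", "furious", "cancel"}

def flag_angry(transcript: str) -> bool:
    """Return True if the transcript contains any angry-marker word."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return bool(words & ANGRY_MARKERS)

calls = [
    "Hi, I just wanted to update my address.",
    "This is unacceptable, I want a refund now!",
]
flagged = [c for c in calls if flag_angry(c)]
print(len(flagged))  # prints 1
```

The value isn't the cleverness of the check, which a 15-year-old (or a dog) could do; it's that the loop runs over every call, every X-ray, every transaction, with no staffing limit.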
And so that's getting it to suggest code, write SQL queries, do data mapping: automation and suggestion and recommendation in things where there's a lot of grunt work but it's easy to see what a right or wrong answer is. The other side is things where there is no right or wrong answer, just better and worse answers. So you want 500 ideas for a slogan. You want 150 ideas for survey questions. You could get an intern to write you 500 slogans and then you'd pick the 50 that were good. You can use ChatGPT to write you 500 survey questions and pick the 20 that are good. And that's still a lot more useful than doing it yourself. Again, what would you do if you had a million interns in your office and you could ask them to do stuff? Now, part of the pattern we're going through this year is that this is getting deployed at great speed almost everywhere. A friend of mine says basically every text box on the internet is gonna get an LLM. So here we have LinkedIn, solving the critical problem that LinkedIn has a shortage of generic self-promotion, and they're going to use ChatGPT to help people create more of that. Slightly more interesting, this is Amazon using generative machine learning to help you work on your product shots. Again, this is basically an intern. You give the intern the picture of the toaster and you say, give me 20 different backgrounds, give me 20 different kitchens, and load it up into Photoshop. So here we have automated interns. And this is following a pattern that you get with platform shifts. In fact, you probably could have asked ChatGPT how platform shifts work and you would get this kind of stuff. What always happens is that the incumbents look at every new thing and say, let's make this a feature. Let's include this in the product we already have.
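That generate-500-then-pick-the-good-ones workflow has a simple shape in code. In this sketch a random template generator stands in for the LLM and a crude length-based score stands in for human judgment; both are invented for the example, and the word lists are placeholders.

```python
import random

def generate_slogans(n, rng):
    """Stand-in for an LLM: churn out n cheap candidate slogans."""
    adjectives = ["fast", "simple", "bold", "smart"]
    nouns = ["banking", "coffee", "travel", "software"]
    return [f"{rng.choice(adjectives).title()} {rng.choice(nouns)} for everyone"
            for _ in range(n)]

def score(slogan):
    """Stand-in for human judgment: prefer slogans near 25 characters."""
    return -abs(len(slogan) - 25)

rng = random.Random(0)
candidates = generate_slogans(500, rng)       # the intern writes 500
best = sorted(candidates, key=score, reverse=True)[:20]  # you keep the 20 good ones
print(len(best))  # prints 20
```

Generation is cheap and selection is where the human taste goes, which is why "no right answer, only better and worse answers" tasks suit this pattern so well.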
Then a little bit further on, you have startups using the new thing to unbundle the incumbents, to peel something out of Salesforce or Oracle or IBM or Microsoft or Adobe. And then of course, the really interesting startups are the ones that actually change the nature of the markets, that come up with some way of doing something that's native to the new thing. There's a classic quote from Jim Barksdale from 25 years ago: there's two ways you can make money, you can bundle or you can unbundle. That's what happens with platform shifts. It's interesting to look at Adobe here, trying to do both. On the one hand, making the new thing a feature, integrating generative machine learning into Photoshop. On the other hand, unbundling, creating a new standalone product in Firefly as a way of trying to unbundle their own business. But if you think a little further forward about how this stuff tends to evolve, I always love this image. This is a Jack Lemmon movie from 1960 called The Apartment. Jack Lemmon, in the middle, is a clerk in an insurance company, and he's got a typewriter and an adding machine. Everybody in this building is a cell in a spreadsheet. The whole building is an Excel file. Once a week, someone on the top floor presses F9 and the whole building recalculates from top to bottom and generates new insurance prices. In 1965, they bought a mainframe and automated that, and all those jobs got automated into something else. And to begin with, what they did was just automate what they were doing with Jack Lemmon and all of the people in that building. But then over time, we changed how the company works. When you have a new tool, to begin with you make it fit what you're already doing. You force it to fit what you're already doing. And then over time, you change the way that you work in order to fit the new tool.
We don't run insurance companies or financial services companies like that today, just with a computer. We changed how they work. And you can generalize this to any platform shift. You can look at SQL and say, well, that made it easier to build an arbitrary data query. Yes, but what did that mean? Well, that got you SAP and just-in-time supply chains. The App Store wasn't just a slightly better way of putting Java games onto a phone. It enabled a whole wave of new applications. And so we have this question now, with this new platform shift towards generative AI. To begin with, what we've seen this year is the equivalent of making a Tetris clone for the iPhone in 2008. Over time, what kind of applications do we build that are native to this new thing? That's a very straightforward discussion of what a platform shift looks like, how this is likely to go. But you could also say, well, maybe this is more than a platform shift. So let's go back to this Bill Gates quote. He doesn't mention mobile here. He doesn't mention the web or SaaS or open source or cloud. It's the GUI and ChatGPT: a much longer time frame of innovation. And what he's suggesting is that you've got a step change in generalization. With a command line, you had to learn the commands and type them in, and so a very small number of people could do that, and it was a lot of work to do anything even fairly simple. What a GUI meant was that now you could just see the commands and click on them. You could see your choices and click on them. And so this was a huge change in who could use software and in how much software there could be, how many different problems could be solved with software. But someone still had to make the GUI that you're clicking on. Someone had to make that individual piece of software that you're using to solve your problem.
One piece of software at a time. Potentially, with an LLM, with generative AI, you could just go and tell the computer what you want, so you don't need someone to have created a separate tool for every one of the 500, 1,000 or 5,000 different tasks. You can have far less software doing far more. You can automate massively more tasks, have massively more problems being solved, but actually with fewer tools, maybe. Now, some people might recognize this reference. In the last couple of months, we've had the return of this word 'agent', which I remember from the 1990s as a vague, hand-wavy term for a general-purpose AI thing that would do anything. So instead of just formatting your document, it can write the whole application for you: the whole high school application or university application. The enterprise version of this: it can just optimize the entire thing, build the entire thing for you. And again, this is a great dream, a very utopian dream, that you have one piece of software that can do anything. But I come back to that Jim Barksdale quote again, bundling and unbundling. And I look at ChatGPT and I ask myself, is this a product or is this a technology demo? Can I give this to the accounts payable department of a regional cement manufacturer and say, hey, now you can solve all of your invoicing problems with this? Or do you need to build some other stuff around that? And what I think of when I look at this screenshot is this. You can give people Excel, but what do you do with it? How do you know what you would do when you open it? You have this blank canvas, this grid, and it could be anything; but if it could be anything, you don't really know what to ask. This, of course, is why Microsoft produced this screen. They're giving you suggestions and ideas and tooling around that.
But there's an old joke that every Unix function became a company, and I think every one of these templates also became a company. Those all get unbundled out of Excel and turned into standalone companies. And I think you could clearly suggest that all of those suggestions you see on that ChatGPT window, again, become startups, become new companies, get unbundled, get broken out into something else. Now, there's a sort of engineering question within this, which is: am I just describing thin wrappers? Are we going to have a small number of very large, very capital intensive, very expensive, very powerful models, and everything else is just a thin layer on top of that? Or will this look more like spreadsheets or databases, or indeed the last wave of machine learning, where today if you were to ask how many machine learning models are in the world, that would be like asking how many Excel files there are. It's a meaningless question: millions, and who cares? We don't really know how that will work, but out of that flows the question: if we do have lots and lots of different products, how deep are they, and how much are they dependent on a few giant, centralized computers? But either way, when I look at this idea that you'll have this single general-purpose layer of compute, I'm reminded of something a consultant said to me a long time ago: half of his jobs were moving people from Excel to a database, and the other half were moving people from a database to Excel. And I'm also reminded of NoCode, and this idea that everyone will have this universal substrate where anyone can build their applications. And so instead of Excel, it becomes Notion. You'll move from Notion to a database and back again.
It's interesting to think whether that's where ChatGPT goes as well: that this is a sort of general-purpose substrate, but then we have many, many specific vertical applications that unbundle it, that you will outgrow ChatGPT just as you outgrow Excel or outgrow Notion. Maybe. We don't know. The answer to most AI questions at the moment is: we don't know, let's find out in a year. And one of the big 'don't know' questions is whether there's something else that's going to happen inside these computers that's actually going to change things at a more fundamental level. And that, I think, is the AGI question. Some of you may know I basically write for a living, and I've never really written anything about AGI, and the reason I've never written anything about AGI is that I'm neither a computer scientist nor a theologian. The problem, of course, is that neither the computer scientists nor the theologians know the answers anyway. So given that nobody knows, I feel confident in expressing my own ignorance with equal confidence. And I think a good place to start in talking about AGI is to come back to Larry Tesler's quote again. What is it that we mean? What are we talking about when we say artificial intelligence? And there's this split: when we look at a calculator or a database, a calculator can do superhuman mathematics, a database has superhuman memory. Is this a superintelligence? No, it's just a machine. We don't look at a washing machine and say, oh my God, this thing's going to take over the world. It's just a machine. It doesn't know. On the other hand, people, dogs, octopuses, horses, even cats, have general intelligence, although obviously not the same kind. And so, at least theoretically, it should be possible to build something like that.
And when ChatGPT came out, and as people have played with it over the course of the year, some people have got very excited and thought, well, OK, maybe this is now putting us on a path to get there. As far as we know anything, that does seem to be part of what happened at OpenAI last week: the people who think we should worry a lot tried to shut down the people who thought we should worry slightly less, and lost. But of course, the problem here is that we don't have that today. This is my favorite example of the challenge. I showed text earlier, but it's actually easier to see in images. I asked Midjourney for a fantasy 1960s French sports car. This thing looks French. It looks 1960s. It's a fucking cool sports car. It's got kind of a Citroen vibe. Certainly the coloring looks French. It also has two steering wheels and no door. Now, this would still count as a major leap forward in product design and manufacturing quality if you were working at Tesla, but for the rest of the car industry, this probably isn't good enough. And why is that? Well, it doesn't know what steering wheels are. It doesn't know what cars are. It just knows that shapes like that tend to have a shape like that in roughly that place. And so the big philosophical debate within the science is: how do you solve that? Does that get solved? There are people who say, well, as the models get bigger, that will happen. This is the idea of emergent capability: make the model big enough and that will come. There are other people who say, well, if the model is good enough, it doesn't matter. Because if it always produces the right answer, it doesn't matter that it doesn't know why. And then of course, you've got a lot of other people who say, no, actually, there's some unknown other breakthrough or breakthroughs needed.
And you could spend a week of your life watching YouTube videos of machine learning scientists arguing about this, and all you'd really conclude is that they don't know. You could also say that maybe people don't have general intelligence either, and that this is what explains things like cognitive biases. And on one level, this might be true. On the other hand, it seems like the conversation best had after a bunch of drinks, or something stronger, in a bar at 2 o'clock in the morning. Hey, man, have you ever thought that maybe people aren't intelligent either? Well, yeah, maybe. But meanwhile, we're puzzled by what this is. What are we measuring? When we look at ChatGPT, are we seeing intelligence? Are we seeing information retrieval? Are we seeing something that's very good at looking like people? When you give an exam, the exam is testing whether people are good at something that people are generally bad at. People are bad at mathematics and information retrieval, and the exam is testing how good you are at something you're bad at. But to apply that to a computer seems slightly problematic, because now you're testing something else. And I think the real conceptual problem in talking about AI, which comes through in all AI conversations, is that we don't really know. We don't have a theoretical model of what our intelligence is, nor what AGI would be, nor what LLMs have, nor how far away they are. So we can't really draw a chart and tell you whether this is going to happen or not. All conversations about this really just come down to how you think about risk, how you think about something that you can't know. Meanwhile, there's everything else. As I look at how technology works, it often seems to me that we've got this sequence of ideas. The tech industry is obsessed with what will happen in 2030: AI.
Most actual software companies, most companies in this room, are basically deploying ideas from 2010: SaaS, cloud, automation, workflow software, collaboration. And then the rest of the economy is being overturned by ideas from 2000, ideas like maybe people will buy things on the internet. All of those, of course, come with problems. They come with regulatory questions, policy questions, concerns of every kind. But think about the broader impact of this: we went through the pandemic, we had a surge in e-commerce, we went back to the trend line, but the trend line is at 20 or 30%. Computers used to be relatively rare, and there used to be relatively few people online. Now five billion people have a smartphone, five billion people are connected to the internet. And I think the best way to express that is this chart of online dating: over half of all new relationships in the USA now start online. So this stuff has gone from being interesting and exciting, but not part of most people's lives, to a basic part of how every company gets built. And so now we have a Chinese company that's the world's largest fast fashion retailer, selling $30 billion of product last year. We have Ikea asking themselves, why are we still on the edge of town? Should we have a giant store in the middle of town? And we look at those questions and think, are these software questions? No. What happens to TV? I don't know, that's a TV question. What happens to the car industry? That's a car question. I think the best way to express what's going on now is, in fact, to think about the car industry. The first 50 years of the car industry was: what's a car, what's a car company? The second 50 years was: what happens when everyone has a car? And the same thing now happens with software. The last 50 years was: what's a computer? What's a computer company?
What's software? The next 50 years is what happens when everyone is online? What happens when everyone uses software as part of everything that they do? And with that, I will say thank you.