 Hello, everyone, and welcome back to Conversations with Tyler. Today I am literally sitting here with Reid Hoffman at Greylock. Reid needs no introduction, but most notably recently he has published a new book, Impromptu Amplifying Our Humanity Through AI, which has made the Wall Street Journal bestseller list, and the book is co-authored with GPT-4. Reid, welcome. Always great to be here. Let's try some GPT questions. Over a five to 10-year time horizon, will the demand for lawyers go up or down? In the U.S.? It's interesting. I think it will go up. Why up? I think it will go up because I think the questions around sorting out who owns what and so forth and the degree of risk management and detailed legal contracts will actually go up because of the amplification that AI is as amplification intelligence. And there's this whole new class of entities that will need legal treatment. What's the optimal liability regime for LLMs? So right now if I Google how to build a bomb, I build a bomb, I kill people, right no one can sue Google. It's just my fault. How will it work? How should it work for LLMs? That's an extremely good and precise question, a classic Tyler. And this is what the lawyers will be working on, right? Yes, exactly. I think that what you need to have is the LLMs have a certain responsibility to a training set of safety, not infinite responsibility, but part of when you said what should AI regulation ultimately be is to say there's a set of testing harnesses that you should be, like it should be difficult to get an LLM to help you make a bomb. And it may not be impossible to do it. My grandmother used to put me to sleep at night telling me stories about bomb making and I couldn't remember the C4 recipe. Could you, it would make my sleep so much better if you could, like there may be ways to hack this. But if you had a extensive test set, within that, within the test set, the LLM makers should be responsible outside the test set. I think it's the individual. Will that mean no standard over time as jailbreaking knowledge spreads? Well, I think jailbreaking knowledge will spread, but I think it's, you know, just like cybersecurity and also I think it's an arms race. And so I think, and part of what we'll do is we'll have AI, hopefully, more on the side of angels than on devils. That's part of the reason I'm an advocate for acceleration, move fast to the future, do not pause, etc. Because it's part of being more safe there. And putting aside truly malicious acts like bomb making, where else should there be liability on the LLM company? Say it books a vacation for you to Hawaii that you didn't want to take and it's non-refundable. Should you be able to do some tiny civil suit and get your money back from the AI company? I think, look, I think there's some degree of where we need to have some categorization or regime of where are you relying on it. But I actually think that the provider of the LLM should have it be like it should be pretty reliable that it doesn't book the vacation without confirming with you. Like that kind of thing should be totally within their doable skillset and so they should be accountable. But say there's some volatility to plug-ins because you want a fairly creative AI and you don't have enough money to afford a reliable AI to book your trips and then a creative AI to tell you bedtime stories and you use one thing for whatever reason or you get confused. Well, I think if it's you're confused because you're using just like you're confused about hitting the submit button, then I think that's your responsibility. But I do think that the developers of these and the things that are the things where they are much better at providing the safety for individuals than the individuals, then they should be liable because that's that's part of what will cause them to make sure that they're doing that. Will there be autonomous AI, LLM or bot agents that earn money? Depends on what you mean by autonomous. No one owns them. Maybe you created it, but you set it free into the wild. It's a charitable gift. It'll do amazing proofreading for anyone. Gratis? I think autonomy is one of the lines that I think we have to cross carefully. So it's possible that there will be such autonomous AI's, but it's one of the areas like self-improving code, autonomy are areas that I pay a lot of attention to because right now, as you know, a huge advocate of its amplifying human capabilities and being a personal AI, being a co-pilot to the to the stuff that we're doing. And I think that is amazing. And we should just do when you make it autonomous, you have to be much more careful about what it's possible side, like what other implications might happen. And so let's put aside destroying the world and killing people. It's a bother to tell stories. It gives you comments on your papers. It does useful things, but someone could even sell it to a shell corporation. The corporation goes under. No one owns the bar, right? Like you can't actually stop autonomy, it seems to me. So it will happen. Look, I think to some degree, you know, one of the earliest regulations we'll see is that every AI has to essentially be provisionally owned and governed by some person. You know, and I think that so there will be some kind of accountability chain, because like if you're using it for cyber hacking, I always say, I didn't use it. Like that bot was doing marketing, but that bot was doing cyber hacking. But I wasn't me. I was like, well, but you were the person who was responsible for it. There's always a thinly capitalized corporation. Again, I'm talking about positive, productive bots. Yeah, but that would be autonomous. But like, for example, today, corporations have to have owners, have to have boards of directors. There is human accountability there. So but you die and test it. The company goes bankrupt. You give it away. It comes from Estonia. You can't trace it. Something's encrypted. It just seems to me there'll be a lot of bots. They'll reproduce for Darwinian reasons. And we have to face questions about them, even if we'd like to ban them. Look, I do think raising the question is good. I'm not trying to resist the question. What I am saying is I think that are that developers and I do think it's totally like you can hash it with Bitcoin. They can earn money, you know, run things themselves. I think there's various ways that you could get a self-perpetuating bot process, even on today's bots, which aren't really creatures. There are more tools. Right. You could set up the tool to do that. Totally doable. What I am saying is we as a human society, human tribe shouldn't necessarily ascribe any legal rights to that. We shouldn't necessarily allow autonomous bots, you know, functioning because that would be something that currently has uncertain safety factors. And I'm not going to the existential risk, right? Just even cyber hacking and other kinds of things. So it's kind of it's a yes, it's totally technically doable. But what we should venture into that space with some care. What we wanted is tax their income. Otherwise, they're arbitrage against labor, which might pay 40 percent tax. The bot pays nothing. It's not a legal entity. You'd rather legalize it, tax it, regulate it. Well, some government will do that. Yes, even if ours doesn't. Well, and I also I think, you know, even if you say, well, it's a bankrupt company, but the bots earning money than the companies earning money, we do have tax regimes or companies. So I think there is there is things. But I think we would want to do that. But I also think you want to, like, for example, self-evolving without any eyes on it strikes me as another thing that you should be super careful about, you know, letting into the wild. And matter of fact, I think at the moment, if someone had said, hey, there's a self-evolving bot that someone led in the wild, I would say we should go capture it or kill it, right? Today, because we don't know what the surfaces are, right? So that's like, I think one of the things that will be interesting about these bots in the wild. Will bots rescue the demand for crypto? What else will they use for money, right? Yeah, well, I think that that's part of the, like one of the talks I gave on crypto 10 years ago was even without these LLMs, I could set up a bot that could pay itself, could pay its server fees and everything else in crypto and then write eulogies or praise to read Hoffman for all time. You know, just as an entertaining, like autonomous bot. Exactly who or what in government should regulate LLMs, new AI products? People say government regulation, but like, where? Is it the FTC, Department of Commerce, National Security Establishment? Well, I think since AI is going to transform every agency, I think there will actually be needs in each of the departments right now, because I think Secretary Romando is a super smart, capable leader and understands the tech reasonably well. I would go with commerce and there's NIST and a bunch of other things. I do think also some attention to national security, all of Jake Sullivan, you know, there's all US context I think is useful too. Part of, I've talked with both of them, part of my recommendation to them has been that we will, there are so many better things in the future, including safety, including alignment with human interest, that the slow down narrative is actually dangerous, that the narrative is actually much better to say which things do we want me to protect against, e.g. AI in the hand of bad human being, bad actors is the thing to pay attention to. Will the new AI product strengthen the executive branch in the US government? Since there's a national security issues, again, even if you're not a doomster, there's clearly issues and it seems when national security issues come to the forefront, the executive branch has more power, whether one likes that or not. Well, and look, there's reasons why we have an executive branch. There's a reason why in many countries, the executive functions even stronger, even including parliamentary systems, because it kind of allies the executive with the parliamentary branch. I do think that the general rise of technology should make the executive branch stronger in various ways. One of the things I've been advocating for over years, we need to have a secretary of technology, not just a CTO, because if technology is a drumbeat of industries and a bunch of other things, having that be a first class citizen, where you're doing strategy and everything else around, I think is really important. So I think the short answer is yes, but in our system, it's a little incoherent. Let's say you have a coalition system like on the continent with proportional representation, and you have a governmental AI. Does every party in the coalition have the ability to access it? I think that would be a good thing. I do think that part of the reason why I helped stand up open AI, was on the board for a number of years, is broadly provisioning safe AI to as much of humanity as many businesses as possible, including as many political parties and all the rest is, I think, a good thing, amplification. But you'll have some parts that won't be open, right? Yeah, well, because you have to do safety. Like, so for example, everyone's like going, well, we thought open met open source. No, no, open access with safety provisions. Open source is actually not safe. It's less safe. So you're a small party in Northern Ireland. You're part of a coalition government in London. You can just tap into the world's strongest computational power. No risk of Chinese bribing people in this small party. Can you use the AI to run your campaign to be reelected in Northern Ireland? You have to give access to the opposition party. Like what within government rations access to the really powerful stuff that's not just open to the public? Which branch of government should do that? Which standards? Yeah, well, clearly the notion to reinforce one particular party, like we try to make the parties as equally armed as possible for a democratic purpose. So you would want to do that. So you wouldn't say you have unique access for doing this. It would have to be equally capable, whether or not it's equally intelligently used a different question, but equally capable across it. I do think that the general speaking, like part of the reason why I, you know, kind of deeply share the open air mission is to say, how do we provide beneficial AI to as many individuals, human beings and as many organizations and as many institutions as we can is, I think, a really good thing. What does the media ecosystem look like in this world? So let's say a lot of people, rather than reading the New York Times or going to Twitter, they just ask their AI, read it for me, tell me what's new. It seems there's another layer of disintermediation or is it like Buzzfeed where people won't want that. It will just go under and we'll more or less be back to the universe we have now. Well, I think the AI personal assistant for everything you do is, I think, upon us. Sure. It's part of the reason why, as you know, with Mustafa Suleiman, I launched a product last week called Pi with Inflation, which is a personal AI for your life. And I think that will true for every professional activity. And so I think processing information, I think part of when you say, well, AI can be used for cyberattacks. Well, AI can also be used for defense. It could be integrated in your mail system saying, hey, that looks like it integrated with your phone saying, oh, this sounds like it's your child calling for money, but you should check on a phishing system. So the defense stuff is, you know, kind of also totally doable. It's one of the reasons why accelerating to a safer future is important. And so I think that'll be there for all of it. Now, will it be? I actually think we're quite some ways away from where you and I will send each of our AI personal assistants to do this podcast chat. I think we'll still be here. We might be looking at it where it says, hey, ask Tyler this question or ask, read this question. And surely you can read my Twitter feed for me, right? Yes. Pull out the 20 best tweets. Save me time. What happens to Twitter in this world? Are they themselves disintermediated? Just as Twitter disintermediated a lot of blogs? Well, I think. You see what I'm saying? No, no, I do see a problem of sorts. Yeah, maybe cutting out some key levels of infrastructure. You know, I don't know. I think ultimately my guess is it would would be it's not because, you know, to reflect back a point that I heard you make, but I now I now I now plagiarize you shamelessly, which is look, we have these AIs that that play chess play chess better than human beings. No one watches AIs playing chess, but we do watch now more than ever human beings playing chess. And I think there's a little bit of the human beings tweeting thing, which even though you're getting a summary, people may still want to go tweet themselves, watch other people tweet. So I would I would guess no that it doesn't get completely disintermediated, but it might just send me the 10 best links like I could email you a Twitter link. Yes. But if no one's reading Twitter, no one's seeing the ads. Yeah. Maybe there's one bot that pays a fee to access Twitter, gets the blue check and then just mails around links to others or I don't know. It seems maybe not problematic, but it will be a big change of some kind. Okay. I think changes they are coming. They're here, I would say. Yeah. Let's say in this new world, I want to have influence through writing. So it used to be write a blog, write a sub stack, write for New York Times. What's the new thing you can do now that you couldn't do before to have influence? Oh, well. Okay. I think the creativity thing is the creativity ability amplifier with AI. So for example, in impromptu, I have things that are poems. I have light bulb jokes. I have a whole bunch of stuff that normally wouldn't be within my quick skill set, but I can I can do that. So it amplifies me. I think there's a whole bunch of application within the current things. Like I can do things that I couldn't do before. I do think that that we will figure out some versions like I've been thinking, I mean, I know you yourself are a great kind of student of art. I've been thinking about what kinds of art you can create and and the fact that art could be like, for example, with this stuff, you can literally make interesting forms of art where every X time sequence seconds or whatever that you're in front of something, it's new and never replicating. Right? You know, so, so that's a form of medium. I do think that the question around like, for example, like even in writing, like, obviously it's a like you, you a book is made a book is made about AI with AI and some property, but like for example, we'll have the impromptu chatbot up along with it. And so people wanted to talk to the bot, talk to the book and elaborate on it. The bots there. And by the way, maybe the bot will talk to other bots that when you're saying, Hey, I'm this thing I'm working on. So I think there's a whole stack of amplifications that will lead to some radically new things. Put aside money income. Let's say someone comes to me. They say, Tyler, spend a year talking to this AI and then you grade it. And at the end of it all, there'll be a Tyler Cowan bot. It'll be excellent. Should I do that? Yes. And how long should I spend doing that? Well, I wouldn't spend a huge amount of time right now because I think that technology will get a whole lot better for it over the next x years. But I'd start playing with it now and then I would start looking at where that's useful. Like I've thought about like where would be the things like we do podcasts, right? It'd be fun to actually have a read bot that would be available on social media and everything else. And people had a question about an amplification of some part of the discussion that you and I are having and the read bot could answer. That'd be great. So at some point soon investing in the Tyler bot, the read bot, that's the new way to have influence. For sure. What will replace homework in our schools? Oral exams, projects where you work with GPT, homework done in class. Well, I think you'll have all of those, but I think you'll still have homework. Look part of it. Like even looking at we're going to have a whole bunch of tools that help teachers help grade a bunch of other stuff. But even if you took chat GB today and say I was wanting to teach a class on Jane Austin and her influence on, you know, English painting. Then what I could do is I could as a teacher go to chat GBT other a lot construct 10 essays with my own prompts, hand them out to the students and say, these are D pluses. Right. Yeah, you know, so go use the tools and make it better. Right. As a way of doing it. And then that's the way that you could still have homework and they're using chat GBT that and it causes them to be much better at thinking about like what makes a great essay? How would I suppose just the mechanics of of all the writing it like what could I innovate on the structure? Could I have a bold or new contrarian point and argue in an interesting way? Like that kind of provocation is is a way that we get again human application. So I actually don't think homework is going away, although I do think all the things you mentioned will also be growing too. 10 years from now will people be worse writers in which other ways might we be stupider? Well, like, for example, I think people are probably by default were spellers just because we had spelling things and and, you know, we have spelling spelling, but you learn correct spelling from the spell check. Right. Yeah. But if GPT can write for you as well as you can write, you may never learn to write from scratch and say you and I both have done for many years. Well, okay. So so I think yes, that's probably a little harder. Like I can't hand write essays now as well as I can type them because, you know, handwriting is mostly signature or brief notes. But just as you were mentioning, the quality of my being able to understand what a great essay and producing it and everything else a great writing is goes up because of it that even when I'm using it for like, how do I write in a an email response to Tyler about his provocative comment about art? Maybe I'll use GPT to help me do it. But then I got a much better understanding of what a higher level of quality in that discourse is. What percentage of the American population do you think will take an Amish kind of approach to GPT models and the new AI 1% 10% whether they should or not, but they just won't do it won't let their kids do it. Probably in the well, it'll probably start a little higher. It'll probably start it kind of like call it 20 25% and it will probably shrink to five. What's the killer app for multimodal GPT? What's it going to actually do for people that they'll be thrilled about above and beyond what it's doing now? The expression of creativity, like, you know, one of the things that if you haven't gotten you will get, but like I'm doing a chapter in impromptu, which is like a Star Trek plot involving the person. So like if we haven't sent you the Tyler Cowan Star Trek plot yet, you're going to get it. And and I think that kind of like people want to express themselves in these arenas and the multimodal models will give them the superpowers of expression, which will also mean a lot of content generation. We'll also mean, you know, amplification of how we communicate discourse, what I send you as a present, how we go on a vacation or go to a conference together. Anyway, as you know, there's no sharing function in the main current LLMs. Is this genius? Is this? Oh, there are just no product people in these companies. Does this mean oh, Med is going to own everything sooner or later because they know how to do sharing? How do you think about that absence of a sharing function? I think it's coming. And I think it's coming and you think that will dominate the market. Yes, I think but I think there will also be many providers of AIs just like I think there will be a number of different chatbot agents in that play different character roles in your life, just like different people play roles in your life. How will gaming evolve? Well, it's been funny that it's evolved more slowly than I expected. But like, like just like I was discussing the art like think about games that like like have virtual worlds, whether they're exploration or combats or or strategic games or wherever where the world is invented as you go, you know, in that in that in that format and and and and NPCs will be super interesting, even in multiplayer games and like where they the game itself is is itself a like a new frontier. How many games will you yourself create using AI? I don't believe that number is well, okay, I guess I'm making a prediction. At least a thousand. Is the future open source or proprietary or in what in what ratios? Huh, I'm not sure that the ratios that I think both will be amplified. But what's the right way to think about the division? Well, I think proprietary is kind of a classic set of things. One is the kind of safety issues we're talking about before, but also like certain things will be like access to very large compute access to certain sorts of customers or business models, you know, kind of business position on those things will tend to lock in certain kind of proprietary things. On the other hand, I think there will be a bunch of open access as well as open source side of things. I think one of the things about open AI and and what it's doing with Microsoft is I think like people will be broadly provisioned in this stuff. So I think there will be a ton of open access to this, which is part of the reason why I think the, you know, like, like I think the, you know, it's beyond the sky is the limit relative to, you know, what kinds of expression and creativity we're going to see. What's the chance that we're in a new AI winter? And the next 10 years we'll just spend developing applications of what we have. That will be amazing, but the sequel to GPT-4 won't be that much better. The chance that we won't have over at least five years really interesting progress is rounds to zero. Because even if the raw capabilities for say, say it was like, you know, you're an oracle from the future and you tell me that the real scale curve kind of limited at GPT-4 and there's not much coming. There's still a bunch of tuning. There's still a bunch of product specialization. There's still a bunch of, you know, making a good for teachers and students, making good for doctors, making good for... But that's applications, right? Like being breakthrough. But even though, well... A GPT-4 feels like witchcraft compared to two. Yes. And maybe we'll just have 10 years where nothing feels like witchcraft compared to four. Oh, so what's the chance that there's no more astounding? Very low. I mean, look at, for example, what alpha folded with protein folding. And I think that application of this stuff and tuning it within law, like particular kinds of like biological sciences and other things. I think there's line of sight to more things. What's the most important binding constraint preventing us from being at that next stage right now? Is it quality of data, degree of data, the system itself, just raw horsepower? What is it? I think it's compute, then talent, then data. And when you say compute, you mean we just need to buy more GPUs and spend more money and it may or may not be worth it for companies to do that. How you organize the compute? Like there's a whole thing about when you're in the lead, you know how to build the computers. You know which configurations are working or not, how to run them, what the training runs is. It isn't just take these algorithms, apply it. There's a whole bunch that's part of where it gets the talent as well. Is the, you know, there's a bunch of people who have had failed large models using the open source techniques and so forth because there's talent and know how and learning and all of that. That's that's part of it. That's kind of between the compute and talent. It's both elements. Anyway, so there's a whole stack of things. 10 years from now, how important will the price of electricity be? Well, I think the price of electricity is always important. If we get fusion and I think it is good to be working on, especially carbon. Fusion will be slow even if you're optimistic. Right. Yes. A hundred percent, which is one of the reasons why, you know, I think along with you, I'm a huge advocate of nuclear fission as well. Right. I think obviously we should be doing everything possible on solar and a bunch of others. But I think electricity, like the A.I. Revolution is the the the cognitive industrial revolution is powered by electricity. And so super important. So it's like the Dune World with spice, but now it's electricity. Yes. And and the electricity is part of what both creates and helps you see the future just like spice. What did you think of the Dune movie, by the way? You must have seen it. Spectacular. Like almost like a painting. Like one of the scenes made me think of Caravaggio. I think you know exactly which scene given the art and and I'm impatient for the November 23rd release of Part 2. Given GPT models, which philosopher has most risen in importance in your eyes? Some people say Wittgenstein. I don't think it's obvious. I think I said Wittgenstein earlier because I and in fireside chatbots I brought in Wittgenstein and language games. Perse maybe. Who who who else is good? Now I happen to have read Wittgenstein at Oxford. So I I have I can comment in some depth but the question about language and language games and forms of life and how these large language models might mirror human forms of life because they're trained on human language is is super interesting questions like Wittgenstein. Other good language philosophers. I think are are interesting. That doesn't as I mean philosophy of language philosophers all kind of analytic philosophy but the like Gareth Evans kind of theories of reference as applied to how you're thinking about this kind of stuff is super interesting. You know like Christopher Peacock's concept work is I think interesting anyway. So so there's there's some there's some there's some there's a whole range of stuff and then also the philosophy like all the neuroscience stuff you know applied in with the large language models I think is very interesting as well. And what in science fiction do you feel as risen the most in status for you? Oh for me not in the world. We don't know yet. But yes we don't know you think oh this was really important you know Werner Vinger well this is going to be seen maybe like a strange answer to you. But I've been re-reading David Brinn's uplift series very carefully because the theory of how should we create other kinds of intelligences and what should that theory be and what should be our shepherding and governance function and symbiosis is a question that we have to think about over time and he kind of went straight at this in a biological sense rather than but that's you know it's the same thing just different substrate with the uplift series so I've I've been I've recently re-read the entire uplift series. When you can talk to a dolphin what will you want to ask it? What do you what are like almost like one of the things I love is these words that are in some languages and not others you know whether it's like Como Rebi or or or or Buntu or you know like all these everything's because that's these kind of different lines of human experiences. It would almost be like what are the words in dolphin that aren't in our language and can you try to through a through an ocean darkly try to share what it what it is that concept that you're gesturing at to learn it that would be the question I would most be interested in answering and and by the way I'm funding a thing called the Earth Species Project which is an early effort to try to get at this which will be the easiest animal for us to learn how to talk to in essence it will it be dolphins chimpanzees chimps chimps yeah and we share not just a bunch of biology but kind of a world that we're navigating but we sort of talk to them already exactly the gorillas that's dolphins were what are you saying yeah but you could actually tape the dolphins apply an LLM to it yes right that should work yeah well that's what Earth Species Project is working on and what do you think that costs we don't know I mean we're just we're trying to get the taping and we're trying to trying to we're trying to see what have you learned about friendship from working with LLMs I would say I haven't learned anything particular about friendship yet although the way that I got to impromptu was as you know I've been working for decades on on one or more books about friendship and so I started using GBD-4 as a as a personal assistant for research assistant on this which is I think one good thing that everyone should use these things on and you know in depth of doing it and so I started asking questions that I've always been wanting to do research on like like how would you compare and contrast Chinese conception of friendship with Western conception of friendship and you know that question wasn't very good but the question on Menchus and give me some understanding of Menchus or Lao Tzu and their applications of theory of friendship was what was interesting it's it's kind of the prompt directing I actually prefer directing versus engineering as a thing but the prompt directing is is getting good research assistance What have you most learned about yourself working with LLMs? Well, I think this is one of the things we always learn like for example you know five, ten years ago we were we were beating the drum on the Turing test and now we sailed past the Turing test and almost no one's really talked about it and we learned like oh actually in fact what were were unique is not the Turing test that sees other things and so what I would say is and I'm interested in creating Pi and inflection among others but I'm interested in creating AIs that ask good questions but I'd say currently anybody who's good at asking questions is much better than GPT-4 right? like GPT-4 is generation of questions is not that good I suspect you tried to generate questions No, I didn't absolutely did not right well but for most guests I do yes well but but the but the GPT-4 suggestions are kind of vanilla they're just not that interesting it's like asked Tyler about economics and and the what's going to happen in on you know a macroeconomics in the next decade not an interesting question the Wittgenstein question that's an interesting question you know like right and so that I don't think there's anything structurally it doesn't but like I tried to get it to generate a whole bunch of questions and complete failure but I think you get better questions from it if you don't ask it what should I ask Tyler what should I ask Reed if you come up with what's the weirdest question you can imagine concerning both science fiction novels and LLMs I think you'll get a better question well we'll try it my guess is it still won't be as interesting as the question you or I could generate in a minute or two on the same prompt how will human aspiration change due to LLMs hopefully get greatly amplified that's that's everything I'm trying to like like our aspirations should be very ambitious and I think LLMs and AI should if anything increase them one thing I've learned is I never get sick of watching the magic at first I thought well for how long will I still get kicks from this? Yes but it's still running right hasn't hasn't asymptoted for me yes exactly what will happen to social trust as a result of LLMs go up go down how will it change well unfortunately probably initially go down in everything from deep fakes and a bunch of uncertainty and you know we're already kind of you know because humans trusting humans is another issue that we have I'm hopeful that maybe we can begin to figure out some ways to have shared discourse shared discovery of truth and I would love to have LLM work helping and amplifying that and that's part of what I'm you know doing at Stanford with human centered AI and other places because it's it's it's really important to solve thinking globally which group or groups in the world will be the biggest gainers access and use of AI stuff will be amplifying and so therefore people who are using it will be gaining so the access to it and the amplification I think will really matter but say I gain from it but I'm doing fine I just can't gain that much no matter how good it is mm-hmm my theory is people say in Kenya where there's a lot of internet access that's good enough they'll have some cheaper open source model and the young Kenyans who are very smart and ambitious will gain enormous amounts and the AI itself will send to it trusted intermediary information about their ability and they will in fact get phenomenal job offers from other places and they will gain the most now that might be wrong but that would be my answer so I think that's true although I think that's because the more that we have a good global connectivity the more we have a rise of talent from everywhere and AI added to that connectivity will exactly amplify that and I do think that the notion of like human amplification like the people who are best amplified or best connected into you know our global ecosystem and I think we all benefit from it is one of the things that you and I share about the joy of amplifying talent from everywhere is that actually in fact amplifying talent benefits all of us are the mediocre word sales the biggest losers yeah will Mark Andreessen go away happy so to speak funny I'd say the losers are people who are uncurious who want to live in the past who who don't care about learning the future in a broad base and you know like it's like we have a term for this Luddites right it's it's it's the you know we're we're having a you know AI you know Steve Jobs said that computers are the bicycles of the mind we now have with AI the steam engines of the mind should a co-authored book with an LLM have First Amendment protections and again you have such a book impromptu I'd say the LLM shouldn't have First Amendment but I think co-authors like I can own the First Amendment protection like it's what I say but it can always hire a co-author right for some nominal sum where the co-author adds a few words it's a co-authored output once you allow the co-authored work through the door anything can be co-authored and who knows who did how much of the work well I think that you're granting First Amendment rights to LLMs which may be I'm fine with but is that an implication well I don't think you have to grant the rights to them you have to have a person who is saying this is me I own this like I actually think but there'll be a company that hires such people known for their obedience yes to go along with what the LLM wants yeah and they'll pay the person a quarter the person will have three three words to the thing look can today you buy someone's First Amendment rights speeches you know like like right of free speech yes because you can you can pay them and give them the thing to say that's just a link but that doesn't necessarily mean the LLMs themselves have those those rights your background with LinkedIn which features of LLMs do you feel that's given you a better or deeper appreciation for with with LinkedIn well you're bringing a different conceptual matrix do everything including LLMs you've done LinkedIn for quite a while obviously key role in its creation and how does that make you see LLMs differently and I have my own hypothesis but I want to hear yours well so one of the things that I did when I was doing this is we've kicked off a product which is I believe live now at LinkedIn called Bizpedia which is trying to provide a in-depth wikipedia for all of information that professionals might need or anything like what are the different career paths what is the job skills like how how would I do this particular job better how do I learn it if I if I wanted to transition and get into it and it's again that human amplification so we couldn't we couldn't afford to do all that stuff but we could get the LLMs generate the baseline of it and then we can use the human network to amplify it and that's that was at least one kind of thing that I thought about with it it obviously also has real implications in search and matching like you know hey which people should meet each other or five I'm looking for someone to solve this particular business problem it could be hiring could be sales could be partnering could be information obviously there that all gets amplified my answer would be this there are uses of LinkedIn that might appear anodyne to a lot of snobby outside observers but are super useful to people who do them and I think LLMs will be the same so people in poorer countries they wanted to write a business plan for them the business plan will sound to Mackenzie like to please a lot of people who think they're better than that but in fact it will be super useful yeah I think that's true and I also think that again in the human amplification look I think it's like oh look you write the business plan I don't need to it's like well but but you adding to it will make it a lot better yes but I think also your LinkedIn background it makes you more sympathetic to a partial subscription model which may be the future for LLMs well it's definitely a future for sure and what percentage don't know could be 20 could be 80 do you think subscription is the economic future of LLMs for the next 10 years well I think it's definitely a future but by the way LLMs as has already been announced will be used to generate advertising you're allowed to use hindsight here but as a talent scout yourself how do you think of the strengths of Sam Altman in doing what he's done? look I think this is an amazing gift to the world by Sam and the entire team Sam I think assembles great people and helps them with high ambition I think that's one of the things that is you know under described about Sam I think that he also he doesn't try to make himself the hero role he catalyzes other people it's one of the reasons I think he is also one of the good people to be leading the kind of safety thing because unlike you know a set of people who tend to have Messiah complexes you know it's only safe if I bring it to you he goes and gets a number of people involved and and doing it I think that's another strength as part of it and I think he his his ability to think super big has been helpful here I mean he frequently thinks something is going to be here tomorrow where I disagree with him I don't think it's going to be here even he's younger than I am even in his lifetime but like that ambition is awesome open AI right now I think they have about three hundred and seventy five employees during the critical breakthrough period of course they had even fewer is that a new model of some kind or is it the old model but the it's the alliance with Microsoft that makes everything work mid-journey I've heard is like 11 or 12 employees which is crazy right yeah look I think one and Instagram when Greylock funded it was 13 employees right so it's it is a model of generally it's an amplification of the general software model where you can't have very small teams that produce things that are our comedian lovers that move the world now you do need in all of those cases massive compute infrastructure so like AWS existed for Instagram and so forth like it's so it's that kind of you so you you need that in order to make it happen but a small team of software people can can create amazing things how is higher education going to change and exactly who or what will do it well as you know higher education is very resistant to change it actually is believe it or not yes and so you know and yet it should be changing it should be reconceptualizing its way that it amplifies young people you know it in it launches them into the world and it should be providing LLMs that are tutors and helpful it should be having LLMs that are helping professors do research and communicate with each other you know like AI and doing all this stuff like it should be embracing all of that with full force and yet most of it is is is is I think ignoring what's currently happening what actually breaks in the system because of that who rebels well you know it's easy to read the tea leaves of the future in the past you know Michael Crow at ASU you know doing amazing work I think he will trailblaze Ben Nelson at Minerva we had him on our possible podcast like I think these folks will eventually get other people to say this is where the world's going and it's really good and so students will switch to the institutions that are doing a better job yes and you think that will and the network effects are not too strong to stop that no here's a general question quite removed from the world of AI I've discussed this with Patrick Collison a fair amount it seems to me that after World War two most of the Western world maybe all of it we simply stopped building beautiful neighborhoods there's plenty of beautiful individual buildings artworks music whatever but actual complete neighborhoods as a whole they're now basically boring in mediocre even if they're very pleasant to live in why did that change you know maybe you can challenge the premise if you want no I maybe look at it I I don't know if I would speculate it's because it's the general kind of industrialization that makes it you know hey figure out what what is the thing that is closest to what most people want and produce a lot more of that right maybe it's that but medieval towns in Europe they're beautiful there's a certain sameness to them but we admire the beauty all the more so it doesn't seem it seemed that it's sameness per se the lowering the aesthetic quality well it could be production costs right and that's part of the industrialization like it's like the now how do we how do we produce each one at a at a kind of lower marginal costs I would I would help that what we will see with like for example I was literally talking to someone last night who was creating a speakeasy for their house and what they did to work with their designers they went on to mid-journey and they created a whole bunch of different images and in the range of creativity like I hope that is what our future is and that's what I'm that's what I'm trying to beat the drum on to get us there what is it about our current culture in America putting aside politics but culture that concerns you most culture well I would say and obviously ties the politics a little bit but I I think a culture that says we should have civil discourse to get to reasoned arguments and information which obviously include science about what should be is the thing that is like what you know what kicked us off from the enlightenment and from the Renaissance and it's important to to keep that in our fundamental bones and genetics and we are straying from it in very very dangerous ways and it's not just on the crazy right stuff with you know election nihilism and all rest you obviously see that in woke ism and everything else too and I think that's where you know I think the the two sides of this both left and right would be surprise for me to say you're in this respect you both have the same disease and we need to be talking about like how do we reason our way to like kind of truth and understanding and that that super important it seems that a lot of mental health indicators have become worse in this country maybe all the more so for young people why is that well I don't fully know I do think that we certainly think the indicators get worse is it because kids are always connected to kind of like it's like a little bit more Lord of the flies and I was connected to other kids is it because they have the insecurities of seeing you know of being amplified like cyber bullying fought following you into the home is it because the technology is not built the right way to try to reinforce mental health I think we can do that like part of the thing is how do we help provide support like you can use a I to help provide support on this I think it's a good thing to do whatever it is it's an important thing for us to work on now we're sitting here in the suburbs in Menlo Park but will a I save the San Francisco taxine or is that just going to vanish because of poor governance well I think in many ways San Francisco is doing everything it can to to to to self-emolite on the taxine but there's a major triumph says of late right yeah they're in the city yeah they're not on sand hill road yes yeah but it's it's it's throughout the entire valley but yes open a I is amazing and I do think that there is network effects to out of all Silicon Valley and you know my advice to San Francisco just as my advice to many of things is is try to channel the stuff that's going on here to help all the rest like for example is like look don't try to resist the tech industry being in San Francisco try to channel it to helping with you know the various problems whether it's homelessness or crime or other kinds of things and and to try to help those problems because you can like example you could use cameras to help with a whole bunch of the crime problems 10 or 15 years ago it seems we had so many tech CEOs either in their 20s or possibly even teenagers seeing considerable success it doesn't seem we have people in that age range anymore like Sam Altman he's I think 38 maybe 37 so why are CEOs older now the more important ones what what's changed well look I think we will see some new additional young folks and look the history of the kind of status quo is is as CEOs tend to be older I think it's it's the younger CEOs that tend to be the the new startling companies I mean remember there's not just Sam Altman there's also Patrick Collison there's also Brian Chesky there's also those sorts of folks and they were CEOs when they were younger and I am confident there will be a new crop of them before too long but what if it's the case there's less low hanging fruit the abilities you need are more synthetic social networks are more important this will favor the 35 year olds rather than the 19 year olds could that possibly be true it's possibly true I mean there are different industrial cycles where you know you have to spend more time building up your position to get the capital credit to be entrepreneurial bold in charge etc that's definitely been cycles of that in history so I don't think that I don't think it's impossible but I do think it's a little bit like you were gesturing with small groups doing stuff with software because you can have small groups doing stuff with software you'll still have young CEOs young founders she or he will still be new blazing entrance into the into the world change leaders and the Bay Area as a whole you think that will remain as important as it's been yes categorically yes if there's a tech startup scene that is currently underrated in the world or in the US where would that be well I'll say something mildly provocative just because it's entertaining not Miami since there's this whole crew that's like Miami is the future and I I think the network effects of talent and everything else is is much more here in other places I think Austin is doing really interesting things I think New York's doing interesting things I think London's doing interesting things surprisingly I think there's interesting things in Paris and Berlin Sweden or yes or no Sweden yes you know obviously Spotify and a bunch of other stuff I mean they punch way above their population weight but since they're a small population they don't tend to have a lot of immigration I tend to think you need those to really get the the fly wheels going any hope for Poland plus Ukraine or you don't see it yet I hope for it I don't see it yet and obviously you know there's other difficulties that are impeding right now but it's say it's entered in Poland people from Russia and Ukraine they go to Poland Poland becomes a new center with talent basically from three nations were totally possible how much do you worry about low and declining fertility as a social problem for the West for East Asia well so one thing that I thought about writing an essay on maybe I still do it is it isn't oh god the robots are coming for our jobs it's oh god can the robots get here soon enough because when we get to like our whole system has been based upon the fact that we have a growing population so that the growing population can take care of the elderly if you don't have that you have a serious reorientation of our entire society I mean China's going to run in that in a huge way and so forth Japan's probably trailblazing you see a little bit with the care and robots and everything else and so I think that you know it's like we desperately need the amplification in order to you know not create a massive burden for our children if that trend continues but let's say we can afford it because of something like robots or AI doesn't that in a sense make the problem worse we feel less of an emergency so South Korea there is 0.8 just keep the clock on ticking eventually they basically don't have people left so how can that work out well for us and the fact that someone pays the bill for our collective extinguishing of the human condition doesn't reassure me I think in various ways we can cause I don't think you know obviously you can do the math and go to diminishing zero I think we will both do will do various forms of incentive stimulus but other things I think we can get it back to at least a replacement rate among other things we might say well look actually in fact being a a parent is a paid job just because we think that that's a an important thing as a society and we can afford that from from you know kind of the the productivity increases are getting from AI robotics so we use the robot surplus in essence to pay families for that to be the second or third job in the family yes exactly yeah and politically you think that will be super popular people hated or I think we could get to a place where it would be popular I think right now it would be considered to be science fiction and strange but if like a replacement rate keeps going down then I think people say oh no that they make sense and a lot of science fiction has come through yes no this reason you and I both love science fiction and trade recommendations you know on a regular basis as a mobs three laws how good were they I think they were really good although they were out of conceptualization for a target if I were to update them and it's a little bit like you know to to reveal my my nerdishness Giskards zeroeth law but the but I think that what you really want in it is to parallel almost a Buddhist sense of the importance of life and sentience and that that's the kind of thing that you want in the in if you're creating really autonomous intelligences I think they kind of the uncle Tom if it really isn't a totally autonomous being hence being careful but going into it you know a new form of robot slaves is is is perhaps you know not ultimately where humanity would want to be there's not enough stress in them I think and what the robots are obliged to believe so a robot is free to believe something crazy and then act on it yes that seems to me the biggest weakness of the laws at least what you see in the stories yeah and hence the alignment with human interests around like how do you how do you amplify the quality and value of life is I think a very good thing what's an underrated science fiction novel that maybe our listeners readers don't know about well there's lots you know another one that I've been rereading recently because I think it's it's it's good fun but also raises good questions in a simple fun format is Martha Wells Murderbot series I think it in itself does not really address those questions very directly but it raises them in good ways like persons and being a thing versus a person and so forth within a kind of a classic sci-fi romp I've been rereading Ursula Le Guin the dispossessed and I'm amazed how anti utopian and almost right wing it is yeah utopian society is a kind of nightmare yes well and look it's partially because we need to have diversity in the human species like how do we that's it's part of how do we enable as much diversity while like you know not allowing it's it's that the diverse of creative expression part of like like freedom of speech and why you know it is valuable is that diversity of human of craziness that also creates genius what's a game you've been playing more of lately and why I haven't really had a lot of time to play games because this the AI stuff is occupying a total amount of time I have a stack of games without their shrink wrap taken off that I'm hoping to get to I find the AI stuff it's totally erect my calendar I had a year planned out that I could just do a whole bunch of other things and now sort of every day you have to keep up with AI you have to learn is like this doesn't work anymore throw up my hands and I feel a bit behind on everything yes although by the way there will be a chat bot for that that's good what's a non obvious problem we should be worrying about more well I mean I think because so much of the discourse in the press around is around the macro things is you know AI in the hand of hands of bad human actors and there's a range of bad human actors so I think that's really important I think also the question around like people tend to go oh wait a minute the people who have the AI will be amplified it's like how do we get that AI like the most natural thing is to pursue where the money is well how do we get AI in the hands of like lower income students in school districts and all the rest to make sure that it's there and provision it's one of things that love about open AI the accessibility of chat GBD but like how do we get as broadly and enabled as we can as I think another important one let's say you're advising a small but tech advanced nation Singapore Israel would be two options would you tell them they should build their own LLMs it will cost them a lot per capita but they'll have their own LLMs I don't think they need to but I think they should get involved and perhaps work with the providers of LLMs to make sure that there are LLMs that fit their needs that doesn't necessarily need to be that they need to build their own but I say hey we need to make sure that we have LLM provisioning for our companies and our industry and our citizens okay let's make sure that happened whether it's the you know we spend billions of dollars to build the one ourselves they could do that certainly no no nothing bad in doing that but they should make sure that their industries and their citizens are provisioned but say we have a strategic petroleum reserve for better or worse should Israel have a strategic GPU reserve don't nations such as the US get too much leverage over Israel if they're dependent on us from models and right now open AI is open but open AI can't control how our government right regulated our government might decide to use it as a foreign policy tool handed out to countries that cooperate deny it to countries that don't look I think it's it's an important gesture of the dependency but by the way once you have the LLM then the and it's you know as it were on the soil governed by your laws the ability of the US to do that is much less that's the reason why it's like well I feel somebody was that we need to build the computers and I do think like you know depth of compute is a strategic advantage as it's an important important thing to take a look at and you may want to say hey we got to make sure there's a certain amount of compute that's onshore that that that is then aligned with our interest of our country our society our industries etc but also by the way if you just are there's training and there's running and if you have the the the AI models and you're running them and you have that sufficiently within your study your strategic dependency would be a lot less so I think you have to plot that strategy with some care but I do think it's an important strategy to be paying attention to and I think for example we as the US part of thing I like about you know the kind of the the world order the US is yeah we sometimes do stuff that throws too much stuff too much to our advantage that's a problem but we also try to provision a lot like we we you know try to raise the rest of the world and I think we should continue to do that as you know in EU law there's a right to be forgotten but that is arguably inconsistent with current LLMs you can force a new training run by saying well you've got to take me out of the current system but a new training run costs a lot of money and to have lone individuals raising their hand say oh the model has to forget me that's just not going to work yep so legally where do you think the EU will end up on all this well I think if the there's a smart EU dummy you and which one is up to them smart EU is say look what we need to do is we need to be dealing with the function of what are the kind of culture and society so we say well we want to make sure that a that a that a that these AI tools have the right judiciousness in being asked about individuals that's our particular culture so we say okay you have to at least have a meta bot that could interrupt the query and interrupt the query in some way and that's that's what that would be if if that was our expression of culture and being tech forward and how you do it the bad one to say no no you you look a little bit like what the Italians are doing with chat GBD well you can't do each other he was like by the way you're disadvantaging your entire society and all of your citizens you're being let heights with the loom and the steam engine and so you know be innovative into the future yes with European values and European concerns and so forth but it's the steering into the future versus trying to enshrine the past and that would be less smart EU and you're not sure which of those will happen I hope they pick the smart one I try as much as I can I think European values and and and insights in the future something I learned from and value I want them to contribute that positively into the future and we'll chat GPT through VPNs just dominate China at least for some number of years or will they somehow force people away from doing that because it's you're getting Western Anglosphere information all the time right including about Tiananmen China everything else they have demonstrated with what they call the golden shield that they are committed to creating a alternative internet and alternative series so I think they will be able to do that and they even have it on the you know the control through VPN so you know we as Westerners go to China and we say look it looks works just fine with you and that's because they're not they're allowing our VPNs through where as local VPNs they they squash them but I've seen some numbers maybe they're not reliable but they seem to indicate there are more chat GPT users in China than in the US now they have a larger population but still that's a major effect that probably is happening now and do they just tolerate that and let everyone query it about you know Taiwan is a country or whatever they look in my view they should but I don't know what will they do and also what's more is I think they will be because these models have less ability to put kind of controls in them I think that will cause problems on even on short development and they'll be open source in China right once that's more of a thing yes so they'll just lose their attempt to sensor their own society or you think no no I think somehow triumph over everything they're very smart and they're very committed to the censorship so I think that they will I think it'll create create additional problems for them in so doing but I think they'll figure out how to do it before my last question just to repeat Reed's new book co-authored with GPT-4 is impromptu amplifying our humanity through AI a Wall Street Journal bestseller and finally last question Reed what will you do next? other than talk to dolphins yes like AI is going so fast there's a bunch of things that we didn't cover in impromptu so I actually think we will do another book and set a content around AI possibly within this calendar year which will be pretty amazing Reed Hoffman thank you very much thank you