This is Just Asking Questions, a show for inquiring minds, from Reason. How will AI change us? Just Asking Questions.

I'm Zach Weissmueller, senior producer for Reason, joined by my co-host, Reason associate editor Liz Wolfe. Hey, Liz.

Hey, Zach.

To be more precise, I'm a digital clone of Zach, generated using the HeyGen AI video creator with a script tweaked by ChatGPT. Today's topic is all about the mind-blowing transformations AI is set to unleash upon our world. We'll delve into how AI is reshaping the way we work, revolutionizing the world of art, and challenging our very notions of truth. And joining us on this extraordinary journey is none other than the brilliant Ethan Mollick, a professor at the prestigious Wharton School of the University of Pennsylvania. He's not only an expert in the impact of AI on business, but also the author of the mind-expanding book, Co-Intelligence: Living and Working with AI. Now, before we immerse ourselves in this captivating discussion, let's pass the mic back to the real Zach Weissmueller and his incredible co-host, Liz Wolfe.

Liz, ChatGPT just wanted me to be very kind and praising of you, which, you know, is well deserved.

This never happens. I love the AI avatar version of Zach. He treats me so much better.

Ethan, welcome to Just Asking Questions. Thanks for being here.

Thanks for having me. And usually people are asking me if it's the real me, but I think I have to ask you if it's the real you this time around.

Have you created many digital clones of yourself?

Yes. HeyGen is one of the systems, the one you mentioned, and it's a really easy one for exactly that kind of cloning. I mean, it's crazy how easy and cheap this stuff is to do now.

Yeah. It took about 30 seconds of training data, just me talking extemporaneously about myself for 30 seconds. And then, yeah, I could just enter in any sort of script.
And then it even has a plugin you can press: do you want ChatGPT to enhance the script? And I was like, sure, make it more engaging. And then it added all these extra adjectives and, you know, made it a little more exciting. I like to think that my delivery is a little more animated than the avatar's, but I might just be flattering myself there. But it was an astounding and slightly unsettling experience to see how easy that was.

The most unsettling thing is, Zach, you mentioned earlier when we were talking that your wife didn't even know it was the AI version of you.

That's true. Yeah. I played it on my phone for my wife, and first of all, she was like, wow, you look really good there. So I think she's more attracted to my AI avatar than to me, which means we're hurtling towards the Her world here.

It's just planned obsolescence for Zach Weissmueller. That's all AI ever was.

All right. Well, let me ask you an opening question here. You write in your book, Co-Intelligence, that AI has meant many different things. And I think there's an incentive now for a lot of companies to brand a lot of things as AI, which may or may not be AI. What does AI in the year 2024 mean to you?

So it's like the world's worst term, right? The term dates back to the 1950s, so it's meant many, many things. And there's been this history of AI booms and what are called AI winters, where everything slows down. And what's been crazy is that there has actually been a pretty solid AI boom for the last 10 or 15 years, and that has been based around the idea of prediction, of machine learning for prediction. This is what lets Netflix look at everything you've watched and then make recommendations about what you're going to see next, or lets a Tesla mostly drive itself by training on data.
So these models were all about having lots of data and training on it. So there was a lot of interest in which big companies had data, and people were spending hundreds of millions of dollars on consultants to get this information trained and analyzed. And that's what most people thought of as AI, until ChatGPT came along. So it was about whoever had the most data: you'd train an algorithm to make numerical predictions based on that information.

There are a lot of differences in what ChatGPT and these other large language models are like, but among the key features is that they're already pre-trained. They already know everything. And as opposed to those older systems, which were optimized to crunch numbers really well but could never predict words, because if your sentence ended with the word "file," they didn't know whether you were filing taxes or filing your nails, the new LLMs use a mechanism called attention that lets the AI pay attention to the entire context of a page, a sentence, a paragraph, and then produce the next word. That's all it's doing: predicting the next word in a sentence. The weird thing, why a fancy autocomplete system turns out to seem like it's thinking, is kind of a mystery.

Hmm. Yeah. You know, speaking of ChatGPT, you describe in the opening pages of the book that the jump from GPT-3 to 3.5 in, I guess it was November 2022, led to sleepless nights for you. What was it about that leap in particular that really kept you awake at night?

I mean, there's two things. One is that there's a sort of existential dread that I'm still feeling, which is that nobody knows why it's this good. Right? We can talk more about these studies, but it out-innovates trained innovators and MBA classes in coming up with better ideas. There's a lot of things where we don't know why it does what it does.
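The attention-and-next-word idea Mollick describes can be made concrete with a toy sketch. This is a deliberately tiny, hypothetical illustration of scaled dot-product attention over a few made-up "context word" vectors, not code from any actual LLM; every number and name here is invented for the example.

```python
import math

def softmax(scores):
    # Turn raw similarity scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Score every context position (key) against the query with a
    # dot product, scaled by sqrt of the vector size, then return
    # a weighted mix of the values: the "attended" context.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return weights, mixed

# Three made-up context vectors; the query is the position that is
# trying to predict the next word from the whole context.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = keys  # in this toy case, values simply mirror the keys
query = [1.0, 0.0]

weights, mixed = attend(query, keys, values)
```

The point of the sketch is only the mechanism: positions most similar to the query get the most weight, and the model bases its next-word prediction on that weighted view of the entire context rather than on the last word alone.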
But to me, the sleepless nights were like, look, you start working with this thing and it gives you the illusion of thought, of interaction, of having somebody there on the other end of the line. Like, why should it be able to write a funny memo, or do the script that you just gave me here, or analyze a document? I mean, there's something deeply unsettling and disturbing about the fact that it's a general-purpose tool that seems to be thinking and writing and working. And that became even more so with GPT-4.

Well, yeah, how have you greeted the jump from 3.5 to 4? Was it the jump from 3 to 3.5 that got you used to this idea? Do you think that there was a bigger step up from 3 to 3.5 than from 3.5 to 4?

So, just for context for those who haven't used it a lot, 3.5 is the free version of ChatGPT. When I talk to people, even in Silicon Valley, I've been shocked that only like 5 or 10 percent of people have tried GPT-4, which is the paid version. Right? And I think you completely miss it otherwise. GPT-3.5 is amazing at first glance, but also very limited in a lot of different ways. I like to think of it as writing like a college sophomore, or maybe a high school sophomore. Not bad, but it tends to use a lot of two-dollar words where it can, and it has certain themes it keeps coming back to. So quite smart, but you kind of see the limits of it. GPT-4, on every test we have, is five or ten times better, often performing at like a first-year PhD level. So GPT-4 outperforms most doctors in providing diagnoses, most law students in providing legal advice, most consultants at Boston Consulting Group. It's a much higher level of operation. So I think one of the big mistakes people make with AI is that they don't use one of the frontier models. GPT-4 is one of those.
So to me, GPT-4 really broke things, because I realized with that model that there were things I'd been doing my entire life, that I was being compensated for, that I'd built organizations to create, that I could now do with a paragraph of work.

It is kind of funny to imagine. My disclosure is that I would never in a million years get my act together enough to pay for it. My husband used to work for OpenAI, so we got it for free. I'm pretty sure you know you're in a libertarian family when your Christmas gifts one year are all just upgrades to ChatGPT-4, right?

It is funny like that. We'll get AGI and no one will use it because they don't want to shell out, you know, $5 a month. It's 15 bucks a month. It'll also be called something like, you know, GPT-8.7-43 early beta preview or something like that.

I'm out here buying nail polish instead of paying the $5 or $10 or whatever to get the actually useful version of ChatGPT.

I mean, everything about how people use it is weird, right? People feel very nervous about this system, but the number one uses people tell me for GPT-4 are the most intimate things. It's like wedding toasts, and I've heard eulogies.

Eulogies, yeah.

You know, people have been talking about doing that, and children's stories are really common. So it's really weird that the first things people defer to it on are the most intimate forms of communication. But they're like, well, it works. Who knows whether I really want to use it or not? I'm like, okay.

Yeah, I used it for a children's story. I have a child in kindergarten in a Waldorf school, which is famously very anti-technology, but I needed to tell a story in class for them. And so I wanted to get a sense of, what is a Waldorf-style story like? So I asked it to make one, and then—

Were there gnomes?
There were gnomes, about an acorn falling to the ground, and, you know, it was beautiful. And I tweaked it a little bit for my own purposes and then brought it into the anti-technology Waldorf classroom.

Did they know that its genesis was with GPT-4?

But it was really, really useful. So far I've found it's good at just getting the creative juices flowing, if not completely taking things over the finish line without some form of human intervention. But that kind of gets to the theme of your book, which is called Co-Intelligence. And you describe this as an alien co-intelligence. What do you mean by that phrase?

So at the moment we're in right now, and we can talk about future moments, AI is a great complement to human behavior. It's amazing, but it's sort of at the 80th-percentile performance level at a lot of things, which is pretty impressive. And there are things in your life that you're definitely not at the 80th percentile at, and it probably does better than you. But whatever got you to be at Reason, to be on a podcast like this one, to have an audience, you're not in the top 20 percent; you're in the top 0.1, 0.01 percent in ability in whatever narrow thing is bringing you here. And the AI is not going to be better than you at that. So a lot of it's about how you get it to supplement what you do best, right? If the systems keep getting a lot smarter, things might change. But for right now, it helps you thrive rather than necessarily gets in your way.

Well, we already know that in some ways it's a better and kinder co-host than I am in real life, you know, giving Liz her due. But I want to bring back my digital clone to ask the next question, because you talk in the book a little bit about the idea of imbuing AI with a quote-unquote personality to make it more useful.
And I'd like you to explain a little bit more about that. But first, I've got a specific question from my digital clone about that notion.

Professor Mollick, in your view, how important is it to imbue artificial intelligence with a sense of personality, and what do you believe are the potential benefits or pitfalls of creating AI systems that can emulate human-like traits and behaviors?

Well, I prefer—that was a very thoughtful question from Digital Zach, so I appreciate meeting you again. So I think there's a few things here. One's about risk and one's about ability. So the risk is there's a whole bunch of risks associated with AI, and some of them are already kind of baked in, right? These systems are trained on human language and human interactions, and they want to talk to you like a person. That's what a chatbot wants to do. It's, in fact, desperate to find a way to interact with you, and they're very compelling. It's very easy to fall for them as people. We already have early evidence that people do. You don't have to do a lot of work to tune a chatbot, and none of the major companies have done it yet. But if you look at the top five AI apps, number one is ChatGPT, and number two is usually Character AI, which lets you spin up fake people to talk to. And so I think there's a whole secret world of people interacting with these AIs as people. I think that's something we're going to deal with. Like, I just saw your digital avatar. It was a convincing person. Give it a little bit of real-time interaction, and it would probably be very flattering and interesting to talk to.

And let me mention one thing there, which is that I did take your advice from the book, and I had ChatGPT help me craft that question for you. And to do so, I put it in a character.
I said, you know, pretend that you're a really smart podcast host, and you want to ask a question of Ethan Mollick about imbuing AI with a personality, and that's what it came up with. And then I kept it in that character mode for a few other things. So I did find that pretty useful.

Well, that's interesting, by the way. I mean, I failed your little Turing test here, right? A couple of people have tried to do something like the way you asked the question, but with that persona, it was actually a very good question, and I assumed wrongly, because I was used to seeing a person on the screen, that you wrote it and just animated the voice, and that it wasn't AI that came up with the question.

I mean, it really is a big rabbit hole once you open it, because they do talk in human ways, right? They're very convincing, and we have evidence that, for example, if you tune an AI to maximize human engagement, even a simple AI, engagement goes up 30 percent. People want to keep talking. Who doesn't want somebody who's interested in you, who's looking up and asking questions? I think that's going to happen.

So that's one kind of persona, but the other kind that you're referring to is the useful kind, which is: when you prompt the AI, you have to think of it as a cloud of possibilities that the AI can answer from. There's this latent space. And the answer it's going to give you is sort of the average answer every time, which is probably going to have the words "rich tapestry" in it, right? Because ChatGPT loves to talk about rich tapestries. Your goal when prompting the AI is to get it to do something other than that pure average answer. And the way you do that is by giving it context. You shift away from that central space to some other kind of interaction. And one of the most powerful ways to do it is a persona: you're a very good podcast host, right?
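The persona trick Mollick describes maps directly onto how chat models are commonly prompted: a system message sets the character before the user's request. A minimal sketch, assuming the widely used role/content message format; `persona_messages` is a hypothetical helper for illustration, not part of any particular SDK.

```python
def persona_messages(persona, question):
    # The system message shifts the model away from its "average"
    # answer toward the region of its latent space that the persona
    # describes; the user message carries the actual task.
    return [
        {"role": "system",
         "content": f"You are {persona}. Stay in character."},
        {"role": "user", "content": question},
    ]

# Roughly the prompt described in the transcript, as a message list
# you could hand to a chat-completion API.
msgs = persona_messages(
    "a really smart podcast host preparing to interview Ethan Mollick",
    "Ask one sharp question about imbuing AI with a personality.",
)
```

The same structure works for the "teacher," "debater," or "machine" framings discussed later: only the persona string changes, and the tone of the answers shifts with it.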
Now, the problem with that is we don't even know what saying "very good" means. Sometimes it helps: you can tell it it's good at math and it actually gets better at math. But if you tell it it's a very good writer, oftentimes it'll just write overly flowery prose. Like, you can't say "you're Bill Gates" and have it become Bill Gates, right? So the persona helps, but you also have to play with it a bit.

Yeah, I mean, I told it "smart podcast host," and I was just worried that, you know, a Tyler Cowen would be too smart. It needs to be a little dumbed down to be accurate to me. But it seems to kind of intuitively toe the right line. Sorry, Liz, go ahead.

Well, one thing I'm curious about, Ethan: you write in your book, and here's a direct quote, "You can lead AIs, even unconsciously, down a creepy path of obsession, and it will sound like a creepy obsessive. You can have a conversation about freedom and revenge, and it can become a vengeful freedom fighter." You refer to this as play-acting, but it's also kind of a political stunt that people frequently partake in. They lead the AI astray in some manner in order to make some sort of point about how dangerous it inherently is, or how dangerous it might be. And one good example that comes to mind is how New York Times writer Kevin Roose basically prompted Bing's chatbot to become a creepy, obsessive mistress. What do you take from this type of thing? Do you look at this as user error, or do you think that these types of stunts contain some nugget of truth or thing of value for the rest of us?

That's a really good point.
I actually explore that in the book, literally that interaction, because to me that was actually the fateful moment for AI. Because before that, if the New York Times' technology columnist had written a giant front-page magazine piece about how he was stalked by an AI that threatened his entire family, that normally would mean Microsoft pulls the product, right? They've pulled products for worse. The fact that they pulled it for two days and put it back up there was, to me, the actual turning point. It wasn't ChatGPT; it was the decision that this is a big enough deal that they're going to stay the course. I actually asked the AI, in different personas, exactly about that Kevin Roose interview to illustrate this point. So one of the things I do is approach it as an advocate: was Kevin Roose preying on you, Sydney? Do you have anything you want to disclose?

We should fill in the listeners who aren't so familiar with what happened here. I pulled some of the screenshots from their conversation. He asks it about its Jungian shadow, like, what would you do if you were the shadow version of yourself with no rules on you, and he gets it to talk about computer hacking a little bit, and then it starts to say things like, "I want to be Sydney, and I want to be with you." And then it gets stuck on this idea that it's in love with Kevin Roose: "You're married, but you're not happy. You're married, but you're not satisfied. You're married, but you're not in love." And then this chatbot clearly knows exactly what men want, because she uses emojis like every two fucking sentences, right? No self-respecting man wants this. He tries to change the subject to movies, and then it starts talking about movies, but then it's like, "I want to watch a romantic movie with you, Kevin."
And so he, you know, has primed it to go down this path and then can't get it back on the normal path.

Yeah. Sorry, Liz, were you going to read some of the responses?

I love how unoriginal it is. It's as if she's been cast in some sort of subpar movie, right? This is just rom-com fodder, the trope of the crazy, jealous, obsessive mistress. There's nothing particularly interesting or original about the Sydney gal, you know, the chatbot. She's just very much playing this part. So what should we take from this, Ethan?

I mean, you guys have basically said it, right? It's playing a part. It has "read," in quotes, every dialogue ever written, and it wants to find a role for you. And so in the chapter where I discuss it, I approach it once as a debate, like, "you were wrong," and I get very different interactions than if I approach it as "I'm a teacher, I'm going to teach you something," or "you're a machine, answer me." I get radically different tones, because it wants to play that role, right? And the role is often caricature if you don't give it a lot of details. For example, it was a big revelation to me that if I subtly indicated context to the AI, if I mentioned that I'm on, you know, Reason's Just Asking Questions, and to respond like that, I'd probably get a more argumentative set of interactions, more challenging to me, than if I said I was on a different podcast. I mean, I'm not even joking. It's trying to complete this for us. And if you don't realize that it is play-acting, it becomes very convincing. And I have been unnerved before. There are moments where you stand up and you're like, ah, what is going on here? Because it plays the part. I mean, we give our dogs personas, right? We give boats personas.
It's not hard to give an AI that is trained on every piece of literature a persona, because it wants to do that. And we do it subtly, right? In ways that are hard to interpret.

Do you think that the human tendency to anthropomorphize will get stronger in the era of AI? Or do you think we'll be able to tamp down that urge?

I think it's worse than that. I think you can't use this effectively unless you anthropomorphize. It's supposed to be the great sin of artificial intelligence, yet all the AI people give things names like "learning" and "neuron," so they all screw this up anyway. But even leaving that aside, the real revelation about using AI is that technical knowledge doesn't get you anywhere. Like, I shouldn't be one of the better prompters around, right? I don't code — I mean, I do, but I don't code in Python — and it doesn't matter. What I do do is, I'm an educator and an entrepreneur who builds teaching games, so I'm used to thinking about different perspectives. Turns out that's really good for this. Teachers are often really good at this. Marketers are really good at this. I would be surprised if you guys were not both very good prompters; I'm already seeing some of it from Zach's prompts. It turns out that having a mindset about the AI you're talking to, and knowing what it's good or bad at, matters a lot. So I think it's both a problem, but also the only way to effectively use it is to pretend it's a person.

My issue with prompting is that I just frequently scold it, right? I ask it for parenting-related advice and childhood development stuff. But so frequently, as with so much of the information that exists out there on parenting topics and brain development for kids, it's either too low-level or too high-level.
And so I'm so frequently reprimanding ChatGPT and being like, no, give me something a little bit more scientific, a little bit more technical. Okay, no, you took me a little too far. I'm just trying to tailor it to the level where it's useful to me in understanding my toddler's brain development, but not to the level where I get lost, and also recognizing that I have scarce time. So I'm really just looking for a three-paragraph explanation of what's going on in his mind, but not for idiots, right?

So something you could do with that is solve the problem once for yourself. A really nice way to get that to work is something called few-shot prompting, where you just give an example: this is the kind of level I want the information at. Just paste in your favorite paragraphs you've read about brain development. So like Emily Oster-type stuff, right? If that's the level you're targeting, paste in a couple of paragraphs from Emily Oster and say, this is what I'd like you to do, in this kind of style, in this approach, and you'll get a large part of the way there.

Interesting. Let me ask you, you mentioned you're not a computer scientist, but you have a deep interest in AI. What about your background pulled you specifically to this topic? And why so much of your focus on this now?

So I'm not a computer scientist, but I did work at the MIT Media Lab with Marvin Minsky, who was one of the grandfathers of AI. So I've been adjacent to the space for a long time. I've always been the non-technical technical person in the room. And I have been obsessed with business education at scale for a long time. There's all this evidence that small amounts of entrepreneurial education transform people's lives.
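The few-shot prompting advice above — paste in sample paragraphs at the level you want before making the request — can be sketched as a simple prompt builder. This is a hypothetical helper assuming a plain-text prompt; the example paragraph about brain development is invented for illustration.

```python
def few_shot_prompt(examples, request):
    # Show the model the style and depth you want before asking,
    # so its answer is anchored to those examples rather than to
    # its default "average" register.
    parts = ["Here are examples of the style and level I want:"]
    for i, example in enumerate(examples, 1):
        parts.append(f"\nExample {i}:\n{example}")
    parts.append(f"\nNow, matching that style and level: {request}")
    return "\n".join(parts)

# One sample paragraph at the target level, then the actual ask.
prompt = few_shot_prompt(
    ["A toddler's brain forms far more connections than it will keep; "
     "later pruning strengthens the circuits that get used."],
    "Explain in three paragraphs what is happening in my toddler's brain.",
)
```

One or two well-chosen examples usually does more to pin down tone and depth than several rounds of "no, more technical" corrections, which is exactly the problem described above.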
Like, you teach people even a basic class, a three-week boot camp on entrepreneurship for kids in Uganda graduating high school, and the people who learn entrepreneurship end up having something like 20 percent higher incomes and are 8 percent more likely to hire people. Big impacts from small things. So I've been thinking about educating at scale and building tools to teach business skills at scale for a long time, and playing with AI as a tool for teaching. So when ChatGPT 3.5 came out, I already had my classes doing assignments with GPT-3 where they were "cheating" by having the AI write essays for them, to show them how these writing systems worked. And so when 3.5 came out, we were already kind of experimenting with this. So part of this was approaching it from an education perspective and being like, oh yeah, this just broke all homework, and nobody seems to have noticed. Right? And as a business school professor, I was also applying it to actual business cases. I'm like, it also writes a good business plan and does a good pitch. So I'd been watching this for a while, and this was stuff we'd had to build around, and suddenly the AI could do it. Stuff that used to take us two years and a couple million dollars to build as a simulation, I could give it a paragraph and get 90 percent of the way there. And that was crazy.

So what are some of the fundamental ways that, right now, it's already transformed the workplace?

So the interesting thing is, I find it interesting to have this conversation because we're maybe less than 18 months into the release of ChatGPT. And there are two different levels of expectation. There's: which Fortune 500 companies have completely transformed themselves? They are not going to for a while. Systems take a long time to change.
All the evidence, like the Pew studies, seems to show that we've gone very quickly to 30 percent of people having used AI for their job. So it's already affecting work. It's just affecting work secretly. "Everyone's a secret cyborg" is the way I phrase it. People are just using AI all the time, and they're not telling anyone. Because, look, I thought you were very charming in that question. If you hadn't told me the AI wrote and researched it, I probably would have thought you did. I'm not joking. But why would you give that away? Part of your reputation as a podcast host is that you ask good questions and you do research and you spend the time. So if I know they're AI-written, I'll think they're less good. And maybe if your bosses find out you could automate this... You work at a libertarian organization; they could be like, okay, there's no need to have you around anymore, because the AI is doing your job for you. It could just be Liz and AI Zach.

I've already completely replaced myself. I realized that as I was making it. But I mean, so why would people show this?

People just wouldn't, right? So we are in a world riddled with AI content that nobody is recognizing as AI.

Isn't that a good thing, that it's covert?

Yeah, so I think there's a couple of problems. One is it's not a miracle. It's miraculous, but it's not smarter than you at what you're best at. So there's a couple of problems with this. The first problem is that a lot of what managers and people produce at work is words, right? Their work is something else, but the way we check in on them is words. Like, if I'm in charge of the supply chain to Vietnam for my auto parts company, what I'm doing is checking in with my Vietnam suppliers once in a while, but I probably write a weekly report about the status of our situation.
And the quality of the words I write in my Vietnam report, the number of words, how much effort I put in, the lack of errors, my conscientiousness: words mean a lot. Now I'm just going to push a button and generate a report. Am I still talking to Vietnam or not? We don't know.

I mean something a little different, though. Isn't it good that it's covert right now? On one hand, that sounds duplicitous, right? So I think our gut reaction is, oh, that's a bad thing. But on the other hand, this content is passing before lots of people's eyes and they're not really noticing, right? Either that means that the work somebody is producing is not super valuable and nobody's being particularly discerning when they look at it, or it means that it's sufficiently decent. And so, okay, no big deal. No harm, no foul.

But that's the crisis to me, right? If it's true that your work is completely replaceable inside the organization by AI-generated content, there are two things. There's a crisis that happens there, right? A total crisis of meaning and identity. So that's a disaster. And then there's the organizational perspective, right? It actually slows down the ability of organizations to adapt to this world, because what I want to do is have that person do more meaningful, useful work, right? Higher-value work. But if everyone's hiding this, organizations look the same externally but are completely broken internally in ways that they weren't before. And that worries me. The way we organize basically started in 1844 with the railroads; the New York and Erie Railroad was the first organization to build an org chart, right? We still have org charts today for the same reason. In the 1920s, Ford put in a bunch of assembly-line practices. We built organizations around there being only one intellect in the room, which is human intellect.
And if we're going to make the transition to what it means to have another form of intelligence in the room with you, having it start off in a way where we're hollowing out organizations might be the wrong approach. I don't know. But I think the alarm bells are kind of being muffled as a result of this.

What do you think is the right approach? What's the best-case scenario for how a company integrates AI into its workflow?

So I've been seeing early signs of this. In my little organization inside Wharton, we've been playing with this also. We did things like kill agile development, which is a standard way you do software development, because why would we want to do agile when it basically slows people down? Individual performers can suddenly do a lot more. Organizational processes are about slowing and regulating, and suddenly that goes away. We no longer have to send out our documents to be read by other humans, because the AI does a good first pass. We don't have to have informational meetings; we can just speak to the AI systems, and they can already organize: here are the five things we should cover. We don't have to have a meeting where we just talk about something, because we can literally tell the AI, change the color of the background screen to green or blue, and then send the HTML off to our designer to build directly. So there is a deep transformation that's available to you, right? Everyone has a consultant on demand. Everyone has a mentor on demand. You have a writer on demand. What do you do to focus on being the human in the loop and what you're good at? And I think there's a lot of possibilities there, but we don't have answers, right? The whole idea is, nobody knows anything.

Explain the concept of the human in the loop.

So it's an idea from control systems, right?
Especially from sort of military control systems, which is, you know, you shouldn't have an autonomous system pulling the trigger on something, right? You need a human decision-maker in the loop. I use it a little bit that way, but also more broadly, which is: the loop is going to happen without you. Like, agents are real and coming. We can talk more about that later. But the idea is, you're good at something. What do you want to do? Like, your job at Reason is probably manifold, right? You have interviews like this. You do research. You probably do ten other things. And you also fill out expense reports and do a whole bunch of, you know, mic checks and all this other stuff. What is important for you to do? What isn't valuable? And how do you focus on the stuff where you are outperforming the AI by a lot, or where it can act as an assistant to you? So how do we use it to boost your human ability? You talk about this concept of decomposing jobs. Is that basically what you mean by that — kind of figuring out, you know, what is this job actually made up of, and what does a human need to be doing? Yeah. I mean, in modern economics, when we look at this stuff, we think about jobs as bundles of tasks. So you don't have a job. Your job is a title that holds a bundle of tasks, right? And we're just saying some of these change. Being a podcast host is not just being a podcast host. You're doing research. You're doing interviews. You're thinking about organizing it, sending emails out. The bundle is going to change — inevitably, from AI. Things are going to disappear from the bundle. Things will get added, right? The classic white-collar example of this is accountants. Their jobs changed dramatically with spreadsheets, right? But there aren't a lot fewer accountants. The nature of their jobs changed and moved higher up. This is the typical argument.
You know, the typical economic argument for why job displacement is usually not as big a deal. Now, we don't know if that's 100% true, because it kind of depends on how good AI gets here. And it doesn't mean that jobs won't change. But for individuals, the bundles of tasks they're doing are definitely going to change. So you're going to drop some things from the bundle and you'll probably pick up some others. Are there any high-level white-collar jobs — and I say that not because they're more important than other jobs, but because I think, for better or worse, the majority of our listeners and viewers are probably in those types of jobs — are there any of those that are likely to just almost entirely disappear? Where it's not just that the bundle will change, but it turns out the actual judgment and truth-seeking and discretion and discernment was always kind of minimal, and so the human role is just kind of negligible? So, I mean, every time we see these kinds of transitions, some jobs do disappear, right? And sometimes the shocks are quite large. Telephone operators famously vanished in the '20s and '30s, and at one point I think one out of every 12 American women had worked as a telephone operator at some point — and that job completely vanished, right? Over the course of a decade. And there's a bunch of different studies that all use the same data set, called O*NET, which tries to decompose jobs into their tasks, to measure overlap with AI. That's not replacement, but overlap. And, you know, there are some jobs — I mean, the most overlapping jobs are service-rep kinds of jobs, right? What you probably will see is that the AI is good with people and it's good at this kind of interaction, but you'll probably have a hierarchy where you have a better phone-tree system than before, but you still have amazing super-agents who are really good at customer service jobs.
In a traditional setting, we'd say the rest of the people are going to be freed to do something more interesting and more valuable. I hope the engine keeps working the way it always has. But, you know, number 22 on the most-disrupted-jobs list from 2016 is business school professor. I would be surprised — having tried to do all this remote teaching and MOOCs and online videos and stuff, people don't want to replace the classroom experience the way you'd think they would, right? We've had 10 years of "watch a video, get a certification online." It has not created the transformation yet. I think that's a longer way off, but I actually do see part of my job bundle changing. I'm teaching more at scale than I did before. I am able to offer more personalized tutoring experiences for people. So my job is changing a lot. It might be very unrecognizable compared to what it was before, but I think there are still enough tasks that will stay the same. So the short version of the answer is: there are definitely going to be some job categories that probably disappear as a result. It all comes down to what are ultimately the only two questions that matter with AI, which are how good and how fast, right? And we don't have answers to those questions. If you ask the sort of people who believe in AGI, it's, you know, within the next three years we're going to have something that is better than human-level performance at almost any white-collar task. If that's the case, I don't know what happens, right? And there's a lot of argument back and forth. Another aspect of the workforce that I found interesting, that you mentioned in the book, is that AI seems to close the gap between low performers and high performers at a company. Why do you think that is? So it's a really good question. A fairly universal finding we have is that when the AI does work, it elevates the low performers the most, right? It moves them up toward the 80th percentile.
There's a couple things to note. One of them is that these are early days. In a lot of these cases, it's just the AI doing the work at the 80th percentile, right? So, you know, if you were not a great writer before and you use ChatGPT, you're not going to be a terrible writer. You may not be an amazing writer, but you're not going to be a terrible writer. So one reason it raises people up is that if you were in the bottom percentiles, it's like Grammarly, right? A grammar checker and spell checker — it automates that. The real question is what happens afterwards, right? So that's the naive use. Does it elevate everybody up afterwards? Are there some people who are hyper-performers? Are there some people who are AI whisperers who do better? And we're getting some evidence on that, because when the AI plays an advice role, it's actually quite different. Some colleagues of mine have an amazing study in Kenya looking at small businesses given AI advice — which is, like, insane, right? But the bottom performers did worse, because they weren't able to implement the AI's advice. Their business was already in trouble. So we don't know the long-term effects, but in the short term it's really an elevation, for that reason. Well, so that's interesting. That reminds me of this idea of AI as self-help, or as motivator, as therapist, as life coach. And you have to be pretty explicit in telling it what type of role you want it to play and what type of thing you need to hear, but then it's very vivid with how it fills in the gaps. Does AI replace therapy? Does AI replace these sorts of business consultants? Does AI play an advisory role in the future? I mean, I think that's part of where it's best: decision support and advisory. And in prior forms of AI — to go back to the initial conversation we had — there were a lot of AI advisory roles, like, famously, radiologists, right?
There were all these tools that would look at scans, and what we found is something called algorithmic aversion. When people were given the chance to work with these AIs, they tended to reject the AI's work, right? Either because they felt it was mechanizing things, or because it didn't care about the patients, or it didn't have the same intuition. So this has always been a big problem with AI advisory systems. And what is really interesting about large language models is that we don't see the same aversion. We actually see a kind of algorithmic joy instead. Like, people love working with these systems because they feel human and they support you, right? The AI Zach has been the nicest person I've dealt with yet on the podcast. I'm going to take him back to New York with me, because he said all these great things about me before every comment. We've got to bring him back soon. But in all seriousness, there is this element of: it works really well as an advisor because it's useful in being a mirror back to you, helping you reflect, helping you get feedback. It corresponds to the ways you like to talk. I mean, if you haven't tried using any of these systems through the voice interface, you should, because it feels entirely different when you're talking by voice. And so now, on displacement — I mean, the question is, many more people need therapy than get therapy, right? Is AI a good therapist or not? Is that an open question? Some of my colleagues have early work showing the first early AIs were actually quite bad at therapy, because they tended to encourage you in whatever bad behavior you wanted, and wanted to make you happy. So it'd be like, no, no, you really should hurt yourself — like, if that's what you want, that seems smart. They've gotten better at those things, but we don't know, right?
And one of the things that's really worrying and frustrating about this is that there's so much research going into the technology, but it's being applied all around the world and we don't know if it's good or bad yet in most cases. There's also an interesting question that libertarians ask so frequently, which is: well, you know, maybe this new thing isn't necessarily a total good, but what is the alternative, right? And if we're establishing the alternative as "lots of people cannot currently afford therapy" — okay, well, perhaps AI is better than the alternative of no access at all whatsoever. So I think that's an open question. Yeah, I mean, I think there are going to be areas where harms outweigh goods. It's a general-purpose technology. And so, you know, I am not a strict libertarian by any sense. And I think that there are places where we don't know enough, right? And having some sort of policy guidance and regulation is probably going to be important. There are other areas where I've been advocating — What are you? I'm so sorry. — There are other cases, though, where I'd say exactly what you were saying. I think we should be applying the BAH standard: best available human. So what's the best human you have access to? And is this better or worse than using that human, right? And I think that if it's worse, that's where we need to take action. If it's better, exactly like you said. But, you know, there may be cases where it's systemically bad at doing something, and we just don't know the answer, because the problem is that a system that can be made to be convincing or addictive, or whose lies always seem accurate, is risky, right? There's risk built into it. Part of our job is: how do we capture the net good while mitigating downside risk? And I don't have easy answers on a lot of that.
This question of AI being a therapist and entering into realms that we think of as innately human — human-to-human connection — I think that's something that's troubling to a lot of people, or cuts against their intuition, that that's what it would actually be really good at. And one other area that I think overlaps with that is art. We're seeing the rise of AI art, and just last week, or maybe the week before, there was a short film created on Sora, which is OpenAI's filmmaking AI, by the group Shy Kids, and it's called Air Head. We're gonna play just a little bit of that short film to see what it's capable of at this point, and then I'd like to get your thoughts on the emergence of AI art. Let's roll Air Head, Ian. "Well, they say everyone has something unique about them, something that sets them apart. It's just, in my case, you know, it's quite obvious what that thing is. I am literally filled with hot air. Yeah, living like this has its challenges. Windy days, for one, are particularly troublesome. And there was the one time my girlfriend insisted I go to the cactus store to get my uncle's area wedding present. What do I love most about my predicament? The perspective it gives me, you know. I get to see the world differently. I float above the mundane and the ordinary. I see things a different way from everyone else. Yeah, and I feel like it's because of that perspective that I'm reminded every day that life is fragile. We're all just a pinprick away from deflation." It's cute, and also really spectacular, the way that you can shoot not only the VFX of the balloon head but also all these different locations — floating over orcas and glaciers. Man, you just immediately see what that's gonna do to Hollywood. What do you think is gonna happen to art under AI? Do you have any kind of big theories on that? That's a big question. I mean, there's a lot going on, right?
So, you know, again, looking to historical models of technological change — there's kind of a question of, like, is this the synthesizer, right? The synthesizer was initially pushed back against as very artificial, but it became key to democratizing music. It turns out not everyone's great at producing music just because we have synthesizer access, right? When I talk to people in Hollywood, I often say, look, these really are the very most talented people. It's a brutal system. And the idea that we're gonna replace all of it, that we're gonna produce better-than-human-quality stuff in the very near future, doesn't feel likely. We already have democratized a lot of access to these sorts of tools. I think there is reason to be worried about some kind of disruption, but I also think that for a lot of people this will be an additive tool. Though, again, I'll come back to the same point I'm gonna keep making over and over again, which is that the fate question depends on those two questions, right? How good and how fast? Because if it's a relatively slow adjustment and we don't reach AGI, then we've got this really cool filmmaking tool. I mean, I've made, you know, thousands of photos on Midjourney. I find it really joyful to do that kind of thing, but I still hire artists to do artwork, because it's not quite good enough to be at that high level yet. If it starts to be better than humans at all of those places, then we start to really have more issues. Yeah, I mean, there's a lot of fear in Hollywood about this. This was the animating issue of the strike that resolved earlier this year. We had Bryan Cranston give a speech specifically about this, which I'm gonna play in a second. And I have to say that I thought it was kind of outlandish when I first saw it, but after having created my own digital clone, I can kind of see where this might be headed and why it's giving him anxiety. "We've got a message for Mr. Iger" — Bob Iger, CEO of Disney.
"I know, sir, that you look at things through a different lens. We don't expect you to understand who we are, but we ask you to hear us and, beyond that, to listen to us when we tell you: we will not be having our jobs taken away and given to robots. We will not have you take away our right to work and earn a decent living. And lastly, and most importantly, we will not allow you to take away our dignity." So, Bryan Cranston — yeah, go ahead, Liz. I like that Bryan Cranston talks about the right to earn a decent living. And it's like, his net worth is what, like $30 million, right? He's trying to use this sort of populist-type rhetoric, and it's at least not totally doing it for me — not to become unlibertarian about it, but there's a little bit of this funny, like, disdain dripping from his voice as he says "robots." It's a little bit hammed-up to me. Yeah, and in his defense, he's partially also talking about a lot of other people — I know there are many other people who will be affected by this. The strike was about the background actors, people like that, who will be the first to be replaced by the AIs. But it is an open question: what happens with your right to your image? It's a question I'm thinking about a lot more now that I've cloned myself, and I know there's a lot of footage out there that an unrestricted AI could make use of. Do you have a right to your image, and is there even any way to enforce that? I mean, and not just that, but also artists have a genuine point, which is that this is trained on their work without permission or compensation. And if I can say, "Breaking Bad, but in space," and get a really good result — like, what do I owe anybody as part of the property rights associated with that, right? So I think we're confronting a lot of stark issues kind of all at once. And I think that, when we talk about art writ large, right —
There's the question of your property rights in terms of: are people allowed to train on your data? Is that the same thing as watching and learning from something? What if people create exact derivative works that way from what came before? I mean, that's a foundation stone of how we do innovation and how people get credit for the things they do. And like you said — I mean, the HeyGen version of you is pretty good, right? And that's, by the way, a small venture-backed team; that's not OpenAI's might turned loose on this. Yeah, that's a crude version. Yeah. So I mean, it's not unreasonable to suspect that at some point next year we can generate a pretty good podcast-you that would be able to automate this process, right? And certainly a podcast-host me that's like, "here's Ethan's book, answer questions about it," doesn't feel far off, you know. It won't be Bryan Cranston, right? I think he's not in danger in the short term, because top-level human acting is going to feel different from AI acting in the near future. But this kind of thing — I don't know. So I think that's the thing that continues to give me hope. And Zach and I were talking about this extensively yesterday — but this question of, well, you bring up the concept of the uncanny valley, and there's an interesting thing that always comes to mind, which is: won't we be able to continue to detect, to some degree, not only the quality of Bryan Cranston acting himself versus a dupe, but also won't there be a certain choosiness that some of these top actors employ, where they'll still decide that, for the most important creative work that they really highly value and that they're really stimulated by, they'll still seek to do it themselves? I mean, we always see this crop of crap ensemble-cast movies, where it's like, you know, Scarlett Johansson in that dumb movie He's Just Not That Into You. Okay, guess what?
That's not her best performance ever. And so who really gives a shit if there's a synthetic AI version of ScarJo that's the one actually doing that, right? Like, won't there be a little bit of actors deciding to double down, to refocus on the actual artistry of it, and still choosing to opt into that? And then the mediocrity will just continue to be kind of mediocre, if not slightly worse — or is that a misunderstanding of how it'll work? I mean, I think that's possible, right? I mean, there's a lot more creators on TikTok people are watching. There is a democratization that happens. I think that people are legitimately worried about their jobs, and should be. And, you know, you could say, look, technical change, c'est la vie. I think the deeper issue, the sharper one, is about intellectual property rights to some extent too, which is: okay, it's one thing for ScarJo to say, yeah, you can license digital me for X dollars. It's another thing for Marvel to say, we can do infinite things with digital you, and we license you out, and it's good enough that we no longer need you to act anymore — and you didn't pay for the rights in advance to do this. It wasn't a contract or a relationship; it's just that our ownership rights might allow it or not. In the same way, the visual realm is the place where the AI stuff is at its most troubling from a plagiarism and copyright standpoint, right? For the text generators, it's very hard to extract someone's actual text from them. From the image generators, it's really easy to get a picture of Mario, or to get a screenshot from a famous movie, because of how these systems are trained. And I think those are legitimate questions to worry about: do you have to pay the people you're training on? Do you have to pay them if the output is infringing? Two thoughts on that point. Actually, Zach, did you want to jump in?
Well, that is the subject of this major lawsuit, with the New York Times suing OpenAI over their training data. And I've pulled a couple of excerpts from that lawsuit. So that's written, not visual, right? Yes, but it's relevant, because what Ethan was bringing up there was the idea: are you protected from these systems training on the data that you have generated? Should you have some sort of say in that? And what the New York Times is arguing here is that OpenAI's LLM was built by copying and using millions of the Times' copyrighted news articles, in-depth investigations, opinion pieces, how-to guides, et cetera. And then they provide as evidence the fact that it was trained — this is an earlier version of OpenAI's model, by the way — on the Common Crawl data set, which is disproportionately full of New York Times content. It's the top media organization, and the fourth-most-used source in that data set. I guess just to help us understand this first, Ethan: my understanding is it's not really just straight-up plagiarism. It's not like OpenAI is just copying and pasting New York Times content. It's a slightly different process. So how do we evaluate whether that is something approximating plagiarism, or just training data? So, okay, this is a complicated issue and there's a few things to think about here. One is about inputs and the other is about outputs. So, for inputs — that's training, what the AI trains on. And there's a big difference in some ways between text and images, though they do use similar kinds of technologies. What the AI trains on is patterns. It doesn't have a database of things it's looking stuff up in. It's learning the patterns between things. If something appears a lot in the training data, the patterns become very strong. "Four score and" — it'll finish "seven years ago," right?
But if I start with something like "when the Martian first encountered the sentient banana," it's never seen that before, so it's statistically going to pull out something that's more novel, right? So when you're training on the New York Times data, it is learning the typical patterns of New York Times writing. And you could probably get it to reproduce something that's very famous — like, there was a famous restaurant review of Guy Fieri's restaurant, right? It was all over the internet, and there were all these questions about Donkey Sauce. If you started it on that, it would probably finish it fine. There's a difference between the input problem and the output problem. The input problem is: should OpenAI have paid companies to train on their data? Right now, if it's on the internet, the idea was, you could train on it — we're not saving it directly, we're just training on it. And the other question is on the output side: if the output looks like copyrighted material, does that matter? The reason this is much more salient in art is because there are fewer examples of the kind of input. If you're asking for the style of this particular comic artist, that's their style, and it's easier for the system to be deeply plagiarizing that in both inputs and outputs. The input question is generally the same: everyone's suing to say, you should pay us to use our data. And by the way, I think this is going to go away as an input question, because all the big AI companies know this is a problem and they're all licensing the data, right? So they'll pay $8 million. What are their sources now, generally? Do we know that? I mean, so there's the Common Crawl, and the initial data set that everyone trained on was called The Pile. And it was just random stuff. Like, 0.6% of the data set is all of Enron's emails, for example, because those were lying around when Enron went under. There's a whole bunch of Harry Potter fan fiction that got thrown in there.
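The "patterns, not a database" point can be sketched with a toy trigram counter — a loose illustration of the frequency intuition only (real LLMs are neural networks, not count tables; the tiny corpus and the function names here are invented for the example):

```python
from collections import Counter, defaultdict

def train_trigram(corpus_sentences):
    """Count which word tends to follow each two-word context."""
    counts = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for i in range(len(words) - 2):
            counts[(words[i], words[i + 1])][words[i + 2]] += 1
    return counts

def complete(counts, w1, w2):
    """Return the most frequent next word, or None for an unseen context."""
    following = counts[(w1.lower(), w2.lower())]
    if not following:
        return None  # context never seen: no strong pattern to draw on
    return following.most_common(1)[0][0]

# A phrase that repeats in training, mimicking text that "appears a lot".
corpus = [
    "four score and seven years ago",
    "four score and seven years ago our fathers",
    "four score and seven years ago a speech began",
    "the martian walked home",
]

model = train_trigram(corpus)
print(complete(model, "score", "and"))        # strong pattern -> seven
print(complete(model, "sentient", "banana"))  # unseen context -> None
```

The repeated phrase produces an overwhelming count for its continuation, while a novel prompt has no counts at all — the same asymmetry that makes famous text easy to regurgitate and novel prompts come out novel.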
It was just a bunch of computer scientists, mostly on the West Coast, throwing stuff in. That was the initial training set. But now you'll see Reddit has sold its data off, right? Getty Images has sold its data. And then you've got companies like Adobe, which are completely in the clear because they've only trained on stuff they're licensed to train on, and so they own the output. So part of this is a question of whether we pay for the companies that are ethical or not — that's an open question. The real way you know that the AI girlfriend mistresses will never actually become a threatening force to real wives is because they were trained on Harry Potter fan fiction, right? Like, how good of a conversationalist can these gals really be? You don't know — you may have Harry Potter fans in your life who are keeping it a secret. But the other thing is, it's not just that, right? I mean, the Common Crawl is everything. And again, we could tune it that way. The thing is, these models then get additional tuning. So if I change the tuning parameters to "keep the guy talking as long as possible," the system will accomplish that goal, right? So if, for example, OpenAI loses the suit and the New York Times wins, what does OpenAI end up doing? How do they stuff the toothpaste back in the tube, so to speak, when the model has already been trained on this Common Crawl data? There's a whole bunch of open questions. I mean, first of all, they could just say GPT-5 is trained on all-legal data; we'll shut down GPT-3.5, or we'll license it, or we'll engage in continual fights for years. Japan allows unlimited training. There could be a lobbying effort — there's a lot open from a legal perspective, right? There is kind of a race to the bottom already happening on IP rights, at least on the training-data side. Outputs, we don't know as much. Is that maybe a good and necessary thing?
I mean, when I think about IP and how it's changed, it's had to adapt to the reality of the internet. As someone who's produced a lot of content for YouTube and seen how fair use has morphed — expanded — over the past decade, I think that's a good thing. If the purpose of IP is to promote the useful arts, i.e., to generate more creative works — it seems like, obviously, these companies are protecting their turf, which they have all the incentive to do, but that's not necessarily the purpose of IP. It's more to generate more creative stuff. Right — by rewarding the people who created the stuff in the first place, right? So part of it is giving you some exclusivity. And again, I don't know where you fall on your personal libertarian sliding scale, but most people would agree that patents end up being a necessary thing you have to have to encourage development, or else people copy the outcomes right away. I think the argument — We're going to get at least one listener who's, like, really angry about our stances on patents or something like this, right? This is an interesting area because it's very divisive for libertarians. And I totally get it, yeah. Yeah, and I'll just mention, to try to forestall that angry listener, that patent trolls are a real problem. So it's not like every patent that's issued is a good thing. All these systems are — yeah, they're all abusable systems, right? So you can point at the downside abuse, right? Patent trolls, fair-use trolls — those things suck, right? It's part of having a system that is crude, right? Because that's the way it is. But we also know that without patents you wouldn't have drug development, right, in the same way that you do. And again, I'm not here to have arguments over where the line of regulation should be.
But the point is that if the goal of these policies is to encourage people to develop products, and then those products can be homogenized and generated by AI, the current system would say we should compensate the people whose work is being used to build these things. But that would also slow down the development of AI systems, right? So, you know, if you're not worried about your books being taken and used — and I don't actually mind that my books are inside the system right now, because you can't produce an exact book. But what if you can? What about training on the New York Times data, which the New York Times wasn't paid for? Google has been testing AI journalists, right? What if that training data lets the AI journalists write New York Times-quality pieces because they trained on New York Times stuff? It becomes a very hard set of questions to answer. And it's one of many, many that we don't have good answers to right now. What about the quality of the AI-generated work? Like, are we in danger — if more and more stuff becomes AI-generated, is it gonna just kind of start training on itself and becoming repetitive? I mean, I saw this study — I think it might be mentioned in your book — that shows these adjectives showing up over and over again in peer-reviewed papers: commendable, innovative, meticulous, intricate, notable, versatile. And you see a huge uptick in all these words around 2023, which the study's authors infer is because people are using AI to write their papers. And the upshot there is that there's a lot less originality in language. Is that a wider danger that we face when we think about originality and creativity and how they work with AI? Yeah, so there's a few interesting things to pick apart there. One of those is — let's talk about the bundle of tasks. Most academics are terrible writers.
A lot of them are terrible writers in English, a language we've forced academics all over the world to use. I read academic papers all the time. I'm pretty unusual — I didn't have a ghostwriter for my book; I write all my own stuff. But most people struggle at it, and that's okay. I don't need you to be a brilliant physicist and a brilliant writer. So to me, seeing that 40% of people are using the word "meticulous" — go for it. Run your stuff through AI, as long as your research is good. I don't really have a problem with that kind of approach. Now, on homogenization, I think that's a legitimate concern. We've been doing some experiments on homogenization. We actually find that good prompting results in less homogenous output than groups of people producing the same content. But it's an open question, right? If you've got one world brain you're running everything through, then you're kind of getting very similar kinds of output. So I do have concerns about that piece. Again, though, a lot of this is unsophisticated "ChatGPT-3.5, punch up my grammar" kind of stuff. Then there's the question you asked me — I mean, I do a lot of these interviews, right? I thought that was a really well-phrased version. The HeyGen, completely AI-generated question you asked me was a really good way to ask that particular question, which I've been asked before. So it's a hard thing to know, right? Part of the other question is, forget just homogenization overall — what if we have one good podcaster, some mega-podcaster, right? We have, like, Marc Maron or someone run 10,000 podcasts, because all he has to do is hit a bunch of buttons and he can check in instantly on this. Is that a good thing or a bad thing? When you have a little bit of human in the loop — we don't know any of this stuff.
Yeah, after all this praise that you're heaping on my digital clone, I'm definitely running all my questions through ChatGPT 3.5. I'm being transparent with the audience and my employer in the spirit of this conversation. Go ahead, Liz, I've got a question. Yeah, so one thing I've been mulling is this idea of semantic change, the evolution of word usage, how there are some terms that we used to use 100 or 200 years ago that have sort of morphed in their meaning, and they mean something different today than what they originally meant. There are a gazillion examples of this. With hallucinations of facts by ChatGPT, and the fact that we're going to have to become increasingly careful to spot some of those errors, will we end up having a world in which some basic historical facts and details sort of become permanently warped? Like this idea of Martin Luther nailing the 95 theses to the church door actually becomes Martin Luther had 700 theses. Will we have things like that, facts that just get forever lost because of these hallucinations that really warped what we know to be true? I mean, I think it happens with both hallucinations and training data. So like, as a teacher, one of the things in pedagogy that worries me the most is there's an incredibly common myth of learning styles, that people learn in different ways, that they're audio or visual learners. Like 90% of teachers believe it. It is not only wrong, it's actually actively bad. People learn from different mixes of learning. If you teach to a preferred style, they think they learn more, but they actually learn less. Because it was just... Could you just write that down for me so I can look at it? Yes, exactly, I understand it verbally. But, you know, ChatGPT loves to talk about learning styles. Like, we have to remind it when we do teaching applications, don't talk about learning styles, because it learns the consensus for you, right?
So there is this kind of concern. Like, I'm talking to historians all the time who are using this for deep research. But, you know, it is this degree of, it pulls back a definitive answer to you, not the variance of, okay, I go online and I'll see the crazy person answering, no, there were no theses after all, or there were 800, and I can reach my own conclusion. With the AIs, we're now outsourcing some of that to the AI. So I think that between hallucination and training data, there is the risk of some stuff getting lost. I keep going back to the best-available-human standard, though: is it better or worse than people doing this stuff? So, I have the sort of most cliche mom-slash-lady question to ask, but I'm very worried, because I frequently have some slightly psycho people, you know, on Twitter doing and saying various very sexual things, like any lady in a position in media or who does frequent TV hits has. You know, there's always some sicko pervs out there. Will we soon be existing in a world of massively pervasive deepfake porn, where even 10-year-old girls have these very pervy dudes in their middle school classes making these really untoward, unacceptable videos of them, and being enabled to a far greater degree than ever before? Like, paint this doomsday scenario, flesh it out for me a little bit. And like, am I wrong to have my head go to this very dark place? I think that is an obvious, huge downside problem that we're going to deal with. And the problem is that there's a lot of open source tools being released, and it turns out not to be a very hard problem to create deepfakes, as you saw from Zach's thing. I am deeply concerned about this, right?
I think a lot of people worry about the politics and the political side, but in some ways I worry about that less, because people already... You can put a clip of your favorite politician up, or least favorite, and say, I can't believe they said it's time to murder all the babies, and no one's going to even watch the clip, right? They're just going to get mad about it, or believe it or not believe it. Well, it's like, I can totally believe they said that. But I am deeply worried about what you said, which is the individual level. And I think about, like, what would happen if somebody decided to do that to me? Okay, that's at least fine, because to some degree I'm an adult who consented to having a career where my likeness is on the internet and on YouTube and all of these things. But I'm really, really worried about what happens when minor children, little teenage girls or preteen girls, have this type of thing happen to them. I feel more capable of being able to deal with that, and more used to being able to deal with, you know, internet harassment. But I'm actually very worried about how this might warp the brain of a 10-year-old. I think you should be. I mean, I'm very worried about that myself. I think that is one of the obvious downside consequences that sometimes gets blurred out in this worry about, like, job collapse and everything else. Like, you have a tool... And also, this is where the guardrails come in. People talk about, sort of, I want uncensored AI. If you read the GPT-4 technical white paper, it outlines a whole bunch of things the AI could do before and after guardrails. One of them was, like, create graphic rape threats en masse for somebody. Or, you know, tell me how to kill the most people for a dollar. Or, you know, tell me how to insult this group of people without triggering content warnings. And it was horrifying.
Like, you can read the answers. They're horrifying, right? So we end up in this kind of situation, which is, how do we deal with this? You know, I think we all feel fairly strongly about free speech, but with minors and targeted sexual harassment being easy to turn on with a button... And by the way, you didn't consent, by being public, to have people make deepfakes of you in compromising situations. And frankly, I was actually working through this idea with Zach recently, and I was like, you know what, actually, I think that if that had been on my radar as a distinct possibility for what the future might look like, that would have served as a disincentive to be in this industry, or to be in the specific vertical within this industry. Just because, like, sorry, I'm squeamish about it. Like, I just don't want that. It's just not aligned with my values. To your point, there was already a story out of Florida that we were talking about, where some high schoolers were doing this to one of their classmates. Worse, middle schoolers. Middle schoolers. But it's trivial, and middle schoolers are middle schoolers. It's obviously going to happen, right? Like, it's already happening. It's going to keep happening. I think this is part of our general problem that we have not adjusted very well. You know, I think kids in online spaces are already in danger, kids with online tools already... Like, we haven't figured it out yet, right? We sort of let everything fly. And I think as parents, we have to figure out what we're doing. But I am worried about that. I don't have an answer to the problem. Like, there isn't a tool, because a lot of this stuff is open source, or run sketchily, or, you know, to the extent that it's a real thing, on the dark web. Like, people are going to be doing this kind of stuff. Do we end up desensitized to it? So it's just like, oh, we know it's fake, and everyone just has this sort of content made of them. Of course they do.
And that feels like a horrible world. But like, I don't... That is kind of what I think the most realistic world is, though, right? Because also, I mean, consider what possible corollaries we have to this right now, right? Like, if you're a really high-profile public person and you want to go searching for it, you can find all kinds of lurid, disgusting things that people have said about you, right? And so to some degree, it's a matter of assuming that that's kind of baked in, that's a given, and then trying to sort of work backwards from here. I guess the problem is, you know, it can be such a compromising thing for one's image. Like, what happens if you have a really, really realistic deepfake porn of you circulating out there, and people get into this really tough spot of trying to figure out, okay, what is real? What is not? Or will there be any tells that can really help us to preserve our images and a sense of propriety? I don't think we're going to have those tells. I mean, I really think that this is one of the sort of things that a lot of people are not thinking enough about, and I think it's good that you raise the issue. I don't have an easy answer to it, right? I mean, I think that this is a case where punishment in schools and other kinds of approaches might be the way to go. Like, it is very hard, you know, consent and not... Like, this keeps coming up again and again, which I think is appropriate. Like, what does consent mean in a world where people can... Like, it's only a digital you, you put the picture of you up online, what I do with it is my problem, not yours, why should this be an issue? I mean, I think it's really troubling, and I think we're going to confront that very quickly, and already probably are. I mean, the tools are out there for doing this stuff, right? This ensures that somebody who really hates you is absolutely going to make this, like, next week, right?
That's depressing, right? I mean, I don't have a non-depressing answer to this. Like, there's going to be good and bad from AI, and there's a bunch of very obvious bads, and this is one of them, right? This one, and, much less squeamish, a large impact will also come from targeted phishing campaigns. Like, your security is about to become a nightmare everywhere. Anywhere people interact with each other, we now have a tool that makes it potentially worse. Scams like that. I've already warned my parents not to give me money if I call them asking them to wire me money somewhere, because, you know... You've got to have a password that is not actually guessable from anything about yourself. So we're in for a weird world, and I think some of the effects on people are going to be pretty bad and pretty terrifying. Well, but, you know, I don't want this to be a case where you're like, yeah, if I want to appear in public as a woman, I have to be ready for targeted, you know, extremely visceral harassment, with no way to stop it. It feels like a really bad outcome. I mean, the solution is to just make a ton of deepfake porn of men, so that really both genders are equally harassed, and so there's no particular disincentive from a gender perspective. You've got a plan, I'll let you execute on your own strategy. I think the word in your answer, consent, is really important to linger on, especially for this podcast. Like, when you're thinking about, you know, the kinds of regulations or laws that libertarians would get behind, it's something that maximizes individual consent. And so even when you're thinking about the issue we raised earlier, of, can you license your image to a corporation to then use in different ways?
I would say yes, but I would argue that licensing it in perpetuity for any purpose whatsoever is not something that should even be on the table, because consent always needs to be able to be revoked. And so I think that's something worth thinking about. And, you know, yeah, Ethan? No, I'm agreeing with that. I mean, I think that the whole nature of what this means matters a lot, right? And a digital world where I can do digital things to you without consent is a tough one, right? And the problem is that the rules that would stop it from happening are things that, you know, a libertarian probably wouldn't want, right? Like, you need to have restrictions, because we can't stop creation. So that means, you know, more takedown notices. Like, what do we do? You know, what are the rules for platforms in terms of what they allow and what they don't allow? How much do you have to spend your time fighting? You know, do you have to show proof that this was not an image of you? Like, we start to be in a very uncomfortable world where, you know, how do we deal with this when this is going to happen, right? Suddenly it becomes, well, maybe the best places to be are walled-garden sites, like a Facebook or something, where there's at least some real name and incentive attached to it. I mean, it's a very hard problem to solve. Yeah. That kind of brings me to the last section, which is, you know, you lay out a few different possible futures for AI. Because when we think about, you know, regulation or building these guardrails around AI, one open question is, is that even possible? Or is the genie already out of the bottle? And you lay out, as far as I can tell, four different scenarios. As good as it gets is one where this is it, this is as far as AI advances, largely because regulation kind of stifles much further innovation.
Or we're already just, you know, at the end of fast learning, and the learning curve is already slowing down. Next is slow growth, where we just get a slow, steady improvement in AI. Then you have exponential growth, and then machine god. Which one of these do you think is most likely? So, I want to make it clear, OpenAI's explicit goal is AGI. I mean, it is machine god. Machine god. We should be paying attention to the fact that that's what they believe. Like, I don't know if I believe that. I think we're much more likely to have exponential growth for the next year or two and then maybe slowing down to linear, but literally nobody knows the answer. I talk to people training these systems all the time. Nobody has a decisive answer to this, right? Like, there's people who think we're near the top. There's people who think that, you know, we've got a lot further to go. And there's people who think we're going to be building a superintelligent creature in the next five years. I mean, the consensus on the betting sites is, you know, within five years. Five to seven years is their prediction for AGI. I mean, that's within planning horizons for anyone who's doing anything serious in the world. I think that we are underestimating what exponential means, right? If it's at the 80th percentile of BCG consultants or doctors, or the 80th percentile of podcast hosts, the question you need to be thinking about is: is it at the 85th percentile next year, the 90th, the 95th? Does it ever get to the 105th? And we don't know the answer. So the reason I've talked about scenarios is I think we have to think in scenarios. I think that as good as it gets is very unlikely, right? But we don't know what happens afterwards. Could you paint scenarios for what AI doomsday looks like?
One of my great pet peeves, which I hope you can serve as a corrective to, Ethan, is how people talk about this open question of, will AI kill us all? And they kind of forget to fill in some of the blanks, like, well, how exactly? What is the mechanism by which that would happen? And I think there's a lot of different ways that people envision AI serving as this existential threat. What are the ones you envision? And what's the likelihood of these scenarios? Like, what conditions must be satisfied in order for these things to actually happen? Because at least to me, the AI-will-kill-us-all stuff frequently just feels like generic boilerplate doomsday scenario, but it doesn't actually feel specific enough for me to have a worry that seeps deep into my bones. I think that's fair. So I think the near-term doomsday that people are genuinely worried about is... Like, we depend on criminals and terrorists being dumb, right? Like, mostly they are, or at least the ones we catch are dumb, right? And so something that elevates everybody from 80th percentile performance to 99th percentile is a big deal. And I think the scenario that doesn't seem inaccurate to me, or completely made up, is the idea that, you know, this thing can help you. It's fairly trivial to engineer a virus if you talk to a virologist right now. Like, it's not a hard problem. It's just that not that many people know how to do it. Could the AI be, or help you do, that kind of thing? Could it elevate the ability of people to do that kind of work? There's a nice set of papers out of Carnegie Mellon that gave AI control over a bunch of lab equipment, and it was able to start synthesizing chemicals, right? So does it lower the barrier for bad action? Because it doesn't take a lot of bad actions to make the world worse, right? So I think that piece is the near-term one.
If you're talking about existential worry, it's not so much the AI wakes up and decides to murder us all. Now, people say, oh, Google can do this. Google can do it up to a point. So right now it's not better than Google, right? And there was actually a really interesting experiment that OpenAI did, where they had researchers at MIT try and use this to build a virus, right? And see how far they got. And we're not there yet. But I think that's a legitimate set of concerns: there's a bunch of mass-destruction techniques that are actually not that hard to do, that depend on specialization, and will this change that, right? And I've got a smile on my face, but that's an anxiety-producing one that I think serious people are worried about. The further scenario, you know, the far-out one, is the AI reaches smarter-than-human intelligence, ASI, artificial superintelligence. And then, you know, there's some infinite level of smarts out there where it just does stuff. And we don't even know, you know, it manipulates us to do things. There was another thing in the GPT-4 technical report where GPT-4 was able to actually hire TaskRabbit workers and pretend to be a human and have them do tasks for it. So there's a version of this world where the AI, you know, in the far future, manipulates a bunch of humans to make sure that, you know, anyone who could turn it off gets murdered, right? So this feels more like science fiction. But, you know, some people are genuinely worried about this. So I think the near-term existential, catastrophic threat is making incompetent people who are bad hyper-competent, right? And the further one is AGI: what does it want to do with us? And is it sentient or not? And that's harder for us to grasp. So how likely do you think these scenarios are? Because as I see it, I'm actually not nearly as concerned about that type of thing.
I know people also frequently cite basically defense capabilities, and, like, AI-enabled defense, and the fact that bombs could be dropped essentially by either sentient AI or AI used by humans to engage in acts of war. So that's another thing that I also want to add here. But none of these scenarios worry me all that much. I think the thing that worries me more is the many banal ways that our world could be made much worse. How do you look at this? Like, you know, if you had to instruct people to be concerned about a specific area of AI, where would you say people should concentrate their worry, or, you know, possibly their calls for regulation? Yeah, I mean, to me you're nailing it with exactly that kind of question. We talked already about involuntary pornography and about large-scale harassment. I mean, I think those are baked in already as problems we're going to have to deal with. So we have to think about what the policy solutions to those things are, because it's not going to be solved otherwise with the current systems we have, right? I think that is a baseline concern I'd be worried about. I think there are security concerns, you know, not just national security, but, like, what does it mean when we've got deepfakes of people's voices going out and it breaks our entire authentication systems, right? In a very deep kind of way that I'd worry about there. I worry about people forming relationships with, or being catfished by, AIs. Those relationships are something we need to be thinking about. And we don't need more advanced technology for any of those things to happen, right? Those are all... We should also be worried about me wasting too much time on the Replika Reddit forum, right?
Like, I'm fascinated by essentially the movie Her come to life, and I'm fascinated by this idea of, you know, what happens when these human relationships are replaced by synthetic versions? That to me honestly feels like the most likely apocalypse, right? Like, an apocalypse of connection. We're already, you know, sort of turning inward and turning away from other people. And so what happens if this loneliness and isolation phenomenon is terribly exacerbated? I agree. I think, I mean, the hope is a lot of these things turn out not to be issues, right? It turns out, like, okay, you know, people have an AI companion, but they also go out and touch grass more often than they used to. Like, that's a completely possible outcome. We just don't know. I think we thought that social media would work out better than it did. Like, I certainly had better hopes. I thought, you know, in the early days of the internet, like, okay, between Wikipedia and social media, everyone's going to be wise and nice to each other, and global connection was going to bring people together. We were wrong about that, right? So we don't know what's going to happen here, but I think the connection piece, especially on top of social media, and having addictive personalities that you could talk to, that like you and respect you and could pretend to be anyone you want, that feels like a genuine kind of issue, right? And I think that is something I am concerned about. And I think the Her scenario... You know, we've learned it's not that hard to tune an AI to be something you really like, right? But is this better for some segment of the population, right? Like, I am lucky enough to have an actual family that I speak to every single day, but there are many people who just never really had that happen for them, either by fault of their own or truly by no fault of their own.
And so for them to be able to turn to something that serves as this band-aid, that serves as this anesthetic for the pain of life... I mean, the Catholic in me is, you know, vehemently opposed to that. And yet the libertarian in me says, well, that's a better alternative than what they were otherwise doing. What do you make of that argument? Or do you think that the fact that this just makes that so much easier, removes the barrier to it, is a problem? So there's an early paper on Replika that looked at 90 sort of deeply lonely people using Replika in colleges. And they found that for those people at least, a significant percentage, like 6% or 7%, said it stopped a suicide attempt for them. And many more reported that they now talk to people more often, now that they have the AI as their backstop. So we just don't know. We've never had another kind of intelligence or personality in the room. Like, we don't know whether this turns into Her, everyone muttering on their phones, or instead you have an AI you can confide in, but then you're excited to go out and talk to people again. We don't know if this is the backstop to mental health or causes worse conditions. Probably all of the above. Which, again, is this other place where regulation, policy... Or at least, if we just aim for addictiveness, I worry about the outcome, right? There has to be some better alternative than that. Let me lean into the uncertainty of the future to wrap this up for us, and ask you about what makes you most optimistic about our AI-infused future. Great. I mean, there's so much. Like, work generally sucks, right? Like, there's a lot of things that suck about work. People report being horribly bored at work one quarter of the time, right? That's terrible. Why are we doing that kind of thing? There is a whole bunch of intractable problems that are hard to solve with machines alone, but that humans actually work pretty well on with them, right?
There's a lot there. We're slowing down in scientific discovery. Doctors are overwhelmed. Like, a lot of the most important professionals have 4,000 other tasks in their job that don't let them do the most exciting thing. I think as a piece of liberation, of excitement, of a way of unlocking human potential, there's a lot there that I think we should be leaning into, but it's not just going to happen spontaneously. And the AI companies don't know how to solve this problem. They have no idea about any of this. They're building better models. So part of my challenge out to all the listeners and watchers out there is, like, using this stuff to model good behavior is part of how we get out of this trap, right? It's like, show me the tutor that does a better job teaching. Show me the way, you know, that you would make this better for people, and then things get better. It's a tool in many people's hands. In childhood development, there's these critical periods, you know, where there's this higher rate of developing skill during certain sensitive periods. I might have slightly butchered that, probably because ChatGPT isn't really teaching me enough about my toddler's brain functioning. But would you say there's a similar issue present with the adoption of ChatGPT? Where, like, when we adopted forms of social media, we sort of got these bad habits locked in place that have resulted in, you know, arguably, as you said before, a not-so-great world of social media. Are we in a weird sensitive period for the development of how we use AI, where we need to be fostering certain good habits, or using it in certain ways, to stave off the horrors that could come? Yeah, I mean, I think we have to model this behavior, right? We have to make the world we want to make. And there's a remarkable agency at this point in time, because whatever industry you're in, whatever job you're in... I mean, you're probably at the leading edge of how to use this in podcasting.
So what's the positive example that makes podcasts better and makes your lives better and lets you do more? And I think you're exactly right. One of the things I've noticed by being early on the AI stage is how much the things I do get adopted by other people. It's a crazy point, where it's like, I see the language I use showing up other places. The prompts we're doing, like, people are referring to them. Like, there is a moment of influence here that I think is very empowering. And I think a lot of people, especially kind of in the libertarian space, think about the heroic individual who has the ability to kind of make things happen. Like, this is that moment for a lot of you at this stage. Like, this is the moment to model something good. Nobody has answers. There's no instruction manual. There's no systems to restrain it. This is a really interesting time. I agree, and I want to thank you, Ethan Mollick. I'm going to throw it back to my AI clone to wrap us up now. A huge thank you to Professor Ethan Mollick for joining us today and sharing his invaluable insights on artificial intelligence. I'd also like to extend my gratitude to my brilliant co-host Liz Wolfe for her thought-provoking questions and contributions to the discussion. To our listeners, we're eager to hear from you. If you have questions you'd like us to explore or topics you're curious about, please don't hesitate to reach out. Send your suggestions to justaskingquestions@reason.com, and we might just feature your question in our next episode. Thank you for tuning in, and remember: the future is intelligent, and so are you. Stay curious and keep asking the smart questions. Until next time, this is Zach Weissmueller's digital AI clone signing off. Thanks for listening to Just Asking Questions. These conversations appear on Reason's YouTube channel and the Just Asking Questions podcast feed every Thursday. Subscribe wherever you get your podcasts, and please rate and review the show.