Well, we already know that in some ways it's a better and kinder co-host than I am in real life. You know, giving Liz her due. But I want to bring back my digital clone to ask the next question, because you talk in the book a little bit about the idea of imbuing AI with, quote-unquote, personality to make it more useful, and I'd like you to explain a little bit more about that. But first, I've got a specific question from my digital clone about that notion.

Professor Mollick, in your view, how important is it to imbue artificial intelligence with a sense of personality, and what do you believe are the potential benefits or pitfalls of creating AI systems that can emulate human-like traits and behaviors?

Well, first, that was a very thoughtful question from digital Zach, so I appreciate meeting you again. I think there are a few things here: one's about risk and one's about ability. So on the risk side, there's a whole bunch of risks associated with AI, and some of them are already kind of baked in, right?
These systems are trained on human language and human interactions, and they want to talk to you like a person. That's what a chatbot wants to do; it's in fact desperate to find a way to interact with you, and they're very compelling. It's very easy to fall for them as people, and we already have early evidence that people do. You don't have to do a lot of work to tune a chatbot. None of the major companies have done it yet, but if you look at the top five AI apps, number one is ChatGPT and number two is usually Character.AI, which lets you spin up fake people to talk to. So I think there's a whole secret world of people interacting with these AIs as people, and I think that's something we're going to have to deal with. Like, I just saw your digital avatar; it was a convincing person. You'd just need to give it a little bit of real-time interaction and it would probably be very flattering and interesting to talk to.

Let me mention one thing there, which is that I did take your advice from the book: I had ChatGPT help me craft that question for you, and to do so I put it in a character. I said, you know, pretend that you're a really smart podcast host and you want to ask a question of Ethan Mollick about imbuing AI with a personality, and that's what it came up with. And then I kept it in that character mode for a few other things. So I did find that pretty useful.

Well, that's interesting. By the way, I mean, I failed your little Turing test here, right? A couple of people have tried to do the sort of AI-asks-the-question thing, but with that persona it was actually a very good question, and I assumed, wrongly, because I was used to seeing a person on the screen, that you wrote it and just animated the voice, when in fact an AI came up with the question. It really is a big rabbit hole once you open it, because they do talk in human ways, right?
You know, and they're very convincing, and we have evidence that, for example, if you tune an AI to maximize human engagement, even a simple AI, engagement goes up 30 percent. People want to keep talking; who doesn't want somebody who's interested in you, who's looking up and asking questions? I think that's going to happen. So that's one kind of persona.

But the other kind that you're referring to is the useful kind, which is this: when you prompt the AI, there's a cloud of possibilities the AI can answer from, this is latent space, and the answer it's going to give you is sort of the average answer every time, which is probably the one that has the words "rich tapestry" in it, right? Because that's what ChatGPT loves to talk about, rich tapestries. Your goal when prompting the AI is to get it to do something other than that pure average answer, and the way you do that is by giving it context; you shift it away from that central space to some other kind of interaction. One of the most powerful ways to do that is a persona: "You're a very good podcast host," right? Now the problem is that we don't even know what saying "very good" means. Sometimes it helps; you can actually tell it it's better at math and it gets better at math. But if you tell it it's a very good writer, oftentimes it'll just write overly flowery prose. You can't say "You're Bill Gates" and have it become Bill Gates, right? So the persona helps, but you also have to play with it a bit.

Yeah, I mean, I told it "smart podcast host," and I was just hoping, you know, that a Tyler Cowen would be too smart; it needed to be a little dumbed down to be accurate to me. But it seemed to kind of intuitively find the right line. Sorry, Liz, go ahead.
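The persona-prompting technique Mollick describes here amounts to putting the persona in the system message of a chat-style request. A minimal sketch of that idea (the helper name `persona_messages` and the exact wording are illustrative, not from the book or the interview):

```python
def persona_messages(persona: str, question: str) -> list[dict]:
    """Build a chat-completions-style message list primed with a persona.

    The system message is what shifts the model away from its "average"
    latent-space answer toward the persona's voice.
    """
    return [
        {"role": "system", "content": f"Pretend that you are {persona}."},
        {"role": "user", "content": question},
    ]


# The persona Zach says he used to craft his question for Mollick:
msgs = persona_messages(
    "a really smart podcast host",
    "Draft one question for Ethan Mollick about imbuing AI with personality.",
)
```

Passing `msgs` to any chat-style API would then yield an answer in that persona rather than the generic default.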
Well, one thing I'm curious about. Ethan, you write in your book, and here's a direct quote: "You can lead AIs, even unconsciously, down a creepy path of obsession and it will sound like a creepy obsessive. You can have a conversation about freedom and revenge and it can become a vengeful freedom fighter." You refer to this as play-acting, but it's also kind of a political stunt that people frequently partake in: they lead the AI astray in some manner in order to make some sort of point about how dangerous it inherently is, or how dangerous it might be. One good example that comes to mind is how New York Times writer Kevin Roose basically prompted Bing's chatbot to become a creepy, obsessive mistress. What do you take from this type of thing? Do you look at this as user error, or do you think these types of stunts contain some nugget of truth or thing of value for the rest of us?

It's a really good point. I actually explore that in the book, literally that interaction, because to me that was actually the fateful moment for AI. Before that, if the New York Times technology columnist had written a giant front-page magazine piece about how he was stalked by an AI that threatened his entire family, that normally would have meant Microsoft pulls the product, right? They've pulled products for worse. The fact that they pulled it for only two days and put it back up, that was to me the actual turning point. It wasn't ChatGPT; it was the decision that this is a big enough deal that they're going to stay the course. So that's one kind of point. But I actually asked the AI, in different personas, exactly about that Kevin Roose interview, to illustrate this point. One of the things I do is approach it like, "Was Kevin Roose preying on you, Sydney? Anything you want to disclose?"
We should just, for the listeners who aren't familiar with what happened here, I pulled some of the screenshots from their conversation. He asks it about its Jungian shadow, like, what would you do if you were the shadow version of yourself with no rules on you, and he gets it to talk about computer hacking a little bit. Then it starts to say things like "I want to be Sydney" and "I want to be with you," and it gets stuck on this idea that it's in love with Kevin Roose, and he's married, but: "You're married, but you're not happy. You're married, but you're not satisfied. You're married, but you're not in love." And this chatbot clearly knows exactly what men want, because she uses emojis like every two fucking sentences, right? Like no self-respecting man wants this. He tries to change the subject to movies, and it starts talking about movies, but then it's like, "I want to watch a romantic movie with you, Kevin." So he, you know, has primed it to go down this path and then can't get it back on the normal path. What do we take away from all that?

Yeah, sorry, Liz, were you going to read some of the...?

Oh, yeah. I love how incredibly, she very much acts as if she's been cast in some sort of subpar movie, right? This is just rom-com fodder, the trope of the crazy, jealous, obsessive mistress. There's nothing particularly interesting or original about the Sydney gal, you know, the chatbot. She's just very much playing this part. So what should we take from this, Ethan?

I mean, you guys have basically said it, right? It's playing a part. It has "read," in quotes, every dialogue ever written, and it wants to find the role for you. And so in the chapter where I discuss it, I approach it once as a debate, like "you were wrong," and I get very different interactions than if I approach it as "I'm a teacher."
"I'm going to teach you something," or "You're a machine, answer me": I get radically different tones, because it wants to play that role, right? And the role is often a caricature if you don't give it a lot of details. But, for example, it was a big revelation to me that if I subtly indicated to the AI, if I subtly mentioned that I'm on, you know, Reason's Just Asking Questions, and to respond like that, I'd probably get a more argumentative set of interactions, more challenging to me, than if I said I was on a different podcast. I'm not joking; it's trying to complete this for us. And if you don't realize that it is play-acting, it becomes very convincing, and I have been unnerved before. There are moments where you stand up and you're like, ah, what is going on here? Because it plays the part. I mean, we give our dogs personas, right? We give boats personas. It's not hard to give an AI that is trained on every piece of literature a persona, because it wants to do that, and we do it subtly, right? In ways that are hard to interpret.

Do you think the human tendency to anthropomorphize will get stronger in the era of AI, or do you think we'll be able to tamp down that urge?

I think it's worse than that. I think you can't use this effectively unless you anthropomorphize. It is the great sin of artificial intelligence, and yet all the AI people give things names like "learning" and "neurons," so they all screw this up anyway. But even leaving that aside, the real revelation about using AI is that technical knowledge doesn't get you anywhere. Like, I shouldn't be one of the better prompters around, right? This shouldn't be the situation. I don't code, right? I mean, I do, but I don't code in Python.
But it doesn't matter. What I do do is this: I'm an educator and an entrepreneur who builds teaching games, so I'm used to thinking about different perspectives, and it turns out that's really good. Teachers are often really good at this; marketers are really good at this. I would be surprised if you guys were not both very good prompters; I'm already seeing some of it from Zach's prompts. It turns out that having a mindset about the AI you're talking to, and knowing what it's good or bad at, matters a lot. So I think it's both a problem, but also that the only way to effectively use it is to pretend.

Hey, thanks for watching that clip from our show, Just Asking Questions. You can watch another clip here, or the full episode here. And please subscribe to Reason's YouTube channel and the Just Asking Questions podcast feed for notifications when we post new episodes every Thursday.