Hello, Kate. How are you doing today? I'm pretty good. Chris, how are you? I am fantastic. And I'm so excited to talk to you. So for those who haven't met you yet, what's a little bit of your background? Like, I know who you are, but let the people know. Okay, so right now I'm a research specialist at MIT. I study human-robot interaction from a social, legal and ethical perspective. But I have a whole background in law and social sciences. And so I wound up in technology kind of by accident, but I love it. That's actually really, really interesting. So here's a fun quick story for you, Kate. When I was growing up, my dream was to go to MIT as a kid. And then I kind of realized I didn't have the grades to do it. So if you want, you could tell me how bad MIT is, so I don't feel bad about it anymore. Terrible. You wouldn't want to go there. You know, the food is really bad. Yeah, just awful in the winters there. Oh, yeah, that's true. The winters really are terrible. So with the book, we're talking about The New Breed. And yeah, I saw the author Joanna Penn tweet about it, and I'm like, I'm not really interested in books on robotics and AI and stuff. And then I realized that, you know, it's kind of coming from this ethical standpoint and diving into so many topics. So can you give a brief overview of the concept of the book and what made you want to write this? And even if you want to dive into what the new breed is, because you talk a lot about animals in there. So animal lovers need to pay attention. Yeah, yeah, I am an animal lover, as I'm sure you could tell from the book. Okay, but what made me want to write this book was not animals. What made me really need to write this book was, I have had so many conversations with people over the past decade about robots, because I love robots. So I will talk to anyone about robots.
And I just started realizing that no matter who I'm talking to, whether it's a roboticist or, you know, some random person at a garden party, all of our conversations around robots seem to be comparing robots to humans and artificial intelligence to human intelligence. Whether we're talking about robots replacing jobs one to one, or whether it's in our stock photo images, which are always images of kind of humanoid robots. Yeah, we constantly do this comparison. And I was just like, that doesn't make a lot of sense. It doesn't make sense because artificial intelligence doesn't function like human intelligence. Recreating human intelligence was probably the original goal of the original people developing AI, but that's not where we've ended up, and that's not the path that we're on. Computers are already way smarter than us in some ways, and way not smarter than us in other ways. So I feel like this comparison is limiting us, because it's leading to this technological determinism where we're thinking that robots can, should and will replace people. And instead, I think our goal shouldn't be to recreate human abilities. Our goal should be to create something different, something supplemental that we can partner with. And so I've always found animals to be such a good analogy for robots, because they are these things that can sense and think and make autonomous decisions and learn. And we've partnered with them throughout history because their skill sets are different from ours, not because they do what we do, right? So, you know, I eventually decided that there was enough in this analogy to write a whole book on it. I could have written even more on it; so many cool parallels. But anyway, the point isn't that animals and robots are the same. The point is that I want people to open up their minds to a new analogy and change some of the conversations in fundamental ways. Yeah.
And, yeah, not only do I typically not read books about robots and AI and stuff like that, but history books bore me too. But you have so many stories in there about dogs being used in war, and pigeons and dolphins, and I'm like, this is really neat stuff. Like, you have a lot of knowledge up in that brain. And one of the first things I want to talk about, because it's something I didn't even consider, and now you've got me interested in this subject: how do we define what a robot is? I never thought of it. Like, I grew up, you know, I was born in the 80s, and I think of Johnny 5 from Short Circuit and stuff like that. But in the book, you start talking about things like Roombas, and is this a robot? Or does it need to look like a human? And what's the functionality? And I think that perspective helps us get into that kind of ethical and moral conversation. So what's a robot, Kate? Well, there is actually no universal definition of what a robot is, which is weird, because we talk about them all the time, and yet we don't seem to really have a good definition. So depending on the field that you're in, you'll define robot differently, even the roboticists themselves. For a roboticist, a robot is usually a physical thing (so it can't just be AI, it has to be physical) that can somehow take in information, sense its environment, think about what it's going to do next based on that input, and then act on its environment again. But even that as a definition kind of gets messy, because you could technically say that your iPhone can do all of that, with light and vibration, but most roboticists would not call it a robot. It's funny if you look historically at how the definition has changed, because a robot used to be anything that kind of automates a task. And then once it becomes more ubiquitous, it's just a dishwasher.
So yeah, you know, I have kind of the same operating definition as the roboticists. But the point of the book is that we so often just fill in a humanoid robot as the definition, and don't think about all of the other designs when we're envisioning a robot. And you can see that if you do a Google image search; you see what people envision as a robot. So yeah, the point of the book isn't to find a definition, but rather to break out of this mold that we have. Yeah, and it is so interesting. And that's why my interest was piqued when I learned what your book was about, because I just really like thinking and looking at things in different ways. And a great way to do that is by getting into ethics and philosophy and morality and stuff like that. And now I'm looking at, you know, my computer and my car differently. And I'm like, oh, well, it can do this, but can it do that? And as we were discussing right before, I'm currently reading the book Robot Rights. And yeah, you talk a lot about this in the book, and that's why I kind of love the animal analogy. And I do want to talk about some of the selective morality and selective empathy that you touch on in the book. But first off, I'm gonna play the bad guy real quick. Why do robot rights matter? Why do ethics for robots matter? Why can't I just go and kick my toaster or punch my car? You know what I mean? Like, reading your book, I know, but for all those people out there who think a robot is just a hunk of metal, why does it matter? I know it sounds so science fictiony. And in fact, there is a lot of science fiction on it. And there's also a lot of philosophical discussion around, once the robots are sufficiently like us, do they deserve rights? And that's, you know, I view that as a fun and interesting question.
But that's not the question that I'm most interested in, which is: even if the robots don't have intelligence or sentience, can't suffer, have no feelings, could there be reasons to act around them as though they're moral patients, or give them some sort of protections? That idea comes from Kant's theory of animal rights. So Kant didn't actually care much about animals or think that they were deserving of any protection, except that he thought that being cruel to animals made you a cruel person. And so his theory of animal rights was all about us and all about our own behavior. A lot of my work has been around lifelike robots and how people feel when the robots can mimic pain and distress, and how people feel about those robots being quote-unquote mistreated. And it turns out that it's very disturbing for us to watch. And so there's this question of, is that not in line with societal values? And could it even desensitize people, as the design gets more lifelike, to behave violently towards them? And on this question, I believe in evidence-based policy. I think that we would want some evidence that it desensitizes people before we legislate and, you know, prevent people from kicking robots. But I do think that it's going to become a big discussion as people start seeing more violence toward very lifelike robots, or their kids start seeing that violence. And we just respond to it so instinctively; it doesn't feel good. Yeah. And that's something interesting that you discuss in the book, just how it affects us. You know, if you hurt animals, then you're being a bad person; that's affecting who you are. And you just had a kid, didn't you? I did. Yeah, I have a 12-week-old downstairs. Awesome. I love it. Is that your first? No, second. I have a three-and-a-half-year-old too. Awesome. So I'm a parent too.
I have a 12-year-old, so a little bit older. But let's say I got my son a Furby, right? He's a gamer, so he has an Xbox. But if I walked in there and he was just stomping on this little robot like a Furby, would I be like, hmm, that's healthy behavior? Like, none of us would think that. So I love that you bring up that conversation, because we wouldn't be okay with our kids doing that, because it could translate, and we'd be helping foster this kind of aggression. And correct me if I'm wrong, but you touch on some studies about things like that in the book, don't you? Like some kind of psychological research about mistreating robots and how that translates? Yeah, absolutely. And, you know, it's funny that you mention preventing your kid from hurting a Furby, because a lot of people are like, well, of course, we teach kids not to destroy anything. But I think there are actually two reasons that we would intervene with something that behaves in a lifelike way. Like you said, the first might be the value of the property; we teach kids not to destroy things in general. But the second is we might be worried about what that teaches our kids, right? Especially when they're even younger. My three-and-a-half-year-old, for example, doesn't distinguish between animals and robots; to them, they're both the same thing. But yeah, there's actually this huge, really interesting body of research in human-robot interaction that shows that people really respond genuinely to some of the cues that robots can mimic, so motion and sounds. And I myself have gotten into some of that research as well. It all started when I got this baby dinosaur toy called a Pleo. Actually, there's one behind me, I think.
People listening to the audio can't see it, but there's one in the video, so check it out. It's this cool toy, but if you hold it up by the tail, it starts to cry and mimic this very pitiful behavior. And I was like, oh, this is making me feel bad even though I know exactly how the machine works. And so that got me into some studies on violence towards robots. It started out with a workshop that I did with my friend Hannes Gassert, where we gave people these baby dinosaur robots, and then we had them torture them and destroy them. And they were super disturbed by that. And then I did some actual research at MIT with my colleagues, with less cute robots, trying to look more exactly at that connection between human empathy and how we respond to these machines. And, you know, our results indicated that there really is a connection between people's natural tendencies for empathy and how they respond to robots. It's just a small study, but it's part of this growing body of research that shows that we treat robots like they're alive, even though we know that they're just machines. Yeah. And I was like, we could talk about this for hours, because I have so many questions. Like, when we look at that, and we look at how certain robots look or how they react, right? Something I was thinking about earlier: did you watch The Good Place, by chance? I did. I love The Good Place. Okay, so there's Janet, right? She's like a cyborg, a robot, and... "Not a robot. Not a robot." Thank you. When they go to destroy her, her defense mechanism is to express human emotions and pain and make you freak out and not want to destroy her, right? That's how they programmed her. So here's a question: do you think that would help with treating robots better, right?
Like, if I kicked my Roomba (I don't even have a Roomba), but if it said, ow, you know... do you think that would help train people and get them to think a little bit better about this? I think there's either a study or someone did some art project where they made the Roomba swear or express pain, but I can't remember exactly what it was. Anyway, it's a really good question, a design question: does it help or hurt if you make the robots able to express pain? And have you seen the show Westworld? Yes, we have a lot of shows in common. Yes, I imagine we would. So, Westworld, for anyone who hasn't seen it, is based on this old movie. It's about a theme park where there are these very lifelike robots that look just like people and animals, and you can do whatever you want to them. And they will respond just like a human or an animal would, so, you know, with a lot of pain and stuff. And then there's this question that these scenarios raise, which is, even if the robot can't feel anything, what does it say about people who want to go in and behave violently towards the robots, first of all? And then, could it desensitize people if they do it a lot? But the pain response can go both ways, right? To some people, it's really disturbing, and they would stop harming a robot if it says ow. In fact, that's what most of the research shows. And then in some cases, like, I don't know, if you have violent tendencies, you might love that the robot responds that way. And it might even turn out that that's a healthy outlet for violent behavior, right? So you can go beat up a robot, and it's very satisfying because it responds in this lifelike way, and you don't have to beat up a human. Yeah.
And something I was thinking about as I was reading your book, because my background is in mental health. I'm in recovery from addiction, I used to have severe anger issues, and it took a long time to work through that, right? A lot of therapy and meditation and all sorts of stuff. But now they have these rage rooms, right? Where you go in and you just smash stuff. And I don't know if you've seen some of the research, and I don't know if it's peer reviewed or anything, but some of it is showing that it increases people's anger issues and violent behavior. So I'm thinking about that, because you bring up a great point: would this be a better outlet, right? But in relation to animals, when we're comparing robots to animals, I think about that, because, you know, Jeffrey Dahmer, for instance, torturing animals, he's like, well, at least it's not a human, here's a healthy outlet, right? And then it kind of progresses. So I'm not a huge fan of the slippery slope argument, but I'm like, yeah, maybe we should just try to be kind and not torture robots and stuff, unless it's in a research setting like yours. Yeah, I mean, that's right, because we just don't know, right? We don't know: is this just an expression of violent tendencies that are already there and aren't gonna change? But we do know, like you said, there's definitely a connection between violence towards animals and then violence towards people. That's been established. But we don't know if it can intensify and create that slippery slope, like you said. And that's really hard to research. Like, we get little bits and pieces of it, like the rage room stuff that you mentioned, but it does seem to be like, well, to be on the safe side, maybe we should just be nice, right? Exactly.
Yeah, well, I was thinking about, too, when we're talking about treating robots with empathy or respect and taking care of them: think about the planet, right? The planet isn't talking to me, and I don't know if it feels, unless you're in the movie Avatar and it interacts with you and stuff. But anyways, I teach my son, hey, let's not litter, let's be kind to the planet, because we live here. And if he treats the planet like crap, chances are he's gonna go to his friend's house and treat that house like crap, because we're not training that in people. So I think a good default is, hey, let's just try to be nice to stuff, you know? But now I wanna go in the total reverse direction. Something you talked about in the book, and that's when I'm like, uh-oh: you were talking about being a little too empathetic towards robots. I think you even mentioned a story where there were soldiers going back into combat to save robots. And that's why I'm like, okay, I could see some other arguments against it. So can you talk a little bit about that, and some of the research around caring for robots maybe a little too much, where we're putting ourselves in danger? Yeah, so I don't think there's anything inherently wrong with emotionally relating to robots, developing connections. I think that can even be useful. But yes, like you said, there are obviously some cases where it's inefficient at best and dangerous at worst. There are these bomb disposal units that have been used in the military for over a decade now, and they don't look like anything special. They're not cute baby dinosaur robots. They're just sticks on treads, and they're even partially remote controlled.
So not even fully autonomous, but soldiers become really emotionally attached to them, and they will give them names and medals of honor and have these funerals with gun salutes. And you allude to this; it's a story from P.W. Singer's book Wired for War, where he talks about a soldier actually putting himself in danger to not leave the robot behind. And you can imagine more of that happening as different robots get deployed in that context, because it's a very emotionally charged context. Soldiers tend to emotionally attach to each other and their weapons and everything in that context. So it's not even something that you can necessarily prevent, but it is something to be aware of, because right now we're just putting robots out there and not really thinking about the effect that they can create, because people treat them differently than other devices. With the soldiers, I used to argue that we need to find a way to prevent this from happening, right? Because you don't want people risking their lives. But I've kind of come around. Now that I've done so much research on animal history, I've come around to understanding that there might also be a benefit to soldiers emotionally attaching to the robots. If you look at how animals were used in war, and the emotional attachments that soldiers had with the animals that they were on the battlefield with, that was actually in a lot of cases even life-saving for them, just this emotional attachment, this solidarity with an animal. So I don't know where it shakes out, but actually, I don't think it matters too much, because it's not something that we're gonna be able to prevent. Yeah, and so I'm curious what you think about this, because I'm wondering what it is, where this attachment comes from. And by the way, I was practicing this word last night, but you're welcome to laugh at me if I say it wrong. Do you think it's more... wait, I have it written down: anthropomorphism.
Did I say it right? Yeah, there we go. Do we think it's more of that? I'm not gonna try it again. Do you think it's more of that, or kind of like essentialism, right? Like somebody running back into a burning house because the sweater their grandmother gave them was there. Do you think they're getting an attachment because of the experiences they've had with this thing in war, or do you think they're seeing it as more of a feeling being? Yeah, great question. I think it's all of the above in this case. So anthropomorphism is our tendency to... I'll just let you say it; you're like a pro. I've said it a million times at this point. Yeah, it's our tendency to project human traits and qualities and emotions onto non-humans. So we do this to animals; we project the guilty look onto dogs that may or may not actually be there. The studies show it's usually not, but we do this to objects too, like stuffed animals. And then we really love to do this to robots. And I think part of that is also because they move around in our physical space, and we respond to that because we're very physical creatures. So yes, there's definitely a lot of anthropomorphism in a lot of people's interactions with robots. But of course, in the situation we were just talking about, there are a lot of other factors at play too, right? There's the whole context, the emotions, the looking for attachment in anything. There's... what was the other thing that you said it could be? Essentialism? Yeah, essentialism. Like the essence of something is in it, or memories and things that we just cherish, you know? And the more you feel like you've had a shared experience with an object... And, you know, we emotionally attach to even objects that don't move or have any eyes or anything. So there was a pet rock craze in the 70s. Yeah, right? Yeah, and then the little Tamagotchis and stuff like that.
Right, we are such social creatures, and so I think there's a lot of that happening. And actually, I forgot to mention there is another big reason why I think it can get problematic to become emotionally attached to robots, not just in the war context: robots are created by humans and deployed by humans, with companies and governments collecting data. It's great if you're able to tell a robot all your secrets, but, you know, the robot can tell the secrets to someone else, and that might not be in your best interest. So that's the other aspect that worries me: consumer protection and other data privacy issues. Yeah, no, absolutely. See, I just love thinking about and seeing all this stuff in different ways. And this might be more of a psychological question, but when we were talking about how we're social creatures and looking for attachment... I'm personally not a huge fan of, and maybe you, being at MIT and everything, I don't know what your thoughts are, but there's this idea that technology is destroying everybody. I am a huge mental health advocate and all that, but do you think that as technology becomes more of a part of our life... do you agree with the argument that, oh, humans are lonelier than ever, and now we're just looking for a connection? Because I do wanna dive into the topic of robot relationships in a minute, right? But what are your thoughts around that? Do you think that we're lonely and we're more likely to become really attached to these objects and all that?
I mean, loneliness is certainly something that can contribute to anthropomorphism, but I don't think that's inherently a bad thing. Like I said, we are such social creatures, we look for connection in everything. And I think there's a lot of moral panic that happens when people first get confronted with the idea of robot relationships, because they're like, oh, that's gonna replace our human relationships, and that's not how we solve loneliness. I agree that that's not how we solve loneliness, because robots can't replace humans. But there's also, you know, the fact that people used to say the same thing about dogs when they started to become part of the American family. Some psychologists were like, oh, sometimes these emotional attachments can be really unhealthy, and they're gonna start replacing human relationships. And well, that didn't really happen. We're not really worried about people getting pets, because we understand that that's a supplemental relationship; it doesn't take away from anything. And if someone is lonely and they get a dog, we're like, great, you know, that might help you a little bit. It's not gonna replace a human, but we're not gonna take that away or say that we shouldn't give people dogs, right?
Yeah, yeah. And you know, my YouTube channel started out purely mental health, and I had people on who had emotional support animals and everything like that, that helped them with agoraphobia and trauma and, you know, so many things. Nobody would argue that that is bad, kind of like what you talked about with domesticating animals. And I always say that maybe I'm just this technological optimist, you know what I mean? But I'm like, no, because I used to have wicked social anxiety; you and I couldn't even have had this conversation. And I know people still struggle with that, and maybe some kind of robot relationship, or some artificial intelligence that can help them start conversations and get some interaction, can be training wheels to them going out. So maybe I'm too optimistic, I don't know. I mean, there's some really cool research on robots and autistic children, using this as a therapy tool, not as a replacement but as a facilitator of conversations and engagement, with some pretty cool results. I do think that in the public discourse about this, there are two sides, both of which I think are wrong. There's the moral panic side that's like, oh, all of this is bad, we should definitely not do any of this. Which I'm like, I don't know; technology is a tool, it might be useful. I agree that there are a lot of problems with how we currently develop and use technology, but, you know, there might be some positive use cases to lean into here. And then there's the other side, which is technology solutionism. I mean, there are some people saying, oh, we have a loneliness epidemic, let's solve that with robots. That, I think, is also not the right direction to go in. This is not a solution to societal problems. In fact, when it comes to robots, we focus so much on the robots themselves, whether it's to solve our problems or whether it's to pin our problems on them, when
really, there are a lot of issues that don't have anything to do with the robots and have more to do with how we've structured our economic system. Yeah, and I think a lot of it comes down to this black and white thinking, and we gotta find that middle ground and the balance, like balancing the benefits of robots while also maintaining human connection. And one of my favorite analogies, that I've now passed down to my son, is just: whether it's social media or technology, we look at these things as tools, right? Like a kitchen knife. My son and I, we love to cook, and when you're using it properly, you can make a pretty sweet meal, but if you're not paying attention, you chop your finger off, right? So it's like that with robots and technology; we find that balance, and we take the good when we kind of monitor it and feel it out. But with this fear of robots replacing us (and I wanna get into some scary topics in a second), let's start with human relationships. So I was watching, I think, a documentary, maybe a Vice thing, a few weeks or a month ago. It was either in China or Japan, I believe, but there are way more men than women, so sex robots are a lot more popular there, right? And some people were saying, oh, this is just an adaptation because there aren't enough mates. But you also see people in the United States, where there are plenty of men and plenty of women. So is this something that people are concerned about? Building a connection with a robot and taking people out of the mating pool, and then all of a sudden our population is gonna dwindle down to nothing, and then aliens are gonna take over because we don't have enough people, or what? What's gonna happen? Well, I think you already put it quite well, which is: what's the cause and effect here?
You have a societal problem that can't be solved with sex robots, and I don't think it will be exacerbated by sex robots either, because sex robots are not a good replacement for people. And most people... you can enjoy a sex robot. It might help you get over a hump or whatever it is, I don't know. I haven't actually seen any good sex robots out there, so people also overestimate where the technology is. But I maintain you cannot replace a human, especially not with our current very, very primitive technology, nor should we try to do that. That doesn't make any sense to me. These are usually political, economic, societal problems that we need to deal with, and I don't think the robots hurt or help in any way. I think that they're a fun tool to use for other things. They're not gonna solve larger issues. Yeah, and it's interesting. There are people who have married inanimate objects. What is it? There's some show called Strange Love, and some lady married a carnival ride and stuff like that. And I'm like, okay, maybe there are some people who would go that route, but it's a one-off occasion. I don't think this is an epidemic that we're gonna run into, like everybody's just dating sex robots now. But here's something I would love for you to talk about, because I keep seeing it pop up everywhere, and I think your book was one of the first places where I heard about it: the uncanny valley. Is that something... like, when you say people overestimate the technology, and when people are afraid that robots are gonna take over and replace humans? Can you discuss what the uncanny valley is, and then your views on it and on robots replacing all these humans? Sure. Yeah, so the uncanny valley is kind of a design issue.
It's a theory that a Japanese professor came up with a long time ago, and it's still very often thrown around in the robot design community. And the Uncanny Valley is that the more a robot looks like a human, the more we will like it, and then at some point, if it gets too similar, it just, like, it dips, and we become kind of creeped out by it. Yeah, we freak out, yeah. Yeah, anything that's too close but not exact is gonna freak us out. And there are studies that have, like, disputed this, but it just keeps hanging on because there seems to be something to it, right? And I think it's about expectation management. So I think that one thing that's really clear in robotics is that if you create something that looks like a cat, for example, I have one at home, it's not here. There's this cat robot, and when you watch it move you're expecting it to behave exactly like a cat. And the minute it makes a movement that doesn't look quite right, or, like, meows in a different way, it breaks the illusion and you stop, you're no longer able to suspend your disbelief and just believe in this thing being alive. So I think it just kind of creeps people out because it's behaving in a slightly different way than they expected it to. Whereas if you design something like a Roomba that's just a disc, or, I don't know, even the baby dinosaur robot, because no one has ever interacted with a baby Camarasaurus before, like, it's much easier to just imagine that whatever movements it's doing make sense for its being, you know. Yeah. And here's a weird random question. Do you like scary movies? I'm not trying to get all Scream on you, but do you watch, like, scary movies? I'm not a huge fan of scary movies, but I will watch them. I wasn't until recently; my girlfriend, who's into all this horror, got me into it.
But when we're talking about this, and you're talking about this kind of expectation and breaking the illusion, I'm wondering if that's what freaks people out about scary movies, when you get something that's, like, getting all contortiony. Like, do you think that's kind of similar? Oh, totally. That kind of messes us up. In fact, I think on the original Uncanny Valley graph, zombies are an example of something that we find creepy because it's human-like, but the zombies don't quite move like us. Yeah, yeah, I remember, I couldn't even watch the Michael Jackson Thriller video. Like, I was just like, nope, nope. People don't dance like that, you know? But yeah, I have a little bit more of your time, and I want you to help me out. So I want to kind of get into, you know, just kind of the fear around where technology is going and all that. And I want to talk about AI. Like, I've been a computer nerd my whole life, but I don't know anything about programming or anything like that, right? But I personally feel like there's a lot of crazy fear around artificial intelligence, and there's, like, books, like people are writing entire books. You get someone like Sam Harris on a podcast, they'll talk for three hours, or Elon Musk is like, oh man, you know. And they talk about, I forgot what that, what is it, Moore's Law? Where, like, computational power just keeps growing, and they think that just goes with anything that is technological. But, like, when I think about it, and I love neuroscience, I absolutely love it. But when I think about the complexities of the human mind and human behavior, I'm sitting here, I'm like, it's gonna take thousands of years for anybody to get close to human intelligence. So tell me if I'm dumb, or what your thoughts are on that, because maybe I'm just doing this to not be freaked out. What are your thoughts?
No, I think that most people who actually work in AI and actually are doing this research agree with you, that we are nowhere even close to understanding how we would even be able to get to an artificial general intelligence. Right now, the artificial intelligence that we have is very narrow. People project more onto it than is there, because it only works within very, very narrow constraints, and it can do specific things really well. But as soon as you take it out of that context, an AI can't, I don't know, like, I'm sitting here talking to you; if something bursts into flames behind me, I know what to do. Like, I know to leave and get the fire extinguisher, and an AI doesn't know that, right? So I think there's almost this religious belief that we're gonna have this massive breakthrough and suddenly have beyond human intelligence. And it's not that we don't often have technological breakthroughs that people didn't anticipate or thought would take longer than they did, but it's just very unlikely that this is gonna happen in the way that people fear, given where we are with the state of the art technology. And also, it's not clear to me that intelligence is linear. These people seem to assume that it is, like, now it has the intelligence of a newborn child, and then at some point it'll have, like, artificial super intelligence and, like, know how to outsmart us. And that's a lot of, like, projection of human-like intelligence onto something that's not human intelligence. When really, like, machines are already way smarter than us at doing calculations and whatnot. And then, yeah. And then we're way smarter than them. And, like, oh God, I forget his name. The guy who runs the website Pinboard, he's like this, he comments on technology a lot, and he's pretty funny.
And so he's made all of these examples that I think are pretty funny, which are like, you know, we're smarter than cats, but if you need to bring your cat to the vet and get it into the cat carrier, like, you're gonna have a hell of a time. Or he's like, yeah, who's to say that artificial super intelligence wouldn't spend its days in, like, deep depression about the state of the universe, or, like, worrying about artificial hyper intelligence. Or, you know, we don't really know what super intelligence means. It seems like that's a fun problem to think about, but it's getting a lot of attention. And I think that attention is very much influenced by science fiction and pop culture. It's like this constant recurring narrative that we are obsessed with, something coming to, like, outsmart us and replace us. Yeah, yeah. When I just really got into books, fun fact that I don't think even many of my listeners know, I didn't read until I was, like, 32. Like, I just didn't like books, and then I just started getting crazy about it. But the thing I got really into first was, like, human behavior and irrationality, right? And I love, like, Daniel Kahneman, and just learning about, like, different heuristics and biases and all these things. And I don't know if that's helped me think about AI a little bit better, because I'm like, if we don't fully even understand humans and how our brains work, and our decision-making process is so terrible, like, how many of us have friends that keep getting into terrible relationships, or keep going after terrible jobs, or keep buying stupid stuff, right? And now we expect people with that same irrationality, and we see scientists, like, you probably, you don't have to name names, but you probably work with people at MIT and you're like, what are you doing, right? And then we think that, like, oh, we're just gonna program a robot that just makes flawless decisions and is gonna take over the world.
Like, to me, I'm just like, I don't think so, because to get something from point A to point B, even learning about self-driving cars and the tests that they do on those, I'm just like, oh, I don't think I have to worry about, like, a Terminator extinction event anytime soon. So- Now, most robots, you just dump a bucket of water over them and you should be fine. Yeah, absolutely. So if anybody's listening, if the robot apocalypse comes, just take them to the ocean and we'll be good. And that's actually what I wanna talk to you about next: self-driving cars. So, and we don't even have enough time for all the questions. Let me see, let me prioritize this real quick. Okay, so we got self-driving cars, right? And we probably shouldn't beat them up, because, you know, we should treat machines nice and all that. But segueing from our conversation about artificial intelligence and how smart it is, I'm curious where your thoughts are, 'cause, like, Elon Musk is like, we're gonna have, like, self-driving cars tomorrow, and then the stock prices explode and all that. But in your book, and some other books I've read, it doesn't seem like we're there, or even close. And if we have time, I would also like to talk about, like, possible regulation and stuff like that. So as of right now, July 2021, where do you think we are with self-driving car technology? Yeah, I mean, we're not there yet. And the problem is really, like, we've gotten a good part of the way there, but the problem is you have this really long tail at the end of unexpected things that can happen that machines just can't deal with yet. Like, I keep seeing really funny ones happen. Like, did you see, there was one going around recently where an automated vehicle was behind a truck that was delivering, like, stop lights. It had stop lights inside it, and the car could somehow detect them. Or, like, a car will be behind a bus, and there's, like, an ad printed on the bus, and the ad has, like, a human on it.
And so the car keeps stopping because it thinks someone's trying to cross the street. Like, there's just so much stuff that, like, you have to teach the machine so much, you know, for them to be able to deal with any unexpected situation. And the problem also with the cars is that, you know, there's so much human psychology that goes into that too, in terms of people trusting them. Like, maybe the cars are already safer than human drivers, but people don't trust them. People don't wanna put their kid in an automated vehicle and send them off to school by themselves. What if an accident happens? People freak out. Like, we weight that way more. And then there's the psychology of handoffs within the car. So, like, there've already been some Tesla accidents where clearly the driver wasn't paying attention, and Tesla's like, well, the driver always needs to pay attention, because it's not, you know, fully autonomous. But, like, if the car is doing so much of the decision making, you cannot ask a human being to pay attention if the car is doing 90% of the work. Our brains don't work that way. So there's a lot of complications. Yeah, and it's interesting too. Like, I, you know, I love data and statistics and stuff, even though a lot of it is, like, misleading and all that. And one of the primary arguments is, here's how many accidents happen every year, but you get a self-driving car and, you know, it eliminates that. And a couple of weeks ago, I had, you know, famous skeptic Michael Shermer on here. I love skepticism and thinking things through. And something that he says, or, like, Steven Novella, when it comes to skepticism: when conspiracy theorists are like, oh, towers don't fall like that, like on 9/11, the first question is, how would you know what a tower falls like? But I kind of translate that over to this data around, like, self-driving cars are gonna be safer. Like, we've never had millions of self-driving cars on the road.
We've never had millions of self-driving cars next to millions of normal people either, right? How do I know that, you know, when 50% of the cars are self-driving and people look and see nobody in the car, they're not just gonna swerve off the road and cause, you know what I mean? Like, we just don't know. So I guess, you know, one of my last questions is, and not just self-driving cars, but, like, robots and technology, and I don't know if, you know, the ethical debate comes into it as well, but what are your thoughts on where we're at with regulation? Like, I personally find it kind of crazy that a car company could be like, hey, we have cars that could drive themselves, and they're like, okay, we'll just put them on the road. You know what I mean? Like, you can just do that. So what are your thoughts on regulation? Do you think that things are gonna be more regulated? If we start making, like, robot humanoids or pets or whatever, how much should the government be involved, especially since they don't know diddly about technology? Yeah, yeah, that's the main problem right there, right? I mean, so my friend Madeleine Elish is an anthropologist, and she always says that when we talk about these technologies, we need to stop saying deploy and start saying integrate, because it prompts the question: into what? Like you just said with the, you know, what happens if we have all autonomous cars or half autonomous cars? Like, what happens around that to the behavior in the systems? So it's often, like, way more complex than just putting a technology into the world and letting it, like, do its thing. And I do think, in terms of regulation, like, there's been a lot of recognition recently that we need to be thinking about this more, whether that's, you know, the impacts of AI on the workforce, whether that's regulation for automated vehicles and drones, whether that's, you know, consumer protection issues in privacy and data collection.
There are so many issues happening right now, biased AI. It's huge, huge, right? And you're right that, you know, the government, I mean, it's not that they know diddly squat, but, you know, we need way more technologists who are embedded or talking to policymakers than we currently have. And it has to go both ways. You have to have regulators talking to the people developing the technology too, which I didn't used to believe, and I believe now, because standards get set in technology and the design process early on. So I think that we just need way more cross-pollination, and it needs to happen sooner rather than later, because, you know, while we aren't very far with developing artificial super intelligence, we are already, you know, deploying or integrating a lot of these AI technologies in the world right now that we need to be thinking more deeply about. Yeah, yeah. Something I've really been interested in lately too is, like we were talking about, the biases in algorithms and AI, and, like, I've been reading a lot about that. And, like, there's a book, a great book, Algorithms of Oppression, and stuff like that, or even Weapons of Math Destruction, just phenomenal books. And I'm like, wow, you know, so these are things that policymakers should be talking about, and half the time it seems like when they're talking about it, they're like, Google censors, you know, conservative news on here. And it's like, hey, there's bigger issues going on than, you know, because they don't even understand that, you know, Google's showing me what I wanna see. Like, me being, like, a left-leaning liberal, it's not gonna be like, hey, you wanna check out this Fox News article? It's just not gonna do that. It wants to keep me on the platform and all that. But yeah, I get concerned, because I've only really watched, like, the hearings when they're, like, talking to, like, Mark Zuckerberg or, like, Jack Dorsey.
And they're like, hey, so do you think it's, you know, it should be legal that I can, like, tweet from my Facebook, and you're like, what? Like, they don't even understand how social media works. So, like, you talking about robots and AI, I can only imagine how little they know about that stuff if they don't even know how Facebook works, you know? So yeah, it's all really interesting. I wish I could just keep you here all day. But I'm gonna let you go, but before I let you go, where can people get the book? Where can people find you? And is there anything else cool that you're working on that we should stay tuned about? So the book, I don't know, any retailer, I don't wanna promote the major ones, but obviously the book is there. The New Breed is what it's called. And then you can find me on Twitter, which is probably the best way to find me: grok underscore. And, I mean, my maternity leave ended today, but I'm back in the lab in the fall, and we have some experiments that we're already cooking up that, I don't know, I can't say too much about yet, but I'm really excited about. That's really cool. And before I let you go, I have one question that I've been trying to figure out since I was aware of your existence. What does your Twitter name mean, grok underscore? So I used to have the original grok without the underscore, and then Twitter removed it because I didn't use the account enough, way back in the early days of Twitter. Grok is from a science fiction novel, Stranger in a Strange Land. I actually don't like Heinlein that much, but I've always loved this word. It's a Martian word; to drink is the literal translation, but it means to understand something fully and completely. I love it. I love that so much. That's amazing. Yeah, I will link Kate's social media and The New Breed down below. So everybody follow and get a copy of the book. Kate, thank you so much.
And I feel honored that on your first day back from maternity leave you spent your time with me. I appreciate that. But thank you so much for coming on. Oh, thank you for making my first day back so much fun. Absolutely.