You know, help us out, do the thing you do to goose the algorithm, and like and subscribe so you can find out about the next time we're live. All right, there's the feedback and we are ready to start.

Our guest today is Cade Metz. He is a technology correspondent with The New York Times covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging tech. Before that, he was a writer at Wired magazine. His new book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World, was just released by Penguin. And Ashlee Vance, the New York Times bestselling author of Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, calls it one of the most surprising and important stories of our time. Cade, welcome to the Earned Media Podcast.

Glad to be here.

Now, today: the state of artificial intelligence, in three acts. We'll start with a discussion about what AI is and what it isn't. We'll talk about what it's really capable of and who's most likely to benefit from it. Then we'll talk about the ethical challenges and dangers that AI poses and how likely they are to come to pass. We'll wrap it up with a discussion about the impact of public relations and earned media on the technological development of AI, with New York Times reporter Cade Metz, author of the new book Genius Makers, after this.

Act one. These days it seems like every software product or app maker out there is promoting themselves as having artificial intelligence. The term has become overused and widely misunderstood. But what really is artificial intelligence, anyway? I'm here with New York Times reporter Cade Metz, who has a new book out on artificial intelligence called Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. He's been tracking developments in this space for over a decade. Cade, why do you think Ashlee Vance called Genius Makers surprising? What surprised him about the book?

Well, it's about an idea, a particular idea in artificial intelligence that dates back to the 50s, that even people in the field were skeptical of and thought would never work. And then around 2012, it started to work. It's called a neural network, and we can talk about what that is, but basically this single idea started to work about a decade ago, and it is now moving into so many of the technologies we use on a daily basis, changing the way we use our phones, the way we use robotics, what those services and systems are capable of. It's something that was surprising even to the makers of the technology, and that's what the book is about.

Why now? I mean, you've been tracking artificial intelligence for 10 years. Why did you write this book now?

Well, because that change had really happened. It's a very real change. And I like that you talk about AI being this term that is overused and applied to everything. It's confusing for the layperson, what is and what is not AI. What I wanted to do with this book is push all that aside and show people what has really happened, the way this single idea has actually changed things. And I wanted to do it through the people who built this technology. Any good story is about people, and that's what this book seeks to do.
It seeks to go back to the roots of this idea, which reach all the way back to the 40s and the 50s, and follow it up to the present day, through these key moments around 2010 and 2012 in particular where the idea started to work, and look at how it might change our world in the years to come as well.

So the term artificial intelligence truly is as broad as the term medicine, right? I mean, what is artificial intelligence?

It is, or maybe it's even broader. It's so difficult to define. What I often say is that the original sin of the founders of this field, the people who gathered at Dartmouth College in the mid-50s and coined the term, the original sin was that they called it artificial intelligence. It's immediately misleading, right? It was certainly misleading in the 50s, when we were nowhere close to having a system that behaved like the brain, and it's still misleading today, because we don't really have a system that behaves like a brain. We, and I often underline this for people, don't know how the brain works. Even now we don't understand the brain. So recreating the brain is, from the get-go, quite a task. We can mimic the brain in very specific, limited ways, but we can't mimic everything that it can do today. That term, though, implies that we can. It implies that we have systems that mimic the brain in extravagant ways, right? It brings up these visions that are lodged in the back of our minds from science fiction movies and books. We hear the term and we imagine HAL in 2001, at least subconsciously, or so many other tropes from science fiction. And that's not exactly what's happening. We're moving in that direction in some ways, and you can see sparks of that, but we're a long way from that as a reality.

So what is artificial intelligence? When someone says, oh, my software has artificial intelligence, or my app has artificial intelligence, what claim are they making?

Well, I don't want to speak for them. They're making all sorts of claims that in many cases are bogus, but let's do this. Let's talk about a subset of that and talk about what is actually working, okay? Let's talk about that idea that I mentioned, the idea of a neural network. That is central to so much of what has happened over the past decade, and it's very easy to understand. A neural network is a mathematical system that can learn skills by analyzing vast amounts of data. So the example I always give is: if you have thousands of cat photos, for instance, you can feed those into a neural network and it analyzes those photos. It looks for the patterns that define what a cat looks like. And in that way, it learns to recognize a cat. That's what drives the system on Facebook, for instance, that can recognize faces in photos. It's what drives the speech recognition service on your iPhone: when you speak commands to Siri and Siri can recognize the words you say, that's also a neural network. It's trained in the same way. You feed thousands of hours of spoken words into a neural network, and that's usually old tech support calls, of all things. It analyzes those spoken words, looks for the patterns that identify particular words, and it learns to recognize those words. That basic idea is used in so many other things that we can go into during this conversation. But the reason that is important is that the system is learning the task. It can learn the task by analyzing the data much quicker than engineers could ever tell the system how to complete that task.
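[Editor's note: to make the learn-from-labeled-photos idea above concrete, here is a minimal, hypothetical sketch in PyTorch of that kind of training loop. The tiny network, the random tensors standing in for cat/not-cat photos, and every name in it are illustrative assumptions, not anything from the book; real systems train on millions of real images.]

```python
# A toy neural-network classifier, sketching the "learn from labeled
# photos" idea described above. The random tensors are stand-ins for
# real cat / not-cat images, for illustration only.
import torch
import torch.nn as nn

# Placeholder data: 64 fake "photos" (3x32x32) with 0/1 labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # scan for local patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),                  # score: cat vs. not-cat
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)   # how wrong are the current guesses?
    loss.backward()                  # trace the error back through the net
    optimizer.step()                 # nudge the weights to do better
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

[The point of the sketch is the loop itself: no engineer writes a rule for whiskers or ears; the network repeatedly adjusts its own parameters until the patterns that separate the two labels emerge from the data.]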
It can learn it, as opposed to engineers defining, line of code by line of code, rule by rule, what a cat looks like. That is too difficult for engineers to do. Even a cat photo is so complex, you can never define all the possibilities: all the different breeds of cat, all the different angles you can shoot a photo from, all the imperfections in the photo, the movement. You can never define all the possibilities. But if you give a neural network all those photos and let it do the analysis, look for those patterns, it can learn that skill on its own. That's a powerful thing. And that's...

So is a neural network machine learning? Is it the same thing?

It's a subset of machine learning. Machine learning is another general term. It just means a machine that can learn a skill by analyzing data. But this is something very different from the machine learning we've had in the past, because it learns from such large amounts of data and it learns these very, very powerful and specific skills. It learns from more data than we humans could ever wrap our heads around, right? We're not seeing all those patterns in the cat photos. It looks for and finds those patterns on its own and learns that skill largely on its own. It's a powerful thing.

But it's much easier to teach a machine to do one thing over and over again than it is to get a machine to think like a human, right? So what is this concept of AGI? What is artificial general intelligence?

Well, people have dreams of taking this idea and expanding it to the point where you have a giant neural network that can essentially analyze everything that we would experience in our daily lives, and somehow from that build a system that can do it all, right? So think of it this way. I'm just speaking in broad strokes here, but if you could model the world digitally, and then you could have a neural network analyze all the patterns in that model of the world, people believe you could reach a system that could do anything. I'm speaking broadly, but that's the basic idea here. We're still a long way from that. There are two labs in the world whose stated mission is to build AGI, a system that can do anything the brain can do. That's DeepMind in London and OpenAI in San Francisco. But they don't necessarily know how to get there. That's gonna take a lot of doing. Right now we have systems that work in very specific ways: speech recognition, image recognition. You're starting to have systems along these lines that can analyze vast amounts of written text, so digital books, Wikipedia articles, other content from the internet, and learn how we humans piece together English and other languages. And a system can learn to not only recognize the words you're saying but actually understand those words, understand what you're asking for when you're talking to a machine, and then respond to it. We're seeing a lot of progress there. We see a lot of progress with robotics. If a system can recognize a cat in an image, that can help a robot learn to recognize a cat, learn to recognize other things around it, and respond to them. We're seeing progress there as well. So there are these particular areas where we've seen a lot of gain over the past 10 years, and we'll continue to see gain. But that doesn't necessarily mean we're gonna see a machine that can do anything the human brain can do anytime soon. It's one thing to create a robot that can sort products in an Amazon warehouse and pack an order.
It's another thing to create a robot that could have this conversation with me.

So when you talk about these two labs, in London and in San Francisco, which you write about in the book, who are focused on achieving artificial general intelligence, you use the word dream. You use the word hope. Almost as though it requires blind faith. And the chapter on that in the book is titled "Religion." So why is the AGI argument a religious argument?

Well, like with any new and ambitious technology, you need belief in order to build it, okay? If you're gonna build Facebook, you've gotta believe that you can do it. If you're gonna raise the money and attract the talent, you better believe it. This is that same attitude, but applied to an enormously more complex and ambitious project. So if you're gonna build AGI, you better believe that you can do it. And a lot of people really do have that belief, but that doesn't mean that you're gonna do it as soon as some people say or may think, right? Believing is one thing and actually doing it is another. And what you do see is that there are people in the field who really believe this is gonna happen. You see other people who are just as bright, just as accomplished, backed by just as much money, who think that idea is bunk and that's not gonna happen anytime soon. You get a real disagreement here. And it is almost like a religion, where you have this kind of spectrum of belief, where there are various places along that spectrum where the belief rises and falls.

So your colleague Thomas Friedman, in his book from several years ago, Thank You for Being Late, wrote an entire chapter on Moore's law. And Moore's law basically says that microprocessors double in capacity every 18 months and also halve in price. So that's exponential growth from a processing power standpoint. Yet we hear from the folks in crypto that the ledgers take an incredible amount of processing power to maintain. We hear from you, here and in your book, about the incredibly intensive computer processing demands of AI. But does Moore's law somehow ensure that AGI is right around the corner?

Some people argue that it does, that as we get greater and greater processing power, we're on a path to that. I mean, generally speaking, of course, the way this works today is: the more processing power that you have, the more data you can analyze, and the faster that idea that I've been talking about can progress. And that's what you're seeing now. You're seeing these very large companies apply more and more processing power to the problem, and you see it improve. A really good example that I've been writing about recently at The Times is a project called GPT-3, which came out of OpenAI, the lab in San Francisco founded in part by Elon Musk. It's that giant neural network that analyzes all that digital text, so all those books and Wikipedia articles and other content from the internet. It spends months analyzing all that data, and we're talking about hundreds of chips in a giant data center that crunched through all that data. What we have seen with projects like that is that the more computing power you throw at the problem, the better the technology gets. So there is some truth to that. That doesn't mean that we can scale this all the way up to AGI at the same rate. We're already starting to see limits to that particular idea. We're running out of juice when it comes to analyzing digital text in that way.
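[Editor's note: as a back-of-the-envelope illustration of the exponential framing in that question, here is a small sketch of what "doubling in capacity every 18 months while halving in price" compounds to over time. The doubling period and time spans are the question's premise, not measured data.]

```python
# Compound Moore's-law growth: capacity doubles, and price halves,
# every 18 months (the premise stated in the question above).
def capacity_multiplier(years, doubling_period_years=1.5):
    """How many times capacity has grown after `years`."""
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 10, 15):
    cap = capacity_multiplier(years)
    # Capacity per dollar compounds twice over: more capacity AND lower price.
    per_dollar = cap * cap
    print(f"{years:>2} years: ~{cap:,.0f}x capacity, "
          f"~{per_dollar:,.0f}x capacity per dollar")
```

[Even at roughly a hundredfold capacity gain per decade, the interview's point stands: raw compute growth improves these systems, but it does not by itself close the gap to AGI.]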
Will it work as we acquire more data and model the world the way that I was talking about? We'll have to see. But you do see a growth curve there.

Talk to us for a minute about whether or not we should be worried that AI is gonna take our jobs. Obviously, I understand if you're sorting products in a warehouse, AI has already taken your job. But should knowledge workers be concerned that AI is gonna take their jobs, or is it already taking their jobs?

Well, when people talk about concern over jobs going away, the real worry there is whether or not it's gonna take them away in the short term, right? Before you have time to train the population for new jobs. We haven't really seen that. And actually, the warehouse problem you talk about can be illustrative here. So yes, we now have robots that can help in the warehouse, help sort through all the goods that come into an Amazon warehouse and need to go back out. In the past, it's always been humans that had to sort through those bins of random stuff. We're starting to see systems that can do that. Now, you might say, well, that's gonna take away all those jobs. But if you step back and you look at the situation, what we're seeing, particularly amidst this pandemic, is that we're relying more and more on Amazon. Amazon needs to grow these warehouses. It's having trouble staffing those jobs in a lot of places. I've been to these warehouses in areas where they have trouble getting the human labor as that market grows. So as the robots come in, they're not necessarily replacing people. In some cases they are, but we need to think about this in a broader sense. Another good example is the trucking industry. We're on a path towards self-driving trucks, and you can say, well, that's gonna take away jobs for truckers. In the long term, it certainly will. But at the same time, the number of truckers is on the decline, and that population is aging as well. So what we're not seeing is vast amounts of jobs suddenly go away. And what we're also seeing, and I wanna underline this too: in all these areas, the technology is still flawed. In many cases, it can't necessarily replace a human, or it needs to be used alongside a human. And so even in these areas where it's working well, you're not necessarily seeing a huge threat. We now have these systems that can generate language on their own, like GPT-3; that's one of the things it can do. It can generate blog posts and articles, and in some ways they're remarkable. But it can't do it every time as well as a human reporter can, a human writer. We still need people to do that sort of thing. So in the long term, machines are gonna do more and more. But that kind of moment when all the jobs vanish, we're not seeing that yet. We're not seeing evidence that will happen anytime soon.

In his book AI Superpowers, Kai-Fu Lee published this "risk of replacement for cognitive labor" graph. He calls the upper right the safe zone, and the jobs he says are safe from being replaced are concierge, social worker, psychiatrist, PR director, criminal defense attorney, CEO. And then the danger zone jobs he lists are customer service rep, radiologist, personal tax preparer, insurance adjuster, consumer loan underwriter, telemarketer. Any thoughts on this? Have you seen this? Did you give this some thought as you were writing your book? Do you agree, disagree?

Yeah, I mean, generally speaking, that sounds about right.
That second group, those are the areas where machines are getting better, right? We're seeing machines that can better understand language and better respond to it, and that fits into so many of the categories you talked about. Machines already can deal in simpler ways with Excel spreadsheets and other simple forms of technology, and they can replace humans who have done that in the past. We're seeing that. You mentioned radiologists. That's one place where these neural networks are really powerful. They can look at medical scans, whether they're X-rays, CT scans, or eye scans, and look for signs of disease and identify those signs. That can help do the job of a radiologist and other doctors. But even then, hospitals have been slow to adopt that technology, because we wanna make sure these systems do the job well. Typically they're used in tandem with a human, and for the foreseeable future, that's the way it's going to work. Where they can really be powerful today is in countries where you may not have a doctor on hand, and the machine can be the first line of defense, so to speak: it identifies a possible problem and alerts the doctor. So in the long run, in the distance, Kai-Fu Lee is right on all those areas, but that doesn't mean all those jobs are just suddenly gonna vanish tomorrow.

The ethical challenges and existential threat of artificial intelligence, with Cade Metz, author of Genius Makers, after this.

Cade, is the public perception of what's possible with AI real or overblown?

Well, the answer to a lot of these questions is a little bit of each, right? It is overblown, especially as people like Elon Musk have stood on their soapboxes and said that we're on a path towards AI destroying humankind if we're not careful. That gives a false impression. It makes it seem like that risk is close. But there are areas today where we're already seeing issues with the technology, and areas of real concern. These systems do learn from enormous amounts of data, and that means that we're not always gonna see everything they learn, or realize where they might learn things that we don't want them to. And the prime example right now is that these systems can be biased. They can be biased against women and people of color. If you give them data that's biased, they are going to exhibit that bias. Take systems like GPT-3 that train on all this text from the internet. Anyone who's used the internet knows the internet can be biased. It includes hate speech. It includes language that you don't necessarily want your machines to learn. But the way these systems work, they do learn those flaws as well as the positive aspects that you might want them to learn. That's an area of concern. There are so many other areas we could discuss. Surveillance: these technologies are particularly powerful when it comes to surveillance. In China, this type of technology is already used to identify an ethnic minority. That's a real area of concern for a lot of people across the globe. Autonomous weapons: we're on a path towards that sort of thing. If a self-driving car or robot can recognize the world around it, so can a self-flying drone. That can be a way of targeting things on the battlefield. These are things that we need to start thinking about now, separately from this idea that AI will somehow turn on humanity and destroy the world.
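[Editor's note: the "biased data in, biased system out" mechanism described in that answer is easy to demonstrate. Here is a small, hypothetical scikit-learn sketch; the dataset is synthetic, with a skew deliberately baked into the historical labels, so every number in it is invented purely for illustration.]

```python
# Toy demonstration of the "biased data in, biased model out" mechanism.
# We bake a skew into synthetic labels, then watch a model faithfully
# reproduce it. Nothing here is real-world data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)       # 0 or 1: a stand-in group attribute
skill = rng.normal(0, 1, n)         # the thing we *should* judge on

# Historical labels skewed against group 1, independent of skill:
label = (skill + rng.normal(0, 0.5, n) - 0.8 * group > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# Same skill, different group -> different predicted probability.
same_skill = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(same_skill)[:, 1]
print(f"group 0: {probs[0]:.2f}  group 1: {probs[1]:.2f}")
```

[Note that simply dropping the group column would not necessarily fix this in practice, since real data tends to contain proxies that correlate with it; the model reproduces whatever pattern the data carries.]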
You know, the section of the book on autonomous warfare is just fascinating. Talk to us about Project Maven. What is that?

Well, it's what I was just talking about, right? It's an effort by the Department of Defense to build a system that can identify objects in drone footage, so identify cars and people or buildings. That's a way of doing surveillance for the military, but it's also a path towards autonomous weapons. That same technology can be used on a system that has a weapon on it. There were many tech companies who started working on this project with the DOD. What you see in the book is that the people who specialize in this technology were mostly academics. They went to work for these giant tech companies about 10 years ago when the technology took off, and so that's where the DOD went for the expertise. These technologies were being built inside the Googles and the Microsofts and the Amazons, and those are the companies that went to work on this project. At Google, there was an issue. Many Google employees were upset by the fact that Google was working on such a project and protested the effort. Google ended up essentially pulling out of that project and left it to other companies. It's an example of where this concern can pop up, even inside the companies building the technology.

And where are we today with drone warfare? I mean, do killer robots exist?

Well, we're not there yet, but we're on that path. I just did a piece in The Times about various companies that are building self-flying drones, and these drones are very good at flying on their own and identifying various objects around them. The companies building this technology are willing, many of them say, to put a weapon on it. So they see this as a path towards that type of thing, what people call killer robots. What they mean is an autonomous weapon. We're not there yet. I spent a lot of time talking to people inside the DoD as well. It's something that they are working towards, but we're not quite there yet. What we do need to do, though, is as we work in that direction, think about the consequences. When do we want a human in the loop to make sure these are used in ethical ways?

If I remember correctly, at one point in the book there is a military official, Jack Shanahan, who's quoted as saying, we don't think any defense systems moving forward should be without an AI component.

Absolutely. But again, that term can mean a lot of things. These are powerful technologies. It's no surprise that the military wants to use them. That doesn't mean we're gonna put systems out into the field that are making all the decisions. We don't have machines that can do that yet. You very much need human help, and a lot of the thinking is that that will continue to be the case. You will continue to have a mix of machine and human, but we do need to keep an eye on how this progresses.

You talk about how we need to start thinking about these things now. Are there any proposals for regulating AI that have promise?

There are. It's a difficult thing when it comes to regulation. The technology is improving at such a rapid rate; the problem is, often you get the regulations out and then the technology changes. Just last week we had proposed rules for regulating AI from the European Union, and they're very broad. They look at restricting face recognition in public places. They look at restricting the way these systems can generate disinformation.
We've talked a lot about how these systems can recognize sounds, images, or text. If you flip them upside down, so to speak, they can also generate all those things. If you can recognize a cat image, you can generate a cat image. And what we're seeing is these systems can generate blog posts, videos, even the sound of your voice. That's a danger. And the EU rules also seek to restrict that, to force companies to identify what they call deepfakes, these false images and texts created by machines. That's an area of concern. There are other areas where not just the EU but other governments are starting to think about how we can regulate these things. But it is very difficult, I do wanna say that. If the EU cracks down in some ways, that's not going to prevent other countries and companies in other parts of the world from doing similar things. There are some of these issues where we need to think about this globally. Autonomous weapons are the best example, right? If you crack down on that in one country, that doesn't mean it's not gonna happen elsewhere.

What do you think of Shoshana Zuboff's recommendation that it's really data collection, at the top of the funnel, that needs to be regulated?

Well, we've spent this podcast talking about the fact that these systems learn from data. It's fundamental, and you see this in so many ways. In the book, you see this pop up time and again, these efforts to collect the data, and it's the big internet companies that have it, right? That's the real currency, and that's how they're able to build so many of these systems. You and I contribute to that on a daily basis just by uploading all this data to these services. Or another example I often give: anyone who's ever used an internet service that wants to make sure you're not a robot, right? It gives you what's called a CAPTCHA, which is just a test to make sure you're a human. Odds are anyone listening to this, watching this, has gotten that CAPTCHA that says, show me which of the pictures I'm putting on the screen include a car or a stop sign. And we do that. What you're doing is identifying the photos that Google can use for a neural network to train its self-driving cars. That's what you're doing there. You're telling Google, here are the cars, here are the stop signs. Later that gets fed into these neural networks that can learn to recognize those objects. So on a daily basis, we're contributing to these systems. They're using our data and our brain power in building these increasingly powerful systems.

Cade, you write about how Google invested in AI before Microsoft for many reasons, but you mention that Microsoft's more mature status as an enterprise may have held them back. Will Google and Facebook reap the rewards of AI, or is there a whole new class of startups out there who will somehow inherit the mantle and displace them?

It's hard to see startups displacing them at this point. They are in such a powerful position. The currency here is not just the data, it's the processing power, as we talked about. Those are two things that the big companies have. They also have the money, and that means they have the talent. They have a real advantage right now. There are cases where startups can succeed. Those drone startups I talked about are a good example. They're pushing forward in that area, looking to work with the government. But when it comes to really pushing these types of systems we've been discussing, the big companies have an advantage.

What about China?
Is the US in an AI arms race with China?

In a way it is. I think this is another thing that's often misunderstood, but as you see from the opening scene in my book, from the moment this idea started to take off, a Chinese company was there: Baidu. They were there alongside Google and Microsoft, and you see this as a real area of interest in China. And as Kai-Fu Lee and others have mentioned, China can potentially have an advantage in the long run because it has such a large population. That generates more data in this day and age, in the internet age. In theory, they can produce a greater number of researchers. That's also important. But the landscape is more complicated than people might expect. In the past, new technologies were built inside government labs and held secret. The way this technology is developed is a little different. For various reasons, as you see in the book, the big technology companies openly publish their latest results. It's academics who believed in this idea who came into these big companies, and they wanted to keep publishing, so the Googles and the Facebooks keep doing that. What that means is that the latest ideas are available to anyone on earth, including rivals of the US. So you can't necessarily crack down on your exports to certain countries to bottle the technology up. At the same time, and this can be hard for people to understand as well, the US really relies on immigrant talent in this area, including Chinese talent. You talked about Project Maven. There were Chinese nationals who worked on that project at Google. People should realize how important immigrant talent is to the US, particularly in the science and technology fields. So you can't just shut down your borders to foreign researchers, right? You end up shooting yourself in the foot. So there's a careful balance that goes on here when it comes to the competition. The concern here in the US is that the US is not keeping pace, or won't keep pace in the long run, with its rivals; that so much of the talent and the progress is inside these big private companies, excuse me, public companies, and not in government labs, not in universities. And they wanna try to correct that balance. They don't want Google, say, driving everything, because Google's aims and ambitions might not align with the US government's, right? Google is a company headquartered in the US, but it's a multinational company, and it's driven by other motives. And part of the concern is that the progress is happening there and not happening in government. In China, the situation is completely different, right? You have a synergy between government and industry that you don't necessarily have here.

Can AI solve the fake news problem? And the role of PR in technological development, with Cade Metz, after this.

We're back with Cade Metz. He's the author of a new book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. Cade, talk to us about Apple for a moment, because they're pushing the AI envelope as well with Siri, right?

Absolutely. What people may not realize, though, is that Siri debuted with a different technology. It didn't work as well as it does today. The neural network idea came in after Siri's introduction, and what you see today is a more powerful version of that. Apple comes at this a little differently than other companies. There was this real race for talent between Google and Facebook and some of these other very big players. Apple approached it a little differently.
They're not as concerned with jumping on the big names as maybe some of these other companies. But Apple has started to hire some really important people of late. John Giannandrea oversees their work now; he was once head of AI at Google. Ian Goodfellow, another key player and character in my book, is now at Apple. They see where all this is going, and they certainly want to roll this technology into so much of what they're doing. But you're right, Siri is at the heart of that. The areas where this technology is starting to work are areas that can benefit a digital assistant like that.

And can AI solve the fake news problem?

No. People need to think about this a little differently. I know that Mark Zuckerberg has gotten before Congress and said such things. Identifying fake news is difficult even for a human, right? It's a judgment call. You and I may disagree on what is and is not fake news, so to speak, and you see this all the time. If we as humans can't agree on what is and is not fake news, or what is and is not hate speech, how can you build a machine that can do that? It's a difficult problem even if you're talking about humans doing this. And right now it is humans who make those judgment calls, and those judgment calls get disputed. The other thing is that these same systems that can recognize, say, fake news can also generate it, right? I talked about that. If you can identify what a cat looks like, you can generate an image of a cat. And that's what we're seeing, too: systems that can generate fake images and fake videos. So on the one hand, it can help solve the problem. On the other hand, it's creating the problem. It's another arms race. This is a real area of concern, that we're pushing towards a point where the systems can generate fake news at a scale that humans never could. If they can do it as well as humans, then they can do it much better than humans, just because machines can do this at a much higher volume. That's when it gets really scary.

We're all looking at these news feeds all day, and obviously that takes the concept of "if it bleeds, it leads" to a whole new level. As a result, the news has become more sensationalized than ever. And you talk about how the media has overblown the promise of AI. But are incremental technology gains enough of a scoop to garner coverage in this age of sensationalized news media? Or are reporters and journalists really being forced to gin up their stories and make them seem as scoop-worthy as possible, just to get them printed?

Well, obviously there's a lot of that. A lot of the coverage in this area is not that great. But I can only speak for myself and my colleagues at The Times. We're looking to tell people the reality here, what is really happening, and not gin things up, as you say. It can be easy to get attention for your story by ginning things up, but that's not my aim. It's not the aim of The Times. I'm not going to deny that it goes on elsewhere, but I think that people need to realize that there are news outlets that are certainly working hard to tell you the reality and not just the hype.

Who are some of the news media outlets that you read, that you pay attention to, that you appreciate?

Well, I read my colleagues at The Times. There are certain people who I like in this area at the Wall Street Journal. There are some talented people at Wired, where I used to work. It's a tricky thing to cover.
You really have to understand it at a deep level, but then be able to relate that to the layperson. So you need those two skills. You have to use different parts of your brain. You need to understand what's really going on, but then you need to step back from that and explain what's happening at a level that anyone can understand.

In the book, you write about public relations and how a lot of the AI organizations and individuals in the space used PR to their advantage in some way. Is there any sort of generalization or general statement you can make about how PR benefits organizations that are developing AI?

Well, we talked about AI being this term that gets thrown around a lot. If you just apply that term to whatever you're building, that might be able to help you. But as you look at what plays out in the book, it shows you in some ways how Silicon Valley works in general. Silicon Valley is often built on that sort of hype. If you want to attract the talent you need to build something, if you want to attract the money, you do need to have that belief, and you need to voice that belief and tell people that your technology is around the corner. That's how you attract the money and the talent. And you see that in the book, even with these incredibly ambitious projects. Sometimes it's conscious, sometimes it's unconscious, but that's definitely part of what goes on here: people overpromise. And some people may know they're doing that, some people may not. Some people may see explicitly the power of that, or they may not, but it's happening. That's a very real phenomenon, which I think you can see in the book.

Cade, you write that OpenAI said to the press that one of its systems was too dangerous to be released. Was that a PR stunt? And if it was, what were they hoping to gain?

Well, I don't know if it was a PR stunt or not. You can never get inside someone's head. But a lot of people accused them of that. And that is another phenomenon in this world that is counterintuitive: if you say something is dangerous, it makes it seem more powerful than it is. If you say it's dangerous when it's not necessarily, it makes it seem more powerful as well. And there's a chapter in the book called "Anti-Hype," when Elon Musk comes into the picture and he's telling the world that this technology is dangerous. One of the effects of that is that people assume it's more powerful than it is. And that can be kind of this strange inverted PR tool. Again, who knows how conscious that is. There is this great scene in the book where Musk and Mark Zuckerberg sit down to dinner, and even the people at dinner aren't sure what Musk is really thinking. What are his aims here, when he says that AI is going to achieve human levels and potentially turn on us? They're not sure whether or not he really believes that. That's something we're all going to struggle with, really knowing what people's real beliefs are. But that sort of strange, unexpected PR phenomenon is real. It has played out time and again, and you see that in the book.

You quoted Musk as saying we're headed towards an existential threat, or something civilization-ending. Obviously we're not in his head; we don't know what he meant. But you spent a lot of time thinking about it. Do you have a gut feel about what he's trying to do? Can you explain that thinking in any way, or unpack where he's going with that?
Well, again, this is tied to this AGI idea, that we're working towards a system that can do anything the human brain can do, but backed by enormous amounts of processing power. There is this fear that once it is more powerful than we are, it will turn on us; that we will build these systems without realizing all the unintended consequences; that they will learn things that we don't want them to learn; that they will be motivated to do one thing and we won't see the related motivations that are built in there. A lot of people, and it's not too strong a word, scoff at that idea, including people who were at that dinner with Elon Musk who say that's not something we need to worry about today. But a lot of people in the field feel differently and do voice many of the same fears and concerns that Musk does. Again, you can't get inside his head, but there are a lot of people who see it that way.

So obviously, for global news, The New York Times is a gold-standard brand, and for tech news, Wired magazine is a gold-standard brand. Could you have written this book and gotten the same level of access if you weren't a Wired or New York Times reporter?

Probably not. But that doesn't mean it's easy. It's years of talking to people, getting a little piece of information, taking that information to another person, seeing if they can build on it, and as they build on it, going back to the first person. It's a hard thing to do, particularly when you're writing about characters inside these very big companies, where it can be harder to get them to talk. It does help to have The New York Times name behind you, but it also helps to have a track record of stories and relationships. So it probably helps some to have those names behind me, but make no mistake: it's a hard job on a daily basis at The Times, and it's certainly hard to build a book like this.

Okay, that's Cade Metz, author of the new book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. Thank you for joining us.

Thank you.