Welcome back to the ITU headquarters here in Geneva. It's day two of the AI for Good Global Summit. And I'm really pleased to be joined by someone who's kind of, well, put the cat amongst the pigeons, if we're going to put it that way. It's Gary Marcus. He's a writer and an entrepreneur. And basically, if I misquote you, you're saying, okay, AI, what's the big deal? There's too much hype here. Is that right?

Well, I think AI is a big deal, but there's also a lot of hype. We've finally figured out how to do some basic things, like speech recognition, and that's great. And it really has practical real-world value. But there are a lot of things that you might imagine an intelligent creature would be able to do that we've been working on for 60 years and not made that much progress on. So we still don't have machines that can read, for example. My friend Stuart thinks we might be there in 10 years, but I don't think we've made very much progress so far.

I saw that robot, Sophia, very feminine, with humanlike features, and she can make conversation. That's actually pretty advanced.

Yeah, but if you talk to Sophia for a while, you'll realize that Sophia doesn't really understand what's going on. You can program things like Siri, which have kind of canned responses to particular questions and will sound very clever. So you can ask Siri what Blade Runner is about, and some clever programmer said that the movie Blade Runner is about intelligent digital assistants, or something like that. So you can build in some dialogue, and it's cute, but that doesn't mean that the system really follows along with what you're talking about. You know, as we're doing this interview, you're starting to understand my position about things and what I believe. We don't have machines that can do that. You could talk to Sophia for a few minutes and then try to get her to repeat what's happened and say, well, what kind of person is this guy? She doesn't know.
Well, we've all had dates like that.

Yeah, well, usually that's because the date's not paying attention, I would guess, speaking for yourself.

But seriously, though, when you look at things like autonomous weapons that can kill, or on-the-road decision-making, that's pretty advanced.

Well, it's not that hard to make an autonomous weapon. It's hard to make one that's accurate. The state of the art with perception is that we can build AI systems that work, say, 95% of the time, which sounds impressive. But if you built that into an autonomous weapon, it's correct 95% of the time and incorrect 5% of the time. That's really pretty poor performance. Same thing with a driverless car. If you have a pedestrian detector that works 95% of the time, that sounds great. But if you think about it, and you had 10,000 cars on the road that are missing pedestrians five times out of 100, that would be complete chaos.

Aren't you kind of basically just taking the punch bowl away as the party is getting going?

Well, I think that the punch bowl is deserved at some point, but we haven't earned it yet. And we need to be careful about it. What I was talking about here was that we might need new ways to approach AI in order to get to the advanced stage. With advanced AI, we might, for example, be able to build machines that can help us cure cancer or understand how the brain works, and not just recognize your words. I mean, if all we want AI to do is to help target advertisements, then we're done. Hooray, Google does that. But if we want AI to really help mankind in significant ways, like helping us solve disease, we're going to need to take it to the next level, to have AI systems that can really read the scientific literature, integrate what they read, help decide what the next experiment should be, and so forth. And we're not there yet. So I don't want us to kind of rest and say, oh, we did so well in the last few years, we're done.
I want us to say, how do we take it to the next level?

But Gary, isn't that what they're saying? We're on the cusp of the Fourth Industrial Revolution with AI.

I don't dispute that. I mean, I think we are on the cusp. But the question is, what's the activation energy that's actually going to get us there? We can see where we're going. Everybody can see that AI is going to change the world. But I don't think we have the techniques in hand to do that, and I think they're oversold. People report success on certain things, but that doesn't mean it generalizes to other things. The DeepMind stuff, for example, is very famous. They built something that could play a lot of different video games. But if you change the scenarios in very small ways, the system sometimes breaks down entirely. So it's not robust. I mean, imagine that you have a student who gets things right 75% of the time and gives you crazy answers the other 25%. You're not going to hire them to do a mission-critical job. You're going to say, study better, work harder.

So what was your mission coming here?

My mission was to get people to think about a new model. And that new model is AI not done by corporations, not done by individual academic labs, but done in some way sort of like CERN: a large-scale project with a lot of people working together to solve the hard problems that I think aren't otherwise getting solved.

Are you AI's equivalent of a climate skeptic?

Well, certainly not a climate skeptic. I mean, I'm a skeptic, but I think climate change is real. I think AI is real. But I think that we're not where people think we are, and that we need to have a realistic view about where we are, and then have a plan to go forward so that we can achieve the kinds of things that people are talking about at this summit. The idea of this summit is that we're going to use AI to help with all the sustainable development goals, for example.
And we can help in small ways now, but to the extent that they depend, for example, on real technological progress, we're not there yet. The question is, how do you move things forward?

Do you feel you're amongst friends here? Do you feel you're going against the grain?

Well, I expected people might not want to hear the criticism, but instead, I've gotten an overwhelmingly positive reaction. Many, many people at this conference have approached me and said, how do we do this? How do we build this CERN for AI? So there's a lot of excitement from government officials, people from the X Prize Foundation, and from corporations about taking this idea seriously. I've been overwhelmed by how much positive response I've gotten.

Gary, thank you. That's Gary Marcus, with a whole new fresh spin on this three-day summit, and getting a lot of people talking. Thanks again.

Thanks very much.