Live from the Congress Center in London, England, it's theCUBE, at MIT and the Digital Economy: The Second Machine Age. Brought to you by headline sponsor, MIT. Hi everybody, we're back. This is Dave Vellante with Stu Miniman, and Erik Brynjolfsson and Andrew McAfee are back here. Earlier today, each of them gave a detailed presentation related to the book. Gentlemen, welcome back, good to see you. Erik, I want to start with you on the last question that Andy got from a woman in the audience. Well, it was a question that was asked of him, yes, and you'll see why in a minute. You dodged the question, as you recall. By the way, for the record, hanging out with you guys makes us smarter. Good to know. Glad to hear it. So the question was around education. She expressed real concern, particularly around education for younger people. I guess by the time they get to secondary education, it's too late. You talked in the book about the three Rs. Yeah. We need to read, obviously. Do we need to write? Do we need to be able to do arithmetic in our heads? Well, sure. What's your take on that question? You know, those basics are table stakes. I mean, you have to be able to do that kind of stuff. But the real payoff comes from creativity, doing something really new and original. The good news is that most people love being creative and original. You look at a kid playing, whether they're two or three years old, that's all they do. You put some blocks in front of them, and they start building, creating things. And our school system, as Andy was saying in his talk and his questions, is such that many of the schools are almost explicitly designed to tamp that down, to get people to conform, to get them all to be consistent, which is exactly what Henry Ford needed for his factories, to work on the assembly line.
But now that machines can do that repetitive, consistent kind of work, it's time to let creativity flourish again. And that's what we've got to do on top of those basic skills. So I have one- And it's pretty clear that our primary education model is really hard for some kids to accept. They just want to run around. They want to go express themselves. They want to go poke at the world. That's not what that grid full of desks is designed for. We call that ADD now, right? Yeah, it's a pathology, yeah. I have one Montessori kid out of my four. Are you really? He's, by far, the most creative, the most autodidactic. You're a Montessori child. Did Maria Montessori have it right? Is that part of the answer? Look, I'm not an educational researcher. I am a Montessori kid. I think she got it right. And she was able to demonstrate that she could take kids out of the slums who were at the time considered mentally defective. There was this notion that the reason the poor were poor was that they were just mentally insufficient. And she could show their learning and their progress. So I completely agree with Erik. All of our students need to be able to accomplish the basics: to read, to write, to do basic math. What Montessori taught me is that you can get there via this completely kind of hippie, free-form route. And I'm really happy for that education. So Erik, you were talking about how you and your students each year brainstorm on things that people can do that computers can't. Yeah, that's a lot of fun. This is an exercise that you do pretty regularly. How has that evolved over the last couple of years? Sometimes we do it more systematically. I almost always do it in an ad hoc way. You know, anywhere with a forum, it's the kind of dinner conversation I almost can't get away from. So we hear it a lot. And there are some recurring patterns that emerge. And you heard some of them today, around interpersonal skills, around creativity.
Also physical coordination, fine motor coordination. What some of these have in common is that they're skills that we've evolved over literally hundreds of thousands or millions of years. And there are billions of neurons devoted to some of these skills: coordination, vision, interpersonal skills. Other skills, like arithmetic, are really very recent, and we don't have a lot of neurons devoted to them. So it's not surprising that machines can pick up those more recent skills more easily than the more innate ones. Now, over time, will machines be able to do more of those other skills? I suspect they probably will. Exactly how long will it take? That's a question for neuroscientists and AI researchers. Let me make that concrete. Think about not just diagnosing a patient, but getting them to comply with the treatment regimen: take your medicine, eat better, stop smoking. We know that compliance rates are terrible even for demonstrably good ideas. How do we improve them? Is it a technology solution? A little bit. Is it an interpersonal solution? Absolutely. I think we need deeply empathetic, deeply capable people to help each other become healthier, become better people. The right program might come from an algorithm, but that algorithm and the computer that spits it out are gonna be lousy at getting most people to comply. We need human beings for that. So when we talk in the technology space, we've been evangelizing that people need to get rid of what we call the undifferentiated heavy lifting. And I wonder if there's an opportunity in our personal lives. You think about how much time we spend: what are we doing for dinner, where are we running the kids around, what am I going to wear? And there are studies showing that we waste so much brain power on these things. And there are opportunities. You look at the Jetsons, they didn't have these problems. Tech can help us with some of that.
And I think people can actually help us with some of the rest of it. I have a personal trainer, and he's one of the last people I would ever exclude from my life, because he's the guy who can actually help me lead a healthier life. And I place so much value on that. I like your metaphor of the undifferentiated stuff: it's not the stuff that makes you great, it's just stuff you have to do. And I remember having a conversation with the folks at SAP, and they said, you know, I'm not sure we'd like to brag about this, but we take away a lot of that stuff that isn't what differentiates companies. You know, the back office stuff: getting your basic bookkeeping, accounting, and supply chain stuff done. And it's interesting, I think we could do the same thing for our personal lives. Let's get rid of that sort of underbrush of necessary stuff so we can focus on the things that we're uniquely good at. All right, so why do we have to run out when I need garbage bags or toilet paper? Honestly, a drone should show up and drop that at my front door. So I wonder, when I look at the self-driving car that you've talked about, will we reach a point where not only do we trust computers in cars to drive us, but where we're just not going to trust the humans anymore, because the self-driving cars are just so much safer and better than what we've got? Is that coming in the next 20 years? I personally think so, and the first time is deeply weird and unsettling. I think both of us were a little bit terrified the first time we rode in the Google autonomous car and the Googler driving it hit the button and took his hands off the controls. That was a weird moment. I liken it to when I was learning to scuba dive. The very first breath you take underwater is deeply unsettling, because you're not supposed to be doing this. After a few breaths, it becomes background.
Right, but as I was driving to the airport to come here, I looked in the lane to the left of me, and there's a woman texting, and I'm like, I'd be much happier if she wasn't driving, if the computer was doing it, because then we could all be more productive. That's the right way to think about it. I think the time will come, and it may not be that far away, when the norms shift exactly the other way around, and it'll be considered risky to have a human at the wheel, and the safe thing, the thing that the insurance company will want, is to have a machine there. I think there's a temporary phase with new technologies where we become frightened of them. When microwave ovens first came out, I'm sure they were weird and wonderful. Now most of us think of them as really kind of boring and routine. The same thing is going to happen with self-driving cars. Well, those cars have had, what, two accidents? And you guys were in one? Well, that's the story, but none of them were the car's fault, of course, according to the story. One was with a human driving. What's clear is that they are safer than a human driver as of today. And they are only going to get safer. We are not evolving that quickly. But Erik, you got the question: has that self-driving car driven on Storrow Drive? Because we live in Boston. And your answer was, it will drive on Storrow Drive eventually. I think it's fair to say that there's a big difference. The first 90, 95, even 99% of driving is a lot easier. That last 1%, or a hundredth of 1%, becomes much, much harder. And right now, there's a car that just last week drove across the United States, but there were half a dozen times when it had to have a human intervene in some particularly unusual situations. And I think because of our norms and expectations, it won't be enough for a self-driving car to be safer than humans. We'll need it to be 10x safer, or something like that.
But maybe, like the chess example, the ultimate combination is a combination of human and self-driving car. Maybe. In situation after situation, I think that's going to be the case. And I'll go back to medical diagnosis. At least for the short to medium term, I would like to have a pair of human eyes look over the treatment plan that the completely digital diagnostician spits out. Maybe over time it'll become clear that there are no flaws in that and we can go totally digital, but for now we can combine the two. I think in most cases, what Andy said is right, and what you brought up. But in the case of self-driving cars in particular, and other situations where humans have to take over for a machine that's failing in some way, like an aircraft when the autopilot isn't doing things right, it turns out that that transition can be very, very rocky. Expecting a human to be on call, to be able to quickly grasp what's going on in the middle of a crisis, of a freak-out, that's not reasonable. It isn't necessarily the best time to be switching over. So there's a human factors issue there of how you design it, not just so the human can take over, but so that you can make a kind of seamless transition. And that's not easy. Okay, so maybe with self-driving cars it doesn't happen, but back to the medical example: maybe Watson will replace Dr. Welby, but perhaps not Dr. Oz. Yeah, the interaction. Or a nurse, or somebody who actually gets me to comply, again. But I also do think that Dr. Watson can and should take over for people in the developing world who don't have access to first-world medical care but do have a smartphone. We're going to be able to deliver absolutely top-shelf, world-class medical diagnostics to those people fairly quickly. Of course we should do that. And then combine it with a coach who gets people to take the prescription when they're supposed to, change their eating habits. Or a community to say... Or whatever else it is.
Your peers are all losing weight, why aren't you? I want to ask you something before we run out of time here; you guys have been gracious with your time. But Andy, in your talk, you were very outspoken about a couple of things. I would summarize it as: Elon Musk, Bill Gates, and Stephen Hawking are paranoid. And there's no privacy on the internet, so get over it. I didn't say there's no privacy. No, no, I think it's important to be clear on this. I think privacy is really important. I do think it's a right that we have and should have. What I don't want to do is have a bureaucrat define my privacy rights for me and start telling companies what they can and can't do as a result. What I'd much prefer instead is to say, look, if there are things that we know companies are doing that we do not approve of, let's deal with that situation, as opposed to trying to put guardrails in place and fence off different kinds of innovation. Or restrict growth. Right, I mean, there are two kinds of mistakes you can make. One is you can let companies do things when you should have regulated them. The other is you can regulate them preemptively when you really should have let them do things. And both kinds of errors are possible. Our sense, looking at what's happening on the internet, is that we've thrived where we allow more permissionless innovation, where we allow companies to do things and then go back and fix things, rather than trying to lock down the past and the existing processes. So our leaning in most cases, not every case, is to be a little more free, a little more open. Recognize that there will be mistakes; it's not the case that we're perfectly guaranteed. There's a risk when you walk across the street. But go back and fix things at that point, rather than preemptively defining exactly how things are gonna play out. Let me give you an example.
If Google were to say to me, hey Andy, unless you pay us X dollars per month, we're gonna show the world your last 50 Google searches, I would absolutely pay that kind of blackmail, right? Your search history is incredibly personal; it reveals a lot about you. Google is not going to do that. It would crater their own business. So trying to fence that kind of stuff off in advance makes a lot less sense to me than relying on, and this sounds a little bit weird, a combination of for-profit companies and people with free choice. That's a really good guarantor of our freedoms and our rights. So you guys have a pretty good thing going. It doesn't look like you're gonna strangle each other anytime soon. Not soon, no. All right. How do you decide who does what? Is it betrayed by how you operate? Reading the book, it's like, okay, I think that was Andy, because he's talking about Erik, or I think that was Erik, because he's talking about him, but otherwise I couldn't tell. I think it'd be hard for you to reverse engineer, because it gets so commingled over time. And I gave the example at the end of the talk about humans and machines working together synergistically. I think the same thing is true of Andy and me, and he may disagree, but I find that we are smarter when we work together, so much smarter than when we work individually. We go and brainstorm things on the blackboard, and I have these aha moments that I don't think I would have had just sitting by myself. And do I attribute that aha moment to Andy? To me, it's actually to this Borg of us working together. And fundamentally, and these are bumper sticker things to say, if after working with someone you become convinced that they respect you and that you can trust them, and, like Erik says, that you're better off together than you would be individually, it's a complete no-brainer to keep doing the work together. Well, we're really humbled to be here. You guys have great content. Everything's free and available.
We really believe in that sort of Wikinomics. Thank you very much for your time and for having us here. Well, thank you so much. It's a real pleasure. Great, all right. Keep it right there, everybody, we'll be back to wrap up right after this. This is theCUBE, live from London, MIT IDE. We'll be right back.