Thank you, Eric, for that. I'm a little bit worried, though. Did you notice Eric's word choice? He didn't say that I make him smarter. He said that I make him feel smarter. I'm not sure that's entirely better. I'm going to go with a positive interpretation on that. After having spent about the last four years joined as heterosexual life partners, I'm going to try to be upbeat about the partnership. It's been a fantastic ride, and in fact what I want to talk about is not the research that led to the book. What I want to talk about are six things that I've learned since the book came out, because we've gone on kind of a crazy ride since its publication in January of 2011. It feels like we have not touched ground for more than about 18 hours at a stretch anywhere since, and it's been just an incredible ride. I think I've learned six things since the time of publication, and I want to talk about those six things. The first one I was not anticipating: I have learned what my favorite technology in the world is. I was not expecting this, because "your favorite X" is a really hard question to answer. I can't name my single favorite book, movie, or TV show, and while we were doing the research for the book, I thought I would never come to a conclusion about what my favorite technology is, because we came across so many amazing ones. This is my picture, which I call Dorks in a Sandbox. The happiest you will ever see two nerds become is when they get to take a ride in a Google autonomous car. This was an amazing technology. Eric also talked about Baxter. The two of us got to meet Baxter before he was introduced to the world, and this is a 100% true story. We were being shown Baxter by Rodney Brooks, our colleague who developed this technology, and it was honestly just days before Baxter was going to be unveiled to the world, so Rod was getting a lot of calls from the press. He got one of those calls.
He said, I have to take this, and he left me and Eric alone in front of the robot. And we said, this thing is supposed to be so easy to program; let's put that to the test. It's got a bunch of little cones and things that you're supposed to be able to have Baxter manipulate. So we tried to apply what Rod had taught us, and we got Baxter to grab the little cylinder, and Baxter picked it up, moved the cylinder, and then it stopped. To our eyes, Baxter kind of hung and froze, and we're thinking, this is not a great demo. I'm not sure this technology is ready for prime time. And then Rod came back and he said, I see you guys are trying to program a classic pick-and-place operation for a robot. You've told it what to pick. Have you told it where to place it? We said no. Eric and I have eight degrees between us from MIT and Harvard. We concluded that was part of the problem, not part of the solution. But Baxter was a pretty unbelievable technology. We were both honestly too frightened to even play Jeopardy against Watson, because we knew what the outcome was going to be there. We let our students do that and take the fall for us, and take the fall for humanity. One other really cool technology I've come across: this is my buddy Assaf Biderman. He came out of the MIT Senseable City Lab, and he is commercializing a technology that you see on that bike wheel there, called the Copenhagen Wheel. That is a power-boosting device you add to your existing bicycle, and when you ride it around you feel either like Superman or like you're always going downhill, because of the power boost that you get. This is a beautiful technology. It's super clean. It shrinks cities. It gives us exercise. It gets us out of doors.
This is an unbelievable technology, but it's not my favorite, because what I realized after more than a year of traveling around to Silicon Valley, Washington, New York, Paris, London, Davos, Munich, Hong Kong, and Singapore is that my single favorite technology is anything that gets me through an airport faster. You can complain about the travel, but one of its great benefits is that you really get to learn what's on people's minds, what they want to talk about when they're confronted with the concept of the second machine age and the ideas in the book. So we've had amazing conversations with lots of different constituencies. Every time we talk about this, parents come up, and they normally say something to me like, I think it's really important for my kid to get educated properly for the second machine age, but they seem disengaged at school right now. And my answer is, yeah, that's the point. I've become kind of a radical about education reform, because it feels to me like we're doing a great job of turning out the kinds of workers that our economy needed about 50 years ago. Our primary education system feels to me like a really well-tuned machine for turning out people with fairly basic skills: they can read, they can write, they can do basic math. But more fundamentally, I think they get trained to sit in the same place and obey authority all day, every day. You want that if you want to turn out a workforce of payroll clerks and assembly-line workers, but we don't need those kinds of workers very much anymore. And our educational systems, I believe, are really poorly matched to the kinds of people, the kinds of skills and attitudes, we need for the second machine age. Talking with parents is a fascinating experience. We've also had some great chances to talk to politicians all over the world, at some pretty high levels.
One of us got invited to the White House to have lunch with President Obama, and I don't want to brag, because it wasn't me. But we get to talk to politicians, and the good news is that they're willing to engage with the ideas in the book. The discouraging news is how the typical conversation goes: they ask you to lay out the ideas, you talk to them about the thesis of the book, and they say, yeah, got it. Okay, now more than ever we need the policies I've been advocating my entire career. You don't see a lot of mindsets getting changed. You see people doubling down on their prior beliefs. So the fact that we're having conversations with politicians is encouraging; I just wish I sensed a little more open-mindedness among a lot of them. In contrast, the happy news is that when we talk to pretty senior business leaders around the world, they've been really willing to engage with the ideas in the book, and not just to make their own companies run better, although there's absolutely a lot of that. They really are catching on to this idea that, as Eric points out, if median income is stagnating and job growth is tailing off, that's not great news for their markets and for their societies. So instead of encountering a group of heartless capitalists toasting the demise of the workforce, we're experiencing something really different, which is the executives that we've been talking with really grappling with the ideas and trying to figure out what their role is as leaders of the private sector, what's incumbent on them. So we're seeing the rise of what you could call conscious capitalism, an emerging, different mindset among some leaders, and I think it's fantastic news. I want to talk about two final constituencies that we've been talking with, and a really interesting contrast between them.
In most of the discussions that we're having with our academic colleagues, and in particular some of the smartest economists that we work with, they have a pretty interesting critique. They say, you guys are being kind of wildly provocative, and you're way out there with the things that you're talking about. It feels to me like they take a lot of comfort from the pattern of history, which is that we've had a lot of economic growth, we've had a lot of technological progress, and we've had something pretty close to full employment and constant wage growth, to the point that when economists really believe something is settled, they label the contrary idea a fallacy. So there's a thing called the Luddite fallacy, or the lump-of-labor fallacy, which is this idea that tech progress can race ahead and leave people behind. Lots of folks consider that a fallacy. So they look at the ideas that we're talking about and they say, come on, guys, we've seen this before. Marx was worried about it. Keynes talked about technological unemployment. There's a long history of people being wrong about this. The trends that you're seeing, are they a blip or are they for real? You guys are being a little outrageous with your claims. Most of the professional technologists that we talk to, the investors and the entrepreneurs in Silicon Valley, say something pretty different. They say, you guys are being way too conservative with your claims. The things that we're building, the things that we're investing in, are going to continue to transform the workforce. They're going to continue to advance and acquire new skills and capabilities that used to belong to people alone. We think they're going to diffuse not over the course of decades but pretty rapidly, over the next few years. And the workforce implications that you guys are talking about? Again, we ain't seen nothing yet. So you guys are being way too circumspect.
People are way too calm about the waves of change that are coming. I'm not sure who's right between the two of them, but the technologists say something else that's pretty interesting to us. They say, we're sensing a broad danger out there in public perception. And the danger is, we really don't want Silicon Valley to become the next Wall Street: in other words, the next focal point for popular ire about the economy, and the next easily demonized set of bad guys. So a lot of the very prominent technologists we talk to are seeing a danger out there, and I don't think they're crazy to think that. I was stunned to learn that out in the Bay Area, in San Francisco and Berkeley, there are active, violent, property-damaging protests against the buses that take Google and Facebook employees down to their jobs in the Valley. We were talking to a pretty senior executive at Google about this, and he said, think about this. If you had told me a couple of years ago that the zero-carbon-footprint buses we make available to take our employees down to work, buses that therefore mean fewer cars in the traffic jams on the 101, would be the target of public protests and smashed windows, I would have thought you were crazy. But there's a changing conversation about technology out there, and I think it's actually really troubling news. The economists love to talk about pie, and as Eric discussed when he showed a picture of a pie, we believe that technological progress grows the pie like nothing else. It is the only free lunch that economists believe in, and I think it's the single best piece of economic news on the planet. So the fact that some people are starting to demonize technology is a little frightening to me. And in some cases, we technologists, I believe, are contributing to the situation and making it worse than we need to. Stephen Hawking here is talking about the dire dangers of artificial intelligence.
And some technologists are hopping on this as well. Elon Musk is a first-rate technologist by any measure, and he's using pretty apocalyptic language: we're summoning the demon with the kinds of progress that we're making in artificial intelligence. I don't agree with this, and I find it really unhelpful, because it's contributing to the popular unease about technology. Let me try to explain why I don't agree with it very much. The reason this concern is rising among people like Stephen Hawking and Bill Gates and Elon Musk is that just in the past couple of years, artificial intelligence has started delivering on the promises it's been making for about half a century. The discipline is, let's call it, a half century old, and artificial intelligence has been promising human-like reasoning, human-like abilities. When you look at some of the things that Eric described, when you look at Watson and Siri, when you look at demonstrations of computers that can play video games better than any person, you start to think, maybe we're getting there. So it's a really important question: are we summoning any demon or not? My favorite way to think about this comes from an amazing company called DeepMind, started here, right in London, and bought by Google last year. One of its founders is a guy named Demis Hassabis, who is just one of the young stars of artificial intelligence work. He's got a lovely way to think about this. He says, for as long as we've been trying to get computers to do human-like things, we've realized there are a number of really, really hard problems. How do we understand how memories are stored in the brain, and how do we do that in a more brain-like way in computers? How do we do object recognition? How do we give computers common sense? So the field has realized for a long time that there are, let's call it, roughly 10 fundamental, really thorny challenges. For half a century we've known that.
And for half a century we've made next to no progress, rounding-error progress, on those 10 fundamental challenges. What Demis says is, we're now on the first rung of the ladder and we're starting to climb. So call it rung number one of a 10-rung ladder. So I did a little math and I said, Demis, if what you're saying is correct, we have nothing to worry about, because it took half a century to reach the first rung. If that's our pace of making progress on these fundamental challenges, we've got nothing to worry about for hundreds of years, right? He said, aren't you the guy who talks about exponential progress? If what I'm saying is right, the rate of fundamental innovation and fundamental learning on the deep challenges of artificial intelligence is about to increase a lot with technological progress, so the hits, the breakthroughs, are going to come a lot faster and more furiously. I don't know if that's the right way to look at it; I don't think anyone knows. But when I've looked around and talked to some of my favorite technologists about where we are and how quickly we're summoning any demon, it's not right around the corner by any stretch of the imagination. And my favorite way to prioritize the challenges coming up in the wake of the second machine age is this: if the trends in the workforce that Eric talked about continue, the people are going to rise up way before the machines do. As we try to make computers and robots with artificial intelligence do more and more human things, we're learning that some paradoxes are vanishing while others remain really, really thorny. A couple of long-standing paradoxes nicely encapsulate where we are and are not making progress. Where we're not making progress is on a challenge called Moravec's Paradox, pointed out by the computer scientist Hans Moravec a while back.
He says it is a lot easier to automate most of what a PhD mathematician or physicist does than it is to automate what a three-year-old does: object recognition, physical dexterity, the ability to walk across a room and not trip. We still don't have technologies that can do these kinds of things, despite working on them for a long time. I mean this seriously: there's no robot anywhere in the world that can do that. That's really far off in the distance. Manipulating pliable materials requires a great sense of touch and feedback and fine motor control. We're getting better; we're still not there. We are a long way away from any technology that could clear any of the tables in this room. So whether or not restaurant busboy is a well-paid job or a prestigious job, it's a job that's going to be around for a long time to come, exactly because of Moravec's Paradox. So we're not making massive progress there. Where we are making ridiculously, unexpectedly fast progress is on another really long-standing paradox: Polanyi's Paradox. It was brought up by an absolutely brilliant scientist and philosopher of science of the 20th century named Michael Polanyi, who summarized in one sentence this really weird situation: we know more than we can tell. What he meant by that is, if you go query a human, especially an expert, somebody who's really good at her job, and you ask her to fully describe how she's able to do her job so well, even if she wanted to help you out, she really couldn't do very much. She could describe some things that would be helpful and accurate, but so much of what we know how to do can never be articulated; it remains kind of locked up in our heads. It's called tacit knowledge, and all of our attempts to elicit it are really underwhelming and really frustrating.
And it provided what you can think of as a hard digital ceiling, distinguishing the kinds of tasks that we could automate from the kinds of tasks that we could not put into technology or software. One of the clearest examples of Polanyi's Paradox was the really great Asian strategy game, the board game of Go. Go is the strategy geek's favorite board game by far, because the rules are: if I surround your stone, I take it off the board. I've just told you essentially all the rules of the game of Go. What no one can tell you is how to play the game of Go well. People study this for decades; you rise up through the rankings, and as you get better, you can obviously beat most other humans. What you can't do is explain in any deep way how you know what you're doing or how you know what move to make next. To the point that as recently as May of last year, an article was published confidently saying that human dominance in the game of Go was going to remain for years to come. As far ahead as we could see, humans were going to be the planet's best Go players, exactly because of Polanyi's Paradox. And there was a great quote from a Go master in the article. He said, look, I look at the board, I know what the smart next move is; I could not tell you how I know it. Polanyi's Paradox was alive and well. But the new approach in artificial intelligence, the dominant approach, the one that's just taking over the world, is called deep learning. And the way deep learning works is that you, the programmer, don't even try to understand the rules or to tell the computer what the rules are. You just show it a lot of examples and you let the network inside the system figure things out for itself. You're not doing any "feature engineering," as they call it. So a little while back a team, again here in the UK, took a look at Go and said, hey, here's what's fascinating about Go.
It's been around for so many hundreds of years that there's a large library of really top-level Go games. Here's what we're going to do: we're going to show that library to the computer, and we're going to let the computer, the deep learning system, absorb the insights from that library in a way that we, the programmers, never could, because we can't articulate them ourselves. Then we're going to have it start playing games against a real human, a decent Go player, and we're going to see if the computer learned. And the way they tested that was fascinating. They said, if the system makes the same move that the human expert made in the actual game, we'll say the system is getting pretty good at playing Go. More than 70% of the time now, that system makes the exact same next move that the expert made in the game from the library. So earlier this year I tweeted out a prediction that I'm actually pretty confident in: I think that by the end of 2015, the world's best player of Go will no longer be a human being. I think that title is going to go digital, and it's an example of how quickly the progress is happening here. It brings up a broad insight for me. Even after writing two books with Eric and immersing myself for years in this world of technological progress, I still get surprised at how it seems to keep on accelerating. The cute way to say this is that objects in the future, especially technological ones, are closer than they appear. Eric and I have noticed over and over again, even over the past year, that when we go to conferences and hang out with our favorite technologists and geeks and ask them how quickly their companies or their disciplines are advancing, we keep hearing the same thing: wow, quicker than I ever would have thought. I want to give you an example of that. I'm going to go back to the autonomous car one more time.
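[Editor's aside: the Go experiment described above, learning to predict an expert's next move from a library of example games rather than from hand-coded rules, can be sketched in miniature. This is a toy illustration, not DeepMind's actual system: a tiny linear model, a fabricated 3x3 "board," and a made-up "expert" policy stand in for the deep network, the real 19x19 board, and the library of human games.]

```python
# Toy sketch of learning an expert's moves from examples, with no
# hand-coded rules for "playing well". All names here are illustrative.
import random

random.seed(0)

BOARD = 3          # tiny 3x3 "board" so the sketch stays readable
MOVES = BOARD * BOARD

def features(position):
    # Indicator of which points are empty. Choosing features by hand is
    # itself "feature engineering", which real deep learning avoids; a
    # full neural network is beyond this sketch.
    return [1.0 if v == 0 else 0.0 for v in position]

def expert_move(position):
    # Fabricated "expert" policy the model must recover from examples:
    # play on the first empty point.
    return position.index(0)

# Build a small "library" of (position, expert move) pairs.
library = []
for _ in range(500):
    pos = [random.choice([-1, 1]) for _ in range(MOVES)]
    pos[random.randrange(MOVES)] = 0      # leave one point empty
    library.append((tuple(pos), expert_move(pos)))

# One weight vector per candidate move; score = dot(features, weights).
weights = [[0.0] * MOVES for _ in range(MOVES)]

def predict(position):
    f = features(position)
    scores = [sum(w * x for w, x in zip(weights[m], f)) for m in range(MOVES)]
    return max(range(MOVES), key=lambda m: scores[m])

# Perceptron-style training: nudge weights toward the expert's move
# whenever the model's guess disagrees with the library.
for _ in range(20):
    for pos, move in library:
        guess = predict(pos)
        if guess != move:
            f = features(pos)
            for i in range(MOVES):
                weights[move][i] += 0.1 * f[i]
                weights[guess][i] -= 0.1 * f[i]

# Evaluate exactly as in the talk: how often does the model's next move
# match the expert's next move from the library?
matches = sum(predict(pos) == move for pos, move in library)
print(f"match rate: {matches / len(library):.0%}")
```

The point survives the simplification: nobody wrote down rules for choosing a good move. The program recovered the expert's behavior purely from examples, which is exactly the kind of tacit knowledge that Polanyi's Paradox says an expert could never dictate to a programmer.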
Eric talked about the experience the two of us had riding in the car, and he said that at that time, there was a lot the car could not yet do. It was really good at driving down the 101 in Northern California. We had a comment from the floor: could it handle driving in the craziness of Storrow Drive in Boston? Let me show you where autonomous cars are right now. This is from a talk that Chris Urmson gave at TED this year. Chris is the head of Google's autonomous car project, and he gave a really vivid illustration of how far these technologies have actually come, and how quickly. He said that they have completely autonomous cars driving around the streets of Mountain View, California, a lot, and they're encountering situations they never would have anticipated and certainly never programmed into their technology. He showed a video, and I'm actually not making this up, of what the car saw when it was driving around downtown Mountain View and a woman in a wheelchair came into the street and chased a flock of ducks with a broom. This happened, right? What the car did not do was run over the woman. What the car did not do was run over the ducks. The car did not even say to its human occupant, I really have no idea what's going on here; would you please take over? Instead, the car said, I've got this. It waited for the ducks and the broom and the wheelchair to clear the street, and then it accelerated smoothly down the road. A situation that weird presented no problem to the kinds of technologies we have now. So, as an answer to the question about Storrow Drive or almost anyplace else: yeah, I think these cars are ready for prime time, way before most of us looking at this technology thought we would ever get there. The last learning I want to share is one that you've heard over and over today, and it probably shouldn't surprise anybody: Eric and I sit in the best place on the planet to do this kind of work.
And I want to say that... well, let me actually make this a pop quiz for everybody. Would you please shout out what you think the most overused word in academia is? That's a good one. Massively overused. We've been guilty of that today. MIT? No, MIT is the most underappreciated word in academia. I heard it over here: interdisciplinary. Everyone at every university talks about how interdisciplinary their work is. It's usually not the case. We love to stick in our little disciplinary silos and publish work in the same old journals and go to the same conferences and talk to the same people. We're a deeply siloed industry, and it's one of the things that I hope will be disrupted about academia. What we've been seeing at MIT over the past couple of years is a really happy exception to that. Eric and I and Marshall and Rigobon, so I guess this is not the perfect example given our lineup today, hang out primarily at the Sloan School of Management at MIT, but we're joined by a constellation of amazing people from throughout the Institute. One of MIT's great resources is the Media Lab, and I have to share one more story with you. I want to show you some data that Joi Ito gathered. Joi is the head of the MIT Media Lab, and I think of the lab as a collection of kind of fiendish geniuses who keep on poking at the rest of us and showing us fairly uncomfortable truths. So Joi shared a pretty uncomfortable truth about education these days, and he and his colleagues did it in a really ingenious way. They put a simple skin-electricity sensor on a student and left it on for a week. It turns out that the electrodermal signal is a really good, very quick shorthand measure of engagement with the world. So they put one of those sensors on one of their students, they let her go about her work for a week, and I need to highlight for you: this is when she was in class. Do you see that? She's almost dead, right?
It's pretty hard for anything less to be happening in her brain while she's supposed to be absorbing the cutting-edge knowledge of science and engineering. The blue there is sleep. Look at sleep compared to being in class. This is a joke, right? So part of the reason I love Joi and the Media Lab is that they keep on doing things like this to us and confronting us with these kinds of facts. We also have, and look, I'm sorry, this is not hyperbole, the best economics department in the world. There was just a quick study published. They've been giving out a medal called the John Bates Clark Medal for the best economist under 40 since... Eric, when did that start? The '40s or '50s? They've given scores of these medals out, and about 40% of the recipients have some MIT connection or other. It's dominance like you would not see anyplace else on the planet. So if we want to understand labor economics, or why nations fail, or any of these really big topics, we've got the world experts in the econ department. And something parallel is going on in the computer science and artificial intelligence lab, where we've got some of the people who literally helped invent the disciplines of robotics and artificial intelligence. And we have a couple of different seminar series going on. There's one that brings together the roboticists, the business school people, the economists, and the artificial intelligence geeks. It has been the most consistently well-attended, fascinating seminar that I've ever been part of in my academic career. And all of us keep showing up not because we've got tons of idle time, but because we're so fascinated by the topics that come up in the second machine age. So I know you're sick of seeing this slide, but I want to put it up one final time and just re-emphasize that the interdisciplinary strengths that we have at MIT to go tackle these kinds of questions exist absolutely no place else on the planet.
I was an undergraduate and a master's student at MIT. I strayed; I spent years down the road hanging out with a different sort of crowd. I came back home a few years back, and nothing feels better than being in your place and being among your people. And that's how I feel at MIT. Let me stop there. I would love to take some questions. Yeah, please. To what extent do you agree or not with the disenfranchisement felt by the people who oppose the Google buses, for example, because they believe that their data is being given away for free and somebody else is making billions with it? Is that the real cause? Yeah, I always ask them if they refuse to watch TV for the same reason; they were giving away their eyeballs in exchange for getting something for free in return. I want to be clear: I do think there are privacy concerns that come up in this world of extraordinarily big data and powerful sensors and governments who are telling us one thing and doing something else. I think there are legitimate concerns about that. What I don't think, though, is that companies like Facebook and Google are duping us into some kind of illicit or unethical bargain. I understand that Google takes my data and shows me ads based on it. I honestly get that. And I don't think that everyone else who uses Google is such a moron that they're not aware of that bargain. So when I listen to those activists, I hear this almost paternalistic concern about people who supposedly aren't astute enough to realize that they're being shown ads. Of course they realize it; the ads are there on the page all the time. So I do think we have privacy and security concerns, absolutely. But the principle that I fall back on is not, let's have some Politburo, let's have some bureaucrat decide what is and isn't okay. Let's instead rely on the principle that sunlight is the best disinfectant: let's get knowledge about what is going on and let people make informed choices for themselves.
Yeah, please. Can you wait for a microphone, please? Do we have one? Oh, I'm sorry to take it away; we'll give it right back. Sorry about that. Just further on that point: when you look at the second machine age, what is the impact of regulation, either around data or around things like net neutrality and so on? How are those, I won't say external factors, but those big factors affecting what you're predicting? The single best piece of work that I know of on that exact question was done by our MIT colleague Catherine Tucker, who looked at what happened to the online advertising market in Europe after the European regulators decided that there was too much tracking going on and that all the operators had to scale that tracking back. It was done with perfectly good intentions, I think, but the effect was really clear: the advertising became less effective, which meant that more of each screen we all looked at had to be taken up with advertising to maintain the revenue. So in the wake of that legislation, our screens became even more of an unpleasant mishmash of ads and blinking nonsense, simply because the effectiveness went down and the revenue had to come from somewhere. I also take some insight away from the fact that after the publishing industry in Spain successfully lobbied the Spanish government to force Google to stop showing snippets from Spanish newspapers on Google News, surprise, surprise, traffic to those newspaper sites went down, and then they lobbied to have that ruling overturned. I'm not saying there's no need for regulation in any area. In general, though, I'm a big fan of permissionless innovation, and I'm a big fan of making sure that we know what the problem is and what we want the remedy to be before we start wielding the instrument of regulation. Yeah, now we're back to you. I'm sorry. Great, thank you.
So Eric talked a bit about labor arbitrage, which is essentially how the growth in China and India really started. Given what we're saying about the second machine age, and that differential going down, would you expect to see less of that? Would the offshoring that has really defined economic growth in other countries stop after this has all happened? Would it come back? What's your view on that? One of the happiest phenomena of the past couple of decades has been the movement out of dire poverty of the people at the absolute base of the pyramid around the world. I think it's probably the second best piece of economic news on the planet: the reduction in terrible poverty because of markets and trade, and because countries like India and China realized the benefits of markets and trade and globalization. So that's been a wonderful phenomenon. The question you ask, to put it a little bit cutely, is what happens when the rising wages at the bottom of the pyramid meet the declining costs inherent in Moore's Law? That's going to be a very, very interesting collision. What we're already observing is a phenomenon some people call deindustrialization, and what they mean by that is this: the classic route to prosperity in the 20th century, and we can think of the countries that became prosperous, Taiwan, South Korea, Japan, followed almost the same pattern. They went through a period of pretty heavy manufacturing, pretty heavy industrialization, in which a big percentage of the population was basically working in a factory. Then they developed a service sector, and manufacturing came down, but the hump was a pretty high hump. What we're seeing more recently, as countries try to come up from the bottom of the pyramid, is that they're not going through that same period of heavy industrialization. The peak is a lot lower than it used to be. So what's the new path to prosperity? That's a much tougher question.
But if you're looking at global growth and improvement in living standards, industrialization was a really, really good pattern, and we're seeing less of it than we used to. Another cute way I've got of talking about this: when I think about the problems here in the developed world, and we've got our share of economic challenges, medium-termish I would rather have our challenges than China's challenges, for example. Yeah, please. Andy, you and several of your colleagues have talked about the pace of progress and how it's gone very fast and appears to be speeding up exponentially, or at least at some very rapid pace. Do you see, or have you seen in all of your conversations, a conflict between the pace of technology and AI change and what people actually want to deal with? Yes is the short answer. There's a pretty big conflict. I'm supposed to do this for a living, and once in a while I find myself muttering under my breath, would you give it a rest for a while so I can catch up with everything that you've done in the past week and feel moderately on top of the situation again. So I do think individuals have a hard time keeping up with the bounty that's coming in the second machine age. And I think that phenomenon is dwarfed by the difficulty that organizations, institutions, and policy have in keeping up with the second machine age. The solution, I believe, is really clear. We can't slow down, nor should we try to slow down, the pace of tech progress. We need to increase the clock speed of ourselves and our institutions to respond effectively. I guess what I was trying to get at was, do you see any pushback coming from the conflict between the pace of innovative change on one side and the pace with which normal everyday people can keep up with it on the other? I do, and that's why I tried to be explicit about the protests that we're seeing in San Francisco and Berkeley and some of the resistance that's rising up against technology.
I think some of it is the kind of generalized anxiety that a lot of us have, the feeling of, you know, stop the world, I want to get off. Things are just moving a little bit too quickly. I wish there were an easy answer to that. I think responsiveness and receptivity are our only real way forward. As for trying to fence off tech progress, we've got plenty of evidence about how well that works. You can do it if you want to immiserate your people. Thanks. Where's our mic next? Justin, go ahead, right there. Thank you. I think you mentioned in your book the winner-take-all tendency. I think it's probably human nature. I mean, with the Googles and the Facebooks and companies like that, how do you see the future and the role of startups? The pattern that I think has held true in the high-tech industry is one of dominance and then disruption: the creation of one or a small number of incredibly valuable, incredibly powerful companies, think Microsoft, think IBM before that. A lot of us worry that such a company is so powerful that we need to roll out antitrust regulation, we need to be concerned, we need to go after it in one way or another. But the market tends to take care of fat, lazy monopolies in the technology space, and it tends to take care of them sometimes with remarkable speed. So when I look at today's really powerful tech companies, I can't imagine what's going to unseat them. My imagination is just not good enough to know whether Google's death knell is out there and, if so, what it is. Is Facebook's? I honestly don't know. But I'm a little bit less concerned than a lot of people that these are the new industrial giants that are going to dominate our lives for all time going forward, because I'm pretty sure there's a group of a half dozen weird 20-somethings backed by somebody, and that's what's going to keep that kind of sustained dominance from happening. That, I think, is clear.
I'm not saying that we don't need to worry at all, because all great concentrations of power require vigilance and scrutiny. So I'm really happy that Jean Tirole got the Nobel last year for his work on market power and antitrust. I think that's incredibly appropriate. Some of our technologist colleagues have a little bit of an attitude of, oh, you silly antitrust people, don't you understand technology? They tend to love monopolies a little bit too much, especially if they're early-stage investors in them. But I'm calmer than a lot of other people that I talk to about this pattern, because I think we know what happens in high tech. If you get complacent, stop scanning the tech landscape, stop taking brilliant care of your customers, you're going to go away very quickly in this world. Do we have a... Yeah, go ahead. My question follows from your comments on Michael Polanyi and the principle that what is codifiable can be automated while tacit knowledge cannot. Why do the majority of companies seem to make hiring decisions based on candidates' codifiable skills? It's such a great question. When you look around with a kind of second machine age lens on things, you are amazed at how many antiquated practices you still see, how much of our industrial-era mindset still applies, over and over and over. You bring up human capital management and talent management as one example. Think about the way most of us still hire people. We get a transcript and a cover letter, which list primarily the candidate's codifiable skills, in addition to some completely unverifiable nonsense about being a people person and a self-starter. And then we sit down and have a short interview. The research is really clear: the point of that interview is for me to figure out whether you remind me of me, and if so, you get the job. We don't need to do more research about this. This model is broken. It's still the dominant model.
One of the encouraging things is that when we look around at what some of the more innovative, leading firms are doing, they're walking away from that model, and they're doing some really weird, different things. Some of the tech companies say, I actually don't care about your resume or your transcript. Where you went to school and your GPA have no value more than a couple of years after you graduate. I want to look at your GitHub profile. Are you actually coding out there in the world? Are your contributions valued by other people? Eric is an advisor to this weird company called Knack that has you play video games, literally play games, to try to assess those unquantifiable, more tacit things: are you going to be a good salesperson in this organization? So we're coming at this problem in lots of different ways. One of the ambitions that Eric and I have in the wake of this book, because we don't learn our lessons very well, is to go write another one and try to surface the post-industrial business practices that make a ton of sense. We're going to draw heavily on Roberto's work and Marshall's work to try to incorporate measurement and platform dynamics and things like that. But I feel like the business world needs a clearer view of what the second machine age means for it, and human capital management is absolutely part of that. We have time for one more question. Was that the sign or not? Okay. I don't mean to put pressure on you, but this needs to be an absolutely bang-up final question, because it's the last one of the entire day. So please bring us home, make us proud. We talk about educating the students and the children of today to meet the demands of where technology is. My concern isn't about where technology is today. I'm looking at the 10-year-olds and 11-year-olds coming through, and we need to be educating them for where technology is going to be in 15 or 20 years' time. What needs to be changed?
What policies need to be changed? How we educate the children, specifically what we're teaching them? What's your view on what we need to change in order for them to meet where the economy is going to be in 20 years? And I see we're just about out of time. Thank you, Alfred. This is a fantastic question. The reason I'm tempted to dodge it is that, as you point out, it's such a hard question. Our 10-year-olds are going to be heading into the workforce in a decade. Take the kinds of things that we're seeing and project them forward a decade: nobody knows where we're going. Anybody who tells you they know what the economy looks like in 10 years is lying to you or lying to themselves. It's just become too unpredictable. So what's the smart educational policy in that world? We need to make our best guess about where the human value still is. And I want to be clear: I mean economically valuable in a workforce sense. I don't mean it in a deeper, more moral sense, although that's important too. So if we just focus on what kinds of things humans will be doing in 10 years that are still economically valuable, I think there is a list that we can see with at least some clarity. We are still going to be engaged in creative work; we're still going to have that edge over technology. The pure eureka moment, I think, is still a human skill. You might say, look, that's for the geniuses, the Steve Jobses, the crazy outliers out there. But I think that empathy, taking care of other people, making them feel good, getting them to comply with their medication, 10 years out, 15 years out, those are still human skills. I think negotiation is still a human skill. I think, as tired as we are of these words, management and leadership, when they're done well, are still very, very human activities. I don't think many people want a purely digital boss. I don't think anybody's going to want a digital soccer coach in 10 years to motivate them and make them better, at any level.
Now, I think all of those professions are going to involve a lot more technology than they do now. But when I look ahead, all up and down what we think of as the classic skills ladder or the educational ladder, even after a decade more of this crazy progress, there's still a lot of room for humans to add value. We just need to stop educating people, and stop thinking about the situation, like it's 50 years ago, and push ourselves toward the future. I want to end with one of my favorite, favorite quotes about how we should think about things. It comes from a colleague at Harvard, Larry Lessig, and he's got a beautiful way to summarize it. He says that with our policies, and I think more broadly with the choices that we make as individuals and as a people, we've got a very stark choice: we can protect the past from the future, or we can protect the future from the past. I'm such a huge fan of the future. I want us to do all the work we can to protect the future from the past. I think that's a good note to end on. Thanks very much.