The question is, what's going to replace our consumerist, capital-appreciation-focused system? And we're already seeing all these things around the edges. This is why, when I talk about the next economy, I'm putting together this giant basket of factors. On the one hand, it's the caring economy. We're starting to include that inside the value boundary when rich companies say, well, we're going to give you three months, no, six months of parental leave, and it's going to be for both parents. That's basically valuing the caring economy. When we go, oh my God, pandemic unemployment insurance, that's the caring economy. Universal basic income. I love Kai-Fu Lee's idea that no, we should tackle the caring economy with a social investment stipend, where we pay people to raise their kids, pay people to look after their elders, pay people to work in their communities. We have all these fresh opportunities to think about whether we still want the same kind of economy, or whether it's time for one of these generational shifts. And in the back of my mind, I always have climate change as a giant driver of this. When you have hundreds of millions of climate refugees and all those pressures, are we going to learn a lesson? Are we going to ask, how do we turn this into an opportunity? So when I think of the great challenges of the 21st century, one of the ones I like to pose is: how do we turn refugees into settlers? This is Rob Johnson, President of the Institute for New Economic Thinking. I'm here today with my friend Tim O'Reilly, founder and president of O'Reilly Media. He runs a series of conferences.
He's been to INET conferences many times and stirred the drink of our awareness about the relationship between technology and society. When you look at tech, it's this incredible mirror. And that's really been one of the biggest ideas I've been wrestling with for the last four or five years. I talked about some of it in my book, and I've continued to think about it, which is: when we see these algorithmic systems and we question whether they're giving us what we want, we have to recognize that we've built an economy which is itself an algorithmic system, a natural creation of a set of rules. It's very analogous to what Facebook built, and it can go right in some ways and wrong in others; people can have really good ideas that don't work out over time. So much to drill down on there. A lot of side effects. By the way, just for our listeners, the book is called WTF?: What's the Future and Why It's Up to Us, with a big yellow cover. I remember you presenting it at the INET conference in Edinburgh. People were really delighted; many went home, read it, and sent me nice notes about it. I hope people will start with that and follow your work, which is online. The book is really a nice basis for understanding the work you do. Yeah. And from that point on, I've spent a lot of time looking at the economics of these platforms and the economics of content on the web. The whole idea of platform economics is a fascinating subject. It's very relevant to my own business, because we run an online learning platform, and what we're trying to do is balance the incentives for content creators to create with the interests of consumers. We're very clear that we have to satisfy both sides of our market, partly because we're not a monopoly; there are a lot of people providing this kind of content.
And it really struck me that the reason Google and Facebook and Amazon and Apple are able to say, well, all we care about is the consumer, and to squeeze their suppliers so hard, is a real measure of the fact that they are monopolies. If you're not a monopoly, you have to care about your suppliers, because they'll go somewhere else if you squeeze them too hard. So that sent me down a path of really trying to understand how you build a balanced platform. We really try to do that. It's actually funny: a lot of the most powerful innovations in our online learning platform came from us trying to find new ways to make money for our suppliers, not from trying to come up with new features for our users. Anyway, I just started studying this whole field of platform economics, and there's so much that's powerful and right there. There's this great quote from Paul Cohen, who used to be the DARPA program manager for AI and other advanced technologies and is now a professor of computer science at the University of Pittsburgh. We were at the National Academy of Sciences, and he said something that I immediately wrote down and have been quoting ever since: the opportunity of AI is to help humans model and manage complex interacting systems. And there's a way in which I look at these big platforms, and quite frankly financial markets as well, and they're the farthest along at using AI and algorithmic systems to manage economies. I find this particularly fascinating with Google, because in its early years, and unfortunately I don't think they've kept to this insight, in its first 15 years, say, it had a very clear separation between the money market, which was advertising, and the natural, organic search market.
It's so fascinating, because we have this idea in economics that price signaling, money, is the coordinating function of the invisible hand. And what Google showed us in organic search was that you could build an incredibly powerful invisible hand, centrally managed, that did not use price as a factor at all. They have hundreds of factors for modeling and managing a complex interacting system: who links to what, what people click on, and so on, and none of them was price. So they proved that you can build a global-scale matching marketplace of suppliers and consumers, with billions of users, without money. And they had the ads off to the side. Now, what's happened in the last 10 years is that they've made organic search worse by adding money back in; they've basically broken down the wall they used to have between the two. Dina Srinivasan has a wonderful paper about Google's ad markets, about how they have kind of a monopoly on all the levels of the ad stack. What I've been looking at is the other side: as a result, they've had to produce more inventory. In the same way that during the housing crisis the banks were producing more and more tranches of bad mortgages because they didn't have enough product for the demand they had created, Google has basically said, oh well, we're going to get rid of those pesky organic search results so we can make more ad space. Now, on a Google page, it's kind of interesting: a non-commercial search looks a lot like Google did 10 years ago. You'll have 10 links, maybe a few extra things.
Here's a Wikipedia entry, and so on. Being a literary person, I always use this example: do a search for, say, Anthony Trollope, who nobody reads anymore, and it looks like old Google. But look for anything that might be commercial, like a place to stay or something to buy, and it's amazing how much is now Google content. You might get one organic result, and the rest is all ads and Google's own content. And Google's own content tends to be a roach motel. You click on the link and it looks like you're going to Yellowstone National Park, but no, you're going to another Google page about Yellowstone National Park, where they've taken more and more of the content of the web. And of course, this is the other side of what Dina has talked about. That leads me back to: what is the master algorithm? Because one of the things that's fascinating, despite what Paul said about modeling and managing complex interacting systems, is that algorithmic systems typically have a master objective function. Each sub-factor can have one too, because a lot of these things are additive, but at the top you have an optimization function. Often it's called a loss function, where you try to minimize some value; but in other cases it really is the maximization of something, even if the function itself is written as a loss function. And the master algorithm of our society is: grow your corporate profits. So at some point there's this idea that Google had of "don't be evil." When Larry and Sergey wrote their original search paper, they said that an advertising-based search engine will always be biased against its users, in an appendix called "Advertising and Mixed Motives."
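To make the objective-function idea concrete, here is a minimal sketch of the kind of additive, price-free ranking O'Reilly describes. The factor names, weights, and numbers are entirely invented for illustration; real search rankers use hundreds of signals and none of this is Google's actual algorithm.

```python
# A toy additive ranking objective: each result gets a weighted sum of
# relevance factors, and notably price (willingness to pay) is NOT a factor.
WEIGHTS = {"inbound_links": 0.5, "click_rate": 0.3, "freshness": 0.2}

def score(doc):
    """Weighted additive relevance score for one candidate result."""
    return sum(WEIGHTS[f] * doc[f] for f in WEIGHTS)

def rank(docs):
    """Order candidates by descending score -- a maximization objective.
    (Equivalently, one could minimize the loss -score(doc).)"""
    return sorted(docs, key=score, reverse=True)

docs = [
    {"name": "a", "inbound_links": 0.9, "click_rate": 0.4, "freshness": 0.1},
    {"name": "b", "inbound_links": 0.2, "click_rate": 0.9, "freshness": 0.9},
]
best = rank(docs)[0]["name"]  # "a" wins on the link factor alone
```

The point of the sketch is the one from the conversation: a marketplace can be coordinated by a centrally managed objective over non-price signals, and the choice of what goes into `WEIGHTS` is the choice of what the system optimizes for.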
And for the first 10 or 15 years, they really figured out how to manage that, with pay-per-click advertising off to the side. But at some point, and again, I'm rambling through my half-witted absorption of economics concepts here. I like going on a tour with you, so keep it up. Yeah, I've become fascinated by the concept of rents. I've started working with Mariana Mazzucato and Josh Ryan-Collins at UCL on a project, funded by the Omidyar Network, to go a little deeper on this work: tracking changes in Google's and Amazon's content ecosystems and how that is really a form of rent extraction. In the course of that, I read Josh's little book, I think it's called Why Can't You Afford a Home?, and it's basically about land rents. It helped me realize that there are two kinds of rents, because my naive idea of economic rents was the extractive kind: the bandits controlling the pass, or the feudal landlord who says, give me your grain or I'm not going to protect you, or the mafia, whatever. But Josh talks about another fundamental part of land rent, which is the rent that is caused by growth. More people move into your town, there are improvements to your town that you had nothing to do with, and you get this free lift. And what struck me is that in a lot of ways that's what we're seeing with these companies. If you look at an Amazon or a Google, any really innovative company, they have that first type of rent, the free lift from the growth of the market, which carries them along up to a certain point. And then, because the master algorithm says you have to keep growing...
They go: what do we do? So in some sense there's a question I haven't actually researched: when did the number of searches stop growing? And is that the point at which they started using the extractive form of rents to make up the difference? We get the economy we ask for. We didn't know that we were asking for it, but once we see it, we have an obligation to change it. You can see so clearly in companies like Facebook and Google that they have an optimization function. In Google's case, in their best years, and Larry actually said this in an interview that they attached to the IPO documents in 2004, a Playboy interview, actually, he said: our goal is to have users come to Google, find what they want, and go away. If you look at Google today, you would never think that was the case, right? But they had this optimization idea: we will give people what they want, and they will go away. And in fact, one of their search factors that I love the most is called the long click. People who are in the industry know about this, but others may not. In the days of the 10 blue links, they realized that people would typically start with the first link. If they came right back and clicked on the second link, then came right back and clicked on the third link, and then went away, that's a statistical signal. When enough people do it, maybe the third link is the one that's actually the best, despite all the other factors saying otherwise. They call that the long click: when somebody clicks and goes away.
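The long-click signal described above can be sketched in a few lines. This is a toy reconstruction of the idea, not Google's implementation: the log format, the 30-second threshold, and the numbers are all assumptions made up for the example.

```python
# Sketch of the "long click" signal: a click followed by a long dwell time
# (the user didn't bounce back to the results page) is evidence the result
# satisfied them, whatever its rank. Threshold is an invented placeholder.
LONG_CLICK_SECONDS = 30

def long_click_rate(clicks):
    """clicks: list of (result_position, dwell_seconds) pairs.
    Returns, per result position, the fraction of clicks that were 'long'."""
    totals, longs = {}, {}
    for pos, dwell in clicks:
        totals[pos] = totals.get(pos, 0) + 1
        if dwell >= LONG_CLICK_SECONDS:
            longs[pos] = longs.get(pos, 0) + 1
    return {pos: longs.get(pos, 0) / totals[pos] for pos in totals}

# Users bounce off positions 1 and 2 in seconds, then stay on position 3:
log = [(1, 3), (2, 4), (3, 240), (1, 5), (3, 180)]
rates = long_click_rate(log)
# rates shows position 3 collecting all the long clicks, the statistical
# hint that it is the best result despite ranking below the other two.
```

Aggregated over enough users, a table like `rates` becomes exactly the feedback signal the transcript describes: behavioral evidence that can override the other ranking factors.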
So here was Google with this very clear, wonderful, very effective master objective: have people find what they want and go away. And in the case of Facebook, they had this idea that they would show people things that would make them want to spend more time with their friends, which, again, seemed pretty wonderful. And then we end up with everything from the Myanmar genocide to the state of electoral politics in the US. It's a combination of bad actors and misaligned incentives. In each case, the companies are doing a lot to try to change, but they're doing it without threatening their real master objective, which is: we have to keep growing. And I think in a similar way, if you look at our tax system and our politics, there's this massive avoidance of the fundamental realization that we are in charge of the economy. So we do these little tweaks: we're going to do everything we can to reduce inequality, except the things that would really do it, because too many people have too much at stake. Yeah, that's a very interesting question, because when the scale of what I call natural monopoly, as Paul Romer talks about, is so large, and yet we value individual freedom, we're defending the right of those technologists to operate on a very large scale without us intervening to change what they do for the collective good, right?
It's really a tension between protection of individual liberty, whether commercial or social or what have you, on the one hand, and protection of other people from intrusion upon their lives by these products on the other. I have a good friend, Rohinton Medhora, who runs the Centre for International Governance Innovation in Waterloo, Canada, near Toronto, and he asks: why don't we have something like a Food and Drug Administration for these things, where they get a trial, then they're explored for their social ramifications, then they're evolved, then they're released? We're not very good at incorporating the public dimensions of an innovation for the social good once it takes off on a path that's quite profitable. Yeah, I hear that. But being a Silicon Valley type, I don't actually like that approach, because prior restraint of innovation is always a bad idea. Yes. I mean, almost always a bad idea. And competitors can use it to stop you from developing something that would erode their profit margin, too. So there's lots of complexity in this. The thing I would start with, more than anything else, is a modified set of disclosures. Think about, say, stock compensation: for a long time it was just buried, and then they basically said, no, you have to report GAAP and non-GAAP measures of profitability. You can't hide this anymore. They could have said you can't hide it at all, instead of allowing two measures and letting people pick which one they talk about, but at least they required disclosure. And I think, for example, in the case of Google and Amazon, I love the phrase, I'm not sure who said it originally, but Al Roth uses it as the title of his book: Who Gets What and Why.
Suppose you basically said: okay, we are going to work very hard to have a set of disclosures about who gets what and why, so there's real transparency, a real revolution in the kinds of documents companies publish. This is going to a totally different area, but think of the work Saul Griffith and crew did to map the energy economy with a Sankey diagram. What if you had a Sankey diagram of the money flows within Google and Amazon? It would give you a whole other perspective on what's broken and what's not, and how it's changing over time, because right now we don't really know. For example, part of the work I'm doing now is trying to figure out how Amazon's fees to their suppliers have changed over time, and how the introduction of advertising, which is basically a fee on their suppliers, changed things. Calling it advertising is generous; they call it an advertising business, but it's really a placement fee. And again, it's very much like what we were talking about with Google. In the glory days of Amazon, it was totally customer-focused; Jeff Bezos said we want to be the most customer-centric company on earth. It's hard to understand how that can still be the case when you do a search and all you get are promoted products. Maybe if we dug in, they would say, oh, we have a system for evaluating placements in which willingness to pay is only one of many factors, and we've tested it and made sure it's balanced out by the other factors. I would believe that's possible. But they should be thinking about that.
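The "Sankey diagram of money flows" idea reduces to a simple data shape: a list of (source, destination, amount) edges that can be aggregated and checked for balance. Here is a minimal sketch; the flows and node names below are entirely made up for illustration, not actual figures for any company.

```python
# Toy disclosure dataset: money flows as (source, destination, amount).
# A Sankey diagram is just a picture of edges like these.
from collections import defaultdict

flows = [
    ("advertisers", "platform", 100),
    ("platform", "suppliers", 40),
    ("platform", "operating_costs", 25),
    ("platform", "profit", 35),
]

def node_totals(flows):
    """Aggregate total money into and out of each node."""
    inflow, outflow = defaultdict(int), defaultdict(int)
    for src, dst, amount in flows:
        outflow[src] += amount
        inflow[dst] += amount
    return dict(inflow), dict(outflow)

inflow, outflow = node_totals(flows)
# For a pass-through node like "platform", inflow should equal outflow,
# which is exactly the kind of consistency check disclosure would enable.
```

With disclosures in this shape, "who gets what and why" becomes an auditable computation: anyone could track how the share flowing to `suppliers` versus `profit` shifts over time.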
Is there a fundamental conflict between their advertising business and their idea of giving the best product to the customers? I'm interested in things like this film that came out several months ago, The Social Dilemma. They talked about how these companies can watch the things you like and the things you click on, then send you a subset of stimulants to excite you, to affirm you, to keep you on longer. The side effect, or I guess the intent, is that once they've shown advertisers that they get this long attention and recurring returns, they can raise their advertising rates. Yeah. Again, I think we need a lot more study of this. I don't know if you've seen Tim Hwang's book, Subprime Attention Crisis, where he makes the point that some of this hyper-targeted advertising is not actually as effective as promised, and that in some sense it's a bit like the subprime crisis: you're making a kind of bogus product that doesn't hold up to scrutiny, and eventually the bubble will pop. I don't know if he's right about that. But in any event, the thing I find super interesting, coming back to my thoughts about the master algorithm: I found it fascinating that in 2014, Facebook got taken out to the woodshed for the study where they researched whether they could make people happier or sadder by what they showed them in their newsfeed. That was considered a breach of research ethics. But studying whether you can get people to spend more time on your site so that you can make more money, which is the research they do every day, isn't even considered research. That's just doing business. That's marvelous. How can it be a breach of research ethics to experiment with whether you're influencing people's moods, when your business is influencing people's moods, and it's okay as long as it's for profit?
That's our society in a nutshell. And our media, which descended on Facebook like wolves on a crippled deer, have a lot to answer for there, because Facebook should have been doing more of that kind of experimentation. If it had become a legitimate subject of discussion, wow, we really are influencing how people feel, what do we as a society want to do about that, that would have been a really good question. Instead, we made them stop. Yeah, they might have discovered that they were doing things that were harmful to mental health, and then probably had the American Psychiatric Association take out an ad so they wouldn't publish it, so they would generate more business. I'm goofing, but this is a lot of complex stuff that really does matter to life. Yeah, it is. It reminds me of the wonderful book Phishing for Phools by George Akerlof and Bob Shiller: if there's an efficient market for everything, that includes fraud. There's an efficient market for manipulation, and so much of what we take for granted as the way our economy works is that efficient market for manipulation, not for people's benefit. The original ideal of the market, that it pulled everything together and created balance, and the price was a reflection of desire and cost and so on, was then a little bit superseded by people like Friedrich von Hayek, who said no, what the market is, is the information aggregator, and through prices you get the signals for the social response to do the right thing. Well, what you're telling me, I believe, is that now the technology is such that it's essentially zero marginal cost.
The information aggregator is pulling it all together, and you can see the outcomes without all of the iterative process, so that, almost like a planning market, to use the old communist language, it can be run at the central office, because the computer can reach so many places so inexpensively. You know, it's interesting, I've actually thought a little bit about that. There's a novel called Red Plenty about the Soviet central planners, and there was a fabulous essay about it, a book review by a computer scientist, about why even today central planning is still very, very hard. But when I read it, I thought: we're thinking about this wrong. Yes, these companies are central planners, but in the old model of central planning you were planning supply, and today's central planners are planning demand. They're manipulating demand and then letting supply rise to meet it, as opposed to the Soviet model, where you planned supply and were always wrong. They don't plan supply; they just plan demand. In the Soviet model, they told you what you needed and provided the supply, whereas these companies determine your preferences, what you would get, what you would eat, and so on. Now they're detecting what people want, influencing what people want, and then the suppliers can react to that information. That's right. And it's the combination of detection and influence that we need to understand. So all of this comes back to this idea: if I had a magic wand for the antitrust problem, it would not be first to say, oh, we'll break this up. It would be to vastly increase the amount of disclosure, because I don't think we understand these systems well enough yet. And you said earlier that they're able to do this at zero marginal cost. I don't think that's actually correct. The cost is quite high.
If it were zero... I mean, it's high enough that only a relatively small number of players are able to do it. And yes, certainly it's got a low marginal cost, because once you've built the infrastructure, you can do it for so many things that you would never have measured before, would never have influenced before. But it's not quite that simple. Once you play at scale, your fixed costs are prorated over large volume. Oh, absolutely. But there's something that's a little wrong to me there. And again, I'm totally naive; I learned no economics before the last five or six years, and I'm kind of floundering around in it. But it does seem to me that the idea from manufacturing of marginal costs going down is not the same in the digital realm. I remember back in the early days of Google, when I was first formulating these ideas, this would have been the early part of this century, maybe even before Google had gone public, I was thinking about what was different about these online platforms from the previous generation of software. Microsoft would have the gold master of Windows, and it would come out every two years or whatever. And I was working on this idea that software is becoming a process, something that you do every day. I made that comment to somebody who was a senior executive at Google. I said, you know, if you guys didn't keep working on Google, it would stop working within a couple of days. She said, oh no, you're wrong. It would be a couple of minutes. Yeah. Okay. So there's that whole point: the old idea of marginal cost assumes you make the thing and you're done.
And the point is, if Google or Amazon didn't keep doing what they're doing, there is an incremental cost that is ongoing. It's not like you have a fixed cost that you're once done with. In fact, because the world is changing so fast, you have to keep doing more and more. Look at Facebook: billions of people are pushing and pulling, and there's actually quite a high ongoing cost, driven by changes that are extrinsic to the system, unlike, say, the decreasing marginal costs of a manufacturing business. I haven't thought about this much before, but I'm quite convinced that somebody needs to write an economics paper about why the economics of these businesses doesn't fit the old model. Yeah, there may be marginal costs that are incremental, and increasing returns because the fixed cost of the platform is prorated over larger and larger volume. That's right. I may have confused the two notions when we talked. Let me ask you something. I'm really interested in your exploration of these technologies, their social ramifications, and their business models. But there's one other overlay here. We live in a world now where different regions have very different philosophical systems. The Chinese Confucian-Daoist world is very different from the Cartesian Enlightenment world. We see the difference between the United States on the one hand, with lots of suppliers; Europe, using these services but not having as many of the big monopolies originate there; Asia, with a centralized government structure and people fearing centralized control. How do we build a global system where these kinds of platforms interface across nations and across philosophical boundaries? That's a really, really interesting question.
And I guess I would say that that is one of the challenges that we get to face in the 21st century. Yeah. In fact, that is going to be a lot of the history of the 21st century. People are going to be pushing for advantage and trying to push their system over another. And is that really so different from history in the past? Probably not. It's a different manifestation with this technology, different kinds of challenges in the micro sense, but the struggle is ever-present. Think about colonialism, for example: somebody had a "better" methodology, and I say better in quotes, because it wasn't necessarily more moral, but it was more powerful. And we may well see, for example, let's imagine that China has a more powerful methodology, and they will outperform, and the rest of the world will be forced to adapt. Hey, that's what happened with American capitalism. So I guess I would say that figuring this out really matters, but it isn't necessarily going to be, oh, we're going to figure out how to interoperate and everybody will play by the same rules. It may be that somebody wins and we all have to play by their rules. Now, of course, there's the whole discussion about whether we get a digital central bank currency that becomes a reserve currency, and who controls that. There's been a great debate about how China's ideas about using a digital currency as a tool of social control and social tracking internally run against their goal of having a digital currency that becomes a world reserve currency.
Those two things fight with each other, because other countries are wary of it. So you look at things like that; we're going to have a lot of different factors that influence the future. And I guess I would also say that we're entering a really interesting world where the kind of challenges we face are ones where the old solution becomes the new problem. I talk a little bit about this in my book; I was influenced by Mark Blyth, some things I read of his, in my account of what we dealt with after World War Two. After World War One, they continued to optimize for capital and so on, and the winners bankrupted the losers. They got the terrible outcome of the Great Depression and hyperinflation in Germany, leading to World War Two. After World War Two, they said: we're not doing that again. Right. So you got the Marshall Plan, all the social insurance in the US, all those things we did when we were optimizing for full employment. Then you get to the 70s, and it's worked its way out into cost-push inflation, and, oh my God, we've got to change. And of course, Blyth talks about Goodhart's law: once you start optimizing for a measure, it stops working. And I think what we got to with our current version of shareholder capitalism has also had a 40-year run. We tamed inflation; it looked like it was working. And now it's not working: Goodhart's law kicking in again. So the question is, what's going to replace our consumerist, capital-appreciation-focused system? And we're already seeing all these things around the edges.
And this is why, when I talk about the next economy, I'm putting together this giant basket of factors. On the one hand, there's the caring economy. We're starting to include it inside the value boundary when rich companies say, we're going to give you three months, no, six months of parental leave, and it's going to be for both parents. That's basically valuing the caring economy. When we go, oh my god, pandemic unemployment insurance, that's the caring economy. And the question is, will it just be government's job to build the social safety net? Or is it companies' job? Or do we start saying, oh, we just have to optimize differently? There are all these discussions: universal basic income, for instance. I love Kai-Fu Lee's idea that no, we should tackle the caring economy with a social investment stipend, where we pay people to raise their kids, pay people to look after their elders, pay people to work in their communities. We have all these fresh opportunities to think about whether we still want the same kind of economy, or whether it's time for one of these generational shifts. And in the back of my mind, I always have climate change as a giant driver of this. When you have hundreds of millions of climate refugees and all those pressures, are we going to learn a lesson? Are we going to ask how we turn this into an opportunity? So when I think of the great challenges of the 21st century, one I like to pose is: how do we turn refugees into settlers? Right now we're treating them as a temporary problem. They're not going to be a temporary problem.
We have to figure that out, and there are lessons from history, though unfortunately some of them are bad lessons. Think about Israel. The good side is that all those people were not refugees, they were settlers. The bad side is that they dispossessed the people who were there before. But could you imagine a 21st-century regime that looked at the great depopulation of vast parts of the United States and said, no, let's actually build new settlements there, let's figure out how we would do that? There are so many interesting challenges. Same thing with smart cities. Do we really need to build a smart city for tech people on the outskirts of Toronto? No. Google, figure out how to make a refugee camp that's the new Hong Kong or the new Singapore. That would be a challenge. One of the things at the forefront of my mind right now is the challenges and disruptions on the continent of Africa. At one level, there are things like what Jack Ma and the Luohan Academy are working on, creating new networks with the kind of technology you study to facilitate development, because what I'll call the East Asian model, manufacturing and infant-industry protection, is being obliterated by global supply chains, machine learning, and automation. Secondly, it's an equatorial region, and climate change will affect subsistence farming and undermine social stability in an underdeveloped region. Finally, according to the International Organization for Migration, by the year 2070, absent a major war, the population of the continent of Africa will be 5.1 billion people.
So in all the things you just took me through, there are all kinds of dimensions: technology playing a positive role, climate change intruding, massive potential migration. It's a fascinating laboratory for the kind of work you explore. It is. And I have to say, it's not going to be pretty. And that comes back to something you mentioned at the beginning, when we talked a little about Social Science Foo Camp. I didn't mention one of my favorite sessions, from Ada Palmer, who is a science fiction writer and also a Renaissance historian. Or, as she puts it, a Renaissance historian who is also a science fiction writer; the two spring from the same deep sense. She has a blog called Ex Urbe, "from the city" in Latin, where she writes these popular essays about how history happens. They're fascinating, because one of her themes is that we tend to interpret history through our own lens. She says people like to claim that history is written by the victors. That's wrong. Among historians, what we say is that history is written by the people who write history, and they have their own motivations. She gives the example of one of her peers who, as a grad student, wrote a thesis about a period in Mexico and other parts of Central America when there was a whole movement among the educated Spanish to rewrite Roman history to justify the way they were treating the Native Americans. You could see it there. But could we see that Edward Gibbon was doing the same thing, justifying the British Empire? And she goes back through many, many different examples.
And actually, it's kind of funny, because I'm just reading something that isn't history but makes very much the same point: Emily Wilson's translation of the Odyssey from a couple of years ago. She talks about the fashions in translation and the values that get expressed through them. Both Ada and Emily make the same point, which is that the people in the past were not like us. Ada has a great series of essays on our myth of the Renaissance, this idea that all these people were proto-moderns, that they were atheists. No, nobody got burned at the stake for being an atheist. She says, look, I looked at all the people who were burned at the stake. And she goes off on this Neil deGrasse Tyson claim about Giordano Bruno being burned at the stake for espousing Lucretius. No, nobody got burned for that; they thought Lucretius was ridiculous, because nobody could believe in atheism. It was a great foil. People got burned because they believed some variant of things crazy enough that now we go, why would you kill somebody over that? And it's the same with Emily Wilson's Odyssey: here's Pope turning out his interpretation of Odysseus, and so on. But I guess I think we have to have a kind of humility as we think about the 21st century. It's not going to look like us. And we have to look back and go, oh yeah, the way we are today is not how we were 50 years ago. There is a continuity, but there's also massive discontinuity.
So for example, that post-World War II economy I was talking about earlier was a different economy, one in which we valued labor. This idea that somehow the market doesn't value labor anymore? No, no, no. We told the market not to value labor anymore. There's a conversation I had with Brad DeLong once where he said, look, I have a toy model, I haven't really developed it, where you can actually get the same output with more technology and less labor, or with less technology and more labor; you can get equivalent results. So we made a choice, and we made that choice because it was to the advantage of some people. And this idea keeps cropping up. I'm thinking about Ibram X. Kendi's book How to Be an Antiracist. He comes to a wonderful conclusion, which is very similar. Everybody has this idea that being an antiracist means having certain values and feelings and eradicating others. He says no: racism is not a set of feelings. Racism is a set of policies, put in place by people in power, which are sold by influencing and creating those feelings. And I think that is a really powerful notion.
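The "toy model" Tim attributes to Brad DeLong above can be illustrated with a standard textbook production function. To be clear, this is my own sketch, not DeLong's actual model: the Cobb-Douglas form, the parameter values, and the two capital/labor bundles are all assumptions chosen purely for illustration.

```python
# Illustrative sketch of the idea that the same output can be produced with
# more technology/capital and less labor, or vice versa. The Cobb-Douglas
# form and all numbers here are assumptions for illustration only.

def output(capital, labor, tfp=1.0, alpha=0.5):
    """Cobb-Douglas production: Q = A * K^alpha * L^(1-alpha)."""
    return tfp * (capital ** alpha) * (labor ** (1 - alpha))

# Capital-intensive bundle: lots of machines, little labor.
q_hi_tech = output(capital=400.0, labor=25.0)

# Labor-intensive bundle: fewer machines, more workers.
q_hi_labor = output(capital=25.0, labor=400.0)

# Both bundles sit on the same isoquant: equal output, different input mix.
print(q_hi_tech, q_hi_labor)  # both equal 100.0 (up to float rounding)
```

With a substitutable technology like this, which point on the isoquant an economy ends up at is not dictated by the production function itself; as Tim argues, it reflects choices embedded in policy and incentives.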