Well Marc, if your entrance music were to be Beethoven, which symphony and why? For those of you who do care and know who Tyler Cowen is, this is how his podcasts start. This is how he intimidates his guests into submission. First of all, I just wanted to say thank you to everybody for being here with us today. We're really grateful that you were all able to spend time with us, and hopefully it's been useful. Second is, I'm going to get new business cards printed up that say: you either know who I am or you don't care. Or both. Or both. I mean, how can you possibly, let's see, I guess we have to rule out Beethoven's Ninth Symphony because that's the official music of the European Union? Is that right? That's correct. That's the official anthem of the European Union. That's right. Which is just such a terrible, mean thing for them to do to such a great piece of music. We should lodge a formal diplomatic protest. I guess probably Beethoven's Fifth in retaliation.

I would peg you as the Fifth. Now, how will AI make our world different five years from now? What's the most surprising way in which it will be different?

So there's a great breakdown on adoption of new technology that the science fiction author Douglas Adams wrote about years ago. He says any new technology is received differently by three different groups of people. If you're below the age of 15, it's just the way things have always been. If you're between the ages of 15 and 35, it's really cool and you might be able to get a job doing it. If you're above the age of 35, it's unholy and against the order of society and will destroy everything. AI, I think, is so far living up to that framework. What I would like to tell you is that AI is going to be completely transformative for education, and I believe that it will be. Having said that, I did recently roll out ChatGPT to my eight-year-old. I was very proud of myself, because I was like, wow, this is just going to be such a great educational resource for him. I felt like Prometheus bringing fire down from the mountain to my child. And I installed it on his laptop and said, son, this is the thing that you can talk to anytime and it will answer any question you have. And he said, yeah. And I said, well, no, this is like a big deal. It answers questions. He's like, well, what else would you use a computer for? And I was like, oh god, I'm getting old.

So I actually think there's a pretty good prospect that kids are just going to pick this up and run with it, and I actually think that's already happening, right? ChatGPT is fully out, and Bard, and Bing, and all these other things. So I think kids are going to grow up with, you can use various terms, an assistant, friend, coach, mentor, tutor. Kids are going to grow up in this amazing kind of back-and-forth relationship with AI. And anytime a kid is interested in something, if there's not a teacher who can help with it, or if they don't have a friend who's interested in the same thing, they'll be able to explore all kinds of ideas. And so I think it will be great for that. I think it's obviously going to be totally transformative in fields like warfare, and you already see that. The concern, quite honestly: I actually wrote an essay a while ago on why AI won't destroy all the jobs, and the short version of it is because it's illegal to do that, because so many jobs in the modern economy require licensing and are regulated.
And so I think the concern would be that there's just so much glue in the system now that prevents change, and it'll be very easy to sort of not have AI healthcare or AI education or whatever, because literally some combination of doctor licensing, teacher unions, and so forth will basically outlaw it. And so I think that's the risk.

If we think of AI and its impact in sociological terms, large language models, who will gain in status and who will decline in status, and how should this affect how we think about policy?

Yeah, so first of all, it's important to qualify exactly what's going on with large language models, which is super interesting. This thing has happened that you read about a lot in the press, which is that there was this general idea that there would be something called AI at some point, and then large language models appeared, and everybody said, aha, that's AI, just like we thought it would be, and then everybody extrapolates out. And that's true to a certain extent. But the success of large language models was very unexpected in the field, and actually the origin story of even ChatGPT is that this is not what OpenAI started out to do; they started to do something different. And there's actually one guy, his name is, I think, Alec Radford, and he was literally off in the corner at OpenAI working on this in, like, 2018, 2019, and then it just basically became this revolution, building on work that had been done at Google. So it was this very surprising thing.

And then it's important to qualify how it works, because it's not just some sort of robot brain. What it is, basically, is that you feed essentially, ideally, all known human-generated information into a machine, and then you let it build a giant matrix of numbers and basically correlate everything. That, in a nutshell, is what these things are. And then basically what happens is, when you ask it a question, or if you ask it to make a drawing or something, it essentially does a search across basically all of these words and sentences and diagrams and books and photos and everything that human beings have created, and it tries to find the optimal kind of path through that, and that's how it generates the answer that it gives you. And so philosophically it's just this really profound thing, I think, which is that it's like you as an individual using this machine to stare at the entirety of the creation of all human knowledge and then have it played back at you. And so it harnesses the creativity of thousands of years of human authors and artists and then derives new kinds of answers or new kinds of images or whatever, but fundamentally you're in interaction with our civilization in a very profound way.
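To make that "feed everything in, correlate it, then search for a likely path" description a little more concrete, here is a deliberately tiny sketch. It is purely illustrative: toy word-pair counts stand in for the billions of learned neural network weights in a real model, and everything in it is made up for the example rather than taken from any actual system.

```python
# A minimal sketch of the loop described above: ingest text, tabulate which
# words tend to follow which (a toy stand-in for the "giant matrix of numbers"),
# then generate by walking a likely path through those statistics. Real large
# language models use learned neural-network weights and much longer context,
# but the feed-in / correlate / traverse shape is the same.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Traverse the statistics, picking a plausible next word at each step."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

At real scale the counting table becomes a trained neural network and the traversal becomes token-by-token sampling, but the intuition carries over.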
In terms of who gains and who loses status, there's actually a very interesting thing happening in the research right now. There's a very interesting research question about the impact on job skill, for example, for people who work with words or work with images and are starting to use these technologies in the workforce. And the question is, who benefits more? The high-skilled worker, think lawyer, doctor, accountant, graphic designer, who uses these tools to take an additional quantum leap in skill, and that would be a theory of separation. But the other scenario is the average or even low-skilled worker who gets upgraded. And of course, just by the nature of the economy, there are more people in the middle. And at least so far, there's been a series of research studies coming back saying the uplift to the average worker is actually more significant than the uplift to the high-skilled worker. And so what actually seems to be happening right now is a compression, by lifting people up. Social questions are often framed as a zero-sum game of who gains and who loses, but there may be something here where just a lot of people get better at what they do.

Why is open source AI in particular important for international security?

Yeah, so for a whole bunch of reasons. One is that it is really hard to do security without open source. There have actually been two schools of thought on information security, computer security broadly, that have played out over the last 50 years. There was one school of security that says you want to basically hide the source code. And this seems intuitive, because presumably you hide the source code so that bad guys can't find the flaws in it, right? And presumably that would be the safe way to do things. And then over the course of the last 30 or 40 years, what's evolved is the realization in the field, and I think very broadly, that actually that's a mistake. In the software field we call that security through obscurity, right? We hide the code so people can't exploit it. The problem, of course, is that the flaws are still in there, right? And so if anybody actually gets to the code, they basically have a complete index of all the problems, and there's a whole bunch of ways for people to get to code; they hack in. It's actually very easy to steal software code from a company. You hire the janitorial staff to stick a USB stick into a machine at three in the morning. So software companies are very easily penetrated. And so it turned out security through obscurity was a very bad way to do it. The much more secure way is actually open source: basically put the code in public, and then build the code in such a way that when it runs, it doesn't matter whether somebody has access to the code. It's still fully secure. And then you just have a lot more eyes on the code to discover the problems. And so in general, open source has turned out to be much more secure. So I would start there: if we want secure systems, I think this is what we have to do.

What's the biggest adjustment problem governments will face as AI progresses? For instance, if drug discovery goes up by 3x, all of a sudden the FDA is overloaded. If regulatory comments are open, AI can write great regulatory comments. What does government have to do to get by in this new world?
Yeah, so I think for every scenario like this, and by the way, hopefully at least the first of those two scenarios happens, maybe also the second, there should be a corresponding phenomenon happening on the other side. And so the government, correspondingly, should be using AI to evaluate new drugs. A company shows up with a new drug design; there should be AI assist to the FDA to help them evaluate new drugs. A regulatory agency that takes public comments should have AI assist for being able to process all that information, aggregate it, and then reply back to everybody. And this is true of basically every possible threat. This is a very interesting thing about AI: for every possible threat you can think of AI posing, there is basically a corresponding defense that has to get built. I'll pick another one, cybersecurity. People are, I think, legitimately concerned that AI is going to make it easier to create and launch cybersecurity attacks. But correspondingly, there should be better defenses; there should be AI-based cybersecurity defenses. By the way, we see the exact same thing with drones. Weaponized AI autonomous drones are clearly a threat, as we see in the world today. So we need AI defenses against drones. The cynical view would be that this is just a classic arms race, attack defense, attack defense, and does the world get any better if there are just more threats and more defenses? I think the positive way of looking at it is that we probably need these defenses anyway. So even if we didn't have AI drug discovery, I think we should be using AI to evaluate drugs. Even if we didn't have AI drones, we should still have defense against standard missiles and against enemy aircraft. Even if we didn't have AI-driven cyber attacks, we should have AI-driven cyber defenses. So I think this is an opportunity for the defenders to not only keep up, but also build better systems for the present-day threat landscape.

The Biden AI directive, what's the best thing about it? What's the worst thing about it?

It didn't overtly attempt to kill AI. So that was good. You never know with these things how much teeth they're going to try to put into it. And then of course, there's always the question of whether it stands up in court. But there were things being discussed in the process that were much, much worse, and I think much more hostile to the technology, than what ended up being in it. So I think that's good news. I think it was quite benign in terms of its flat-out directives, which is good. The issue with it, and people have different opinions, but my opinion is that it kind of green-lit essentially 15 different regulatory agencies to put AI under their purview in undefined ways. And so we will now have, I think, a relatively protracted process of many regulators from many agencies, without explicit authority in the domain, basically inserting themselves into the space. And then presumably at some point there will be a determination of who has purview over what, but it seems like we're in for a period of quite a bit of confusion as a result.

So how much more green energy do we need to, in essence, fuel all of this AI? And where will it come from? What do you see the prospects like for the next 20 years?
Yeah, so the good news with AI, and by the way also with crypto, because there's always a lot of controversy around crypto and Web3 and blockchain around energy use, the good news from an energy standpoint is that these systems lend themselves to centralization in data centers, right? And so if we need a million, going to 10 million, going to 100 million, going to a billion AI chips, they could be distributed out all over the place, but they can also be highly centralized. And because you can highly centralize them, you can think not just in terms of building a server, you can think about building a data center that's an integrated thing from the chip all the way to the building, or to the complex of buildings. And the way those modern data centers are built by the leading-edge companies now is that they're built on day one with an integrated strategy for energy and for cooling. And so any form of energy that you could do in a very efficient way, in a very clean way, or any new energy technology, AI is a use case for developing and deploying that kind of power. And so, building on what we've seen from internet data centers, that could be geothermal, that could be hydroelectric, that could be nuclear fission, that could be nuclear fusion, solar, wind, big battery packs, and so forth. So I think the aspirational hope would be that this is another catalyst to a more advanced rollout of energy. And even if there's a net energy increase, the motivation to get to higher levels of efficiency will be net good in helping us get to a better energy footprint.

And which of those energy sources in your view is most underrated?

Oh, I mean, nuclear fission for sure is the most underrated today. And so, yes, if we could wave the magic wand, we ought to be doing what Richard Nixon proposed in 1971, right? We ought to build what he called Project Independence, which was: build a thousand new nuclear power plants in the U.S., cut the entire U.S. grid over to nuclear electricity, go to all electric cars, do everything else. Richard Nixon's other great corresponding creation, the Nuclear Regulatory Commission, of course guarantees that won't happen. The plan is exactly on track. But we could. And so either with existing nuclear fission technology, or, there's actually a significant number now of new nuclear fission startups, as well as fusion startups, working on new designs. And so this would certainly be a great use case for that.

So if the nations that will do well in the future are strong in AI and strong in energy, thinking about this in terms of geopolitics, which countries rise in importance? For better or worse?

Yeah, so, okay, a couple of different things. Add a couple more things to that, which is: which countries are in a position to best invent these new technologies? And then there's a somewhat separate question of who's in the best position to deploy, because it doesn't help you that much to invent it if you can't deploy it. And so I would put that in there. But, I mean, look, I would give the US very, very high marks on the invention side. I think we're the best.
I think we have the best R&D innovation capability in the world in most fields, not all, but most. And I think that's certainly true of AI. And I think that's at least potentially true in energy; I don't know whether it actually is, but it could be. And so we should be able to forge ahead on that. China is clearly the other country with critical mass in all of this. And you could quibble about the level of invention versus fast-follow, and talk about IP acquisition, things like that. But nevertheless, whatever your view is, they're moving very quickly and aggressively and have critical mass: a big internal domestic market, a huge number of researchers, and a lot of state support. So I think by and large, for sure on AI, and then probably also in energy, we're looking at primarily a bipolar world for quite a while, and then spheres of influence going out from there. I would say Europe is a dark horse, in a strange way, in that the EU seems absolutely determined to ban everything. They've sort of put a blanket ban on capitalism, and within that, ban AI and ban energy. On the other hand, we have this incredible AI company called Mistral in France, which is probably the leading open source AI company right now and one of the best AI companies in the world, and the French government has actually really been stepping up to help the ecosystem in Europe. And so I would actually like to see a tripolar world. I'd like to see the EU fully punch in, but I'm not sure how realistic that is.

So let's say you're in charge of speeding up deployment in the United States. What is it you do? State level, local level, feds? What should we all be doing?

Of AI specifically?

Everything. It is all increasingly interrelated, right? AI, energy, biomedicine, everything.

Yes, yes. Well, AI takes you straight to chips, which takes you straight to the CHIPS Act, which has not yet resulted in the creation of any chip plants, although it might someday. The most basic observation is maybe the most banal, which is: stagnation is a choice, decline is a choice. As Tyler's written at great length, the US economy downshifted its rate of technological change basically since the 1960s. Technological change, as measured by productivity growth in the economy, was much faster prior to the last 50 years than in the most recent 50 years. And there's a big argument about exactly what caused that, but a lot of it is just the imposition of blankets and blankets of regulation and restrictions and controls and processes and procedures and all the rest of it. So you could start by saying step one is do no harm. And this is our approach on AI regulation, which is: don't regulate the technology, don't regulate AI as a technology any more than you regulated microchips or software or anything like operating systems or databases; instead, regulate the use cases. And the use cases are generally regulated anyway. It's no more legal to field a new AI-designed drug without FDA approval than it is a conventionally designed drug. So apply the existing regulations as opposed to hamstringing the technology. So that's one. And then, you know, energy exploitation.
Again, the energy is just pure choice. Like, we could be building the 1,000 nuclear plants tomorrow. My favorite idea there, which always gets me in trouble and so I can't resist: the Democratic administration should give Koch Industries the contract to build 1,000 nuclear reactors, right? Everybody gets revenge on everybody else. The Democrats get Charles Koch to fix climate change, and then Charles gets all the money for the contracts. So everybody ends up happy. Nobody has yet bitten on that idea when I've pitched it, but maybe I'm not talking to the right people. So, look, we could be doing that. We'll see if we choose to.

Look, the chip plant thing is going to be fascinating to watch. We passed the CHIPS Act, and in theory the funding is available, and the American chip companies are generally pretty aggressive and, I think, trying pretty hard to build new capacity in the U.S. But there was this actually very outstanding article in the New York Times some months back by Ezra Klein where he goes through and says, okay, even suppose the money's available to build chip plants, is it actually possible to build chip plants in the U.S.? And he talks about all of the different regulatory and legal requirements and obligations that get layered on top, and he was sort of speculating as to whether any of these plants will actually get built. And so again, I think we have a level of fundamental choices in society here, which is: do we want to build new things?

I can't say how exciting it's been, at least on the West Coast, for Las Vegas to get the Sphere, because it's now impossible to visit Las Vegas without it. Everybody's always complaining, the Egyptians built the pyramids, where are our pyramids? And it's like, ah, we have a Sphere. Just flying into Vegas gets your juices flowing, gets you all fired up, because this thing is amazing. And by the way, I'm just talking about the view from the outside; I understand that the thing on the inside is also amazing. So we clearly can do that, at least in Vegas, where Ben lives now. In London, I think they just gave up on building the Sphere. So that's the other side of it. And so we do have to decide whether we want these things to happen. It's a little bit dispiriting to see the liquefied natural gas decision that just came down.

But are the roots of this stasis quite general and quite cultural? Because parents coddle their children much more, there are higher rates of mental illness amongst the young, young people, it seems, have less sex. Along a lot of cultural variables, the percentage of old music people listen to compared to new music, there seems to be a more general stagnation. So how would you pinpoint our loss of self-confidence or dynamism? Where's that coming from?

Yeah. Well, first of all, to be clear, we're very much in favor of young people not dating, because that's very distracting from their work at our startups. And fortunately in our industry, we have long experience with not having a dating life when we're young. So that works out well. So it's not all bad. But it is really interesting.
I mean, the view is from experience. Look, Silicon Valley has all kinds of problems, and we're kind of a case study for a lot of this. I mean, it's not like you can build anything in Silicon Valley, right? Our politicians absolutely hate us, and they don't let us do anything if they can avoid it. So we have our issues. The view from the Valley is, yeah, a lot of kids are being brought up and trained to adopt a fundamentally pessimistic, how to put it, stagnation-oriented outlook, to have very low expectations. Basically, a lot of what passes for education now is teaching people how to complain, which they're very good at. The complaining has reached operatic levels lately. So there is a lot of that.

Having said that, look, I'm also actually really optimistic, and in particular I'm quite optimistic about the new generation coming up, Gen Z, and then I think it's Gen Alpha, and then whatever my eight-year-old is. We're seeing more and more kids coming up who have been exposed to a full load of programming, cultural programming, educational programming, that says you should be depressed about everything, you should be upset about everything, you should have low ambitions, you shouldn't try to do these things. And they're coming out with a very radical, hard shove in the other direction; they're coming up with tremendous energy and tremendous enthusiasm to actually do things. Which is very natural, right? Because kids rebel. And so if the system is teaching stagnation, then at least some kids will come up the other way and decide they really want to do things in the world. And so I think entrepreneurs in their 20s now are a lot better than certainly my generation. They're frankly more aggressive than the generation that preceded them, and they're more ambitious. Now, we're dealing with a minority, not a majority, but every hour I get to spend with 20-year-olds is actually very encouraging.

One emotional sense I get from your walk-on music, Beethoven's Fifth Symphony, is just that the stakes are remarkably high. Now, if we're looking for indicators to keep track of whether, in essence, things are going your way: greater dynamism, freedom to build, willingness to build, American dynamism. What should we track? What should we look at? How do we know if things are going well?

Yeah. I do not come to the world with comprehensive answers. The overall answer is that productivity growth in the economy is a great starting point. Economic growth is a great starting point. So the overall questions are there. Most of our economy is dominated by incumbent institutions that have no intention, I don't think, of changing or evolving unless they're forced to. Certainly most of the business world now is one form of oligopoly or another that has various markets locked up. So I don't think there's some magic bullet to hugely accelerate things. Having said that, I think attacking from the edges is the thing that can be done, which is basically what we do, what Silicon Valley does. And when you attack from the edges the way that our entrepreneurs do, look, a lot of the time they don't succeed.
It's a high-risk occupation with a lot of risk of failure, but when they succeed, they can succeed spectacularly well. We have companies in the American economy that were venture-backed in the 1970s, and actually even some that were venture-backed in the 1990s and 2000s, that are now bigger than most national economies. Was it Apple? I think Apple's market cap is bigger than the entire market cap of the German stock market. I think that's right. Just one company. And Apple was a venture-backed startup, two kids in a garage in 1976, not that long ago. It's bigger than basically the entire German industrial public market. And so attacking from the edges, sometimes you can get really, really big results. Sometimes you just prod the system. Sometimes you just spark people into reacting, and that pushes everything forward. And then the other question always is: what are the tools, from our standpoint, that startups have in order to try to really change things? There's a bunch of such tools, but there are always two that really dominate. One is just the magnitude of the technological change in the air that can be harnessed. And so we're always looking for the next super cycle, the next breakthrough technology, in which you can imagine a thousand companies doing many different things, all punching into incumbent markets. And AI certainly seems like one of those. And then the other is just the sheer animalistic ambition, energy, animal spirits of the entrepreneurs and of the teams that get built. And like I said, I think the best of the startups today are more aggressive, more ambitious, more capable. The people are better, they execute better, than at least I've ever seen. So I think that's also quite positive.

Who's a social thinker who helps you make sense of these trends?

Oh yeah, James Burnham is my favorite.

Who? And why Burnham?

Why Burnham? Yeah, so Burnham is not famous, but he should be famous. Burnham is a fascinating story. He was a thinker in the 20th century who talked a lot about these issues. He started out life, as a lot of people did in the 1920s and 30s, as a dedicated Trotskyite, a full-on communist. But he was a very special guy, a very brilliant guy. And he was such a dedicated communist that he was a close personal friend of Leon Trotsky, which is how you really know you've made it as a communist. And he would have these huge arguments with Trotsky, which is not the safest thing in the world to do, but apparently he got away with it. He was a very enthusiastic communist revolutionary through the '30s. And then in the '40s, he's a very smart guy, and he started to figure out that was a bad path. And he went through this process of rethinking everything. And by the 1950s, he was so far to the right that he was actually a co-founder of National Review magazine with William Buckley, who always said Burnham was the intellectual leading light at National Review. And so he's got works that he wrote that accommodate the full spectrum of politics. But in his middle period, this is in the 1940s, he was trying to figure out where things were going.
And there were enormous questions in the 1940s, because it was viewed as basically a three-way war for the future between communism on the far left, fascism on the far right, and then liberal democracy floating around there somewhere. His best, most well-known book is called The Managerial Revolution, which talks a lot about the issues we've been discussing, and it was written in 1941. And it's fascinating for many reasons, part of which is that he was still mad about communism, so a lot of it is him debunking communism, but also they didn't know who was going to win World War II. And so it talks about this battle of ideologies as if it were still an open topic, which is super interesting. But he did this very Marxian analysis of capitalism. He made the observation, which I see every day, that there are fundamentally two types of capitalism. There's the original model of capitalism, which he calls bourgeois capitalism, and you could think of Henry Ford as the archetype of that: a capitalist starts a company, runs the company, name on the door, ownership of the company, control of the company, sort of dictator of the company, complete alignment of the company with an individual. And then he talks about this other form of capitalism emerging at that time, called managerial capitalism. And for managerial capitalism, think about today's modern public companies, right? Think about Walmart or whatever, any public company where in theory there are shareholders, but really what there are is millions and millions of shareholders that are incredibly dispersed, who own small stakes. Everybody in this room owns, you know, three shares of Walmart stock in a mutual fund somewhere. You don't wake up in the morning wondering what's happening to Walmart. It doesn't even occur to you to think of yourself as an owner. And so what you get instead is this managerial class, of actually both investors, like fund managers, and also executives and CEOs, who actually run these companies. And they have control, but without ultimate responsibility, right? Without ultimate ownership. And the interesting thing he said about that is: look, managerialism is not that it's good or bad, it's sort of necessary, because companies and institutions and governments and all the rest of it get to the point where they're just too big and too complicated for one person to run everything. And so you're going to have the emergence of this managerial class that's going to run things. But there's a flip side, which is that the people who are qualified to be managers of large organizations are not themselves the kind of people who become bourgeois capitalists. They're the other kind of person. And so they're often good at running things, but they generally don't do new things, right? They generally don't seek to disrupt or seek to create or seek to invent. And so one way of thinking about what's happened in our system is that capitalism used to be bourgeois capitalism, and it got replaced by managerial capitalism without actually changing the name. That will necessarily lead to stagnation. And by the way, it may be necessary that that happens, because the systems are too complicated, but it will necessarily lead to stagnation.
And then what you need is basically the resumption of bourgeois capitalism to come back in and, at the very least, poke and prod everybody into action. And that, aspirationally, is what we do and what our startups do.

Marc Andreessen, thank you very much.

Good, great, thank you everybody.