Marc Andreessen has helped a lot of people get rich, including Marc Andreessen. And he's made millions of people's lives more fun, more efficient, or just a little weirder. He's the co-creator of the first widely used web browser. He's the co-founder of the venture capital powerhouse Andreessen Horowitz. And though he hates the industry term unicorn for a private tech firm valued at more than a billion dollars, he's a famously successful unicorn wrangler. He was an early investor in Facebook, Pinterest, LinkedIn, Twitter, Lyft, and more. Andreessen is also aggressively quotable, whether it's his classic 2011 pronouncement that "software is eating the world" or his more recent "there are no bad ideas, only early ones." And in 2014, he said, "In 20 years, we'll be talking about Bitcoin the way we talk about the internet today." A born bull, Andreessen is an optimist who places his hope for the future squarely in the hands of the 19-year-olds and the startups no one has heard of. As splashy AI such as ChatGPT and DALL-E begins to permeate our daily lives and the predictable panic ramps up, Reason sat down with Andreessen to talk about what the future will look like, whether it's still going to emerge from Silicon Valley, about the role of government in fostering or destroying innovation, and what you should read on your next beach vacation. So let's start with AI. I tend to be skeptical on a pretty basic level of people who want to claim that this time it's different with any given tech or cultural trend. But I guess what I want to start with is: is it different this time? Yeah, can I do the long answer? Absolutely. Okay, good. So look, the long answer is, AI has been kind of the fundamental dream of computer science going all the way back to the 1940s, right? So Alan Turing used to be this obscure historical figure, and then they made the movie The Imitation Game, which I hear is a great movie, which has made him more famous.
And he was one of a bunch of guys back then who sort of invented the computer as we know it today. It was during the heat of World War II, but even in the very beginning, human beings, I think, can't resist anthropomorphizing everything, and so he had this drive right up front saying, let's build an electronic brain, right? Let's build artificial intelligence. And so he and John von Neumann and Claude Shannon and a lot of those guys in that era were thinking about this. And this is literally when they were wiring the first computers together, right? The computers they were working with in those days ran on vacuum tubes, and they literally would hand-wire the computers together, hand-soldering the connections. The term computer bug actually comes from the fact that the problem they had in those days was not a software bug as we understand it today. The problem they would have is that an actual insect would fly into the wiring and fry, and that would short out the computer. And so that's why computer problems are called bugs. So anyway, this has been a dream going all the way back to that era. And then basically the field of computer science has always had this AI specialty in it, for what's now 78 years, right? And, to your point, repeatedly there was this dream that it was finally about to happen. There were, depending on how you count, five or six AI booms, where people were really convinced that this time is the time it's going to happen. And then there were what in the field are referred to as the AI winters, right?
In which it turns out, oops, not yet. But if you think about 78 years, there were AI researchers who literally were born, went to college, got their PhDs, worked in the field their entire careers, and died in that period without ever actually seeing the results of all of their work. And so it's one of these things like Isaac Newton spending a lot of his time pursuing alchemy, right? There are some things that people spend decades on, or even longer, and they never work. And then there are some things that are 78-year overnight successes; sometimes these things actually do work. So for sure, we're in another one of those AI booms. It's AI boom number five or six or seven. For sure, there's this rush of enthusiasm. But there are a couple of things that are different about what's happening right now. And the big thing that's different is that there are these very well-defined tests, ways of measuring intelligence, capabilities, let's say. And computers have started to do actually better than people on these tests. And these are tests that involve interactions with fuzzy reality, right? So these aren't just tests of, can you do math faster; these are tests of, can you process reality in a superior way. And the first of those test breakthroughs was in 2012, when computers became better than human beings at recognizing objects in images, right? So you throw up a whole bunch of photographs, and it's like, to use the internet meme, is this a cat, or is this a cinnamon bun, or is this something else?
And computers are now actually better at doing that kind of object recognition in images at scale than people are. That's the breakthrough that has made the self-driving car a real possibility, because what the self-driving car is doing is basically processing large amounts of images and trying to understand, is that a kid running across the street or is that a plastic bag? And should I hit the brakes or should I just keep going? And self-driving cars are starting to work. Tesla's full self-driving isn't perfect yet, but it's starting to work quite well. And then Waymo, one of our companies, they're up and running now in something that people can experience. So that's starting to happen. And then we started to see these breakthroughs in what's called natural language processing about five years ago, where computers started getting really good at understanding written English, and they started actually getting very good at speech synthesis, which is quite a challenging problem itself. And then most recently, there's this huge breakthrough in the product that's come to market called ChatGPT, or, more generally, this phenomenon; ChatGPT is an instance of a broader phenomenon in the field called large language models, or LLMs. And a lot of people have now tried this, so they've had hands-on experience with it. And I'll just tell you, a lot of people outside the tech industry are shocked by what that thing can do. And I'll just tell you, a lot of people inside the tech industry are shocked by what that thing can do.
And so, you know, it feels like... Yeah, I was actually going to ask about that, because ChatGPT does feel, to those of us who don't fundamentally understand what's going on, like a little bit of a party trick or a magic trick, right? And sometimes it's the classic "any sufficiently advanced technology is indistinguishable from magic," and sometimes it really is a trick. But you're saying, no, no, this is something real; we can see something real underneath. Well, so it's also a trick. So it's both, right? This is why your question is such an interesting question. There's a big, profound underlying question we're going to get to, which is: what does it mean to be smart? What does it mean to be conscious? What does it mean to be human? Ultimately, all the big questions are not what does the machine do; ultimately, all the big questions are what do we do? Right? How do we actually form sentences? How do we actually make arguments? How do we actually write screenplays and write poetry and do all the things we do? And how much of what we do is a trick? And so we'll come back to that in a second. But look, let me describe what these things, LLMs like ChatGPT, actually are. What they actually are is basically very fancy autocompletes. And autocomplete is a standard computer function. If you have an iPhone, you start typing a word and it will offer you an autocompletion of the rest of that word, so you don't have to type the whole word. And then Google's Gmail has autocomplete now for sentences, where you start typing a sentence: "Oh, I'm sorry, I can't make it to your event."
And it will suggest the rest of the sentence. And so basically what LLMs are is autocomplete across a paragraph, or across five paragraphs, or, by the way, maybe autocomplete across 20 pages, or, in the future, maybe autocomplete across an entire book. In other words, you should close your eyes and imagine a future version of this thing. You sit down to write your next book. This is literally going to happen. You'll type the first sentence, and it will suggest the rest of the book. Now, are you going to want what it suggested? Maybe; we'll come back to scenarios where that might be the case. Probably not, but it's going to give you a suggestion. It's going to give you suggested chapters, suggested topics, suggested examples, suggested ways of wording things. You can already do this with ChatGPT. You can type in: look, here's my draft, here are five paragraphs I just wrote; how could this be worded better? How could this be worded more simply? How could this be worded in a way that people who are younger can understand it? And so it's going to be able to autocomplete in all of these very interesting ways, and then it's up to the human being who's steering it to decide what to do with that. And so this is kind of my point: is that a trick or is that a breakthrough? The answer is yes, to both. Yes, it's a trick. In other words, well, here's a critique of LLMs.
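The "fancy autocomplete" idea can be illustrated with a toy next-word predictor. To be clear, this is a minimal sketch of the concept only: real LLMs are neural networks trained over tokens on vast corpora, not word-frequency tables. But the core loop is the same: given the text so far, repeatedly predict a likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def autocomplete(model, prompt, n_words=3):
    """Greedily append the most likely next word, autocomplete-style."""
    words = prompt.lower().split()
    for _ in range(n_words):
        candidates = model.get(words[-1])
        if not candidates:
            break  # no continuation was ever seen in the training text
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Tiny made-up training text; real models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat sat by the door"
model = train_bigram_model(corpus)
print(autocomplete(model, "The", n_words=2))  # prints: the cat sat
```

Scaling this idea from "most frequent next word" to "most plausible next five paragraphs" is, loosely, the jump from this sketch to an LLM, and it is also why the hallucination critique discussed below applies: the model always produces *some* continuation, whether or not it is grounded in fact.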
And so there's this guy, Yann LeCun, who's a legend in the field of AI and who's at Meta, and he's been tweeting about this publicly; his Twitter account's gotten very active. And he's argued this is more trick than breakthrough. He argues basically, look, this thing autocompletes. It's like a puppy: it autocompletes the text it thinks you want to see, but it doesn't actually understand any of the things it's saying. It doesn't actually know who people are, it doesn't know how physics works, it doesn't know how math works. And it has this thing that's called hallucination in the field, where, if it doesn't have an autocomplete that's factually correct, it's like a puppy, it still wants to make you happy. And so it will autocomplete a hallucination, and it will start making up names and dates and historical events that never happened. You know, I know the term is hallucination, but the thing it always connects with for me is impostor syndrome. I mean, I don't know whether the humans have the impostor syndrome or the AIs do, but we're all just saying the thing that we think someone wants to hear. So this goes to the underlying question, which is: what do people do? And this is where things get incredibly uncomfortable for a lot of people to think about, which is: what is human consciousness? How do we form ideas? How do we decide? I mean, I don't know about you, but what I've found in my life is that, number one, a lot of people on a day-to-day basis are just telling you what they think you want to hear, right? And for a lot of people, it's just much more comfortable. Maybe more for you than for me.
Maybe, although, look, a lot of this is just walking down the street, or working with anybody in a professional context, or working with customers. Take buying a coat and then taking it back the next week and complaining to the customer service rep. The customer service rep is thinking, wow, this person is a real, you know, whatever. But they're not saying that. They're saying, oh, I'm so sorry, how can I help you? So life is full of these autocompletes as is. And then look, you see this in intellectual debate. I mean, you guys live this life. How many people are making arguments that they actually have conceived of and that they actually believe, versus how many people are making arguments in any intellectual debate that are basically the arguments they think people are expecting them to make? And you see this weird thing in politics, which you guys are kind of an exception to, where most people have the exact same sets of views as everybody else on their side on every conceivable issue, right? Right, isn't that a coincidence every time, how that happens? Every single time, right? And what do we know? We know that those people have not sat down and thought through all of those issues from first principles. And we know that they haven't all independently arrived at the same views. What's happened, of course, is a social reinforcement mechanism. People are saying what they think is necessary to fit into society. So is that actually any better than the machine essentially trying to do the same thing?
Like, I think it's kind of the same. I think what we're going to learn is that we're a lot more like ChatGPT than we thought, right? Well, there's another way to come at this, which is there's this thing called the Turing test. So Alan Turing, who I mentioned, early on created this thing called the Turing test. The Turing test was basically his attempt to... basically he said, look, let's suppose we develop what we think is an AI. Let's suppose we develop a program and we think it's smart in the same way that a person is smart. How will we know that it's actually smart? And so he proposed this thing called the Turing test. In the Turing test, basically, you have a human subject, and they're in a chat room with a human being and with a computer. And both the human being and the computer are trying to convince the subject that they're actually the real person and that the other one is the computer. And then the question is: is the computer as good as or better than the other person at convincing the subject that it's actually a human being? And so the theory there is that if a computer can convince you that it's a human being, then it effectively is AI; it's every bit as intelligent and sentient and conscious as a human. The obvious problem with the Turing test is that people are really easy to trick, right? We're super easy to con. And everybody's life is filled with people trying to con them. Every time you watch a TV commercial, they're trying to con you. Every time you talk to a salesperson. Three-card monte, a magician doing a card trick, a politician on TV, right? They're always trying to con you in some way.
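The setup described above, a judge questioning two hidden respondents and guessing which is the human, can be sketched as a toy protocol. This is an illustrative simulation with canned, invented respondents, not a real test of anything; the function names and replies are all hypothetical.

```python
import random

def imitation_game(judge, human, machine, questions):
    """Toy version of Turing's setup: a judge questions two hidden
    respondents and guesses which anonymous label belongs to the human."""
    # Hide the respondents behind labels, as in Turing's original game.
    labels = {"X": human, "Y": machine}
    transcripts = {label: [respond(q) for q in questions]
                   for label, respond in labels.items()}
    guess = judge(transcripts)  # the judge sees only labels and answers
    return labels[guess] is human  # True if the judge found the human

# Canned respondents, invented purely for illustration.
human = lambda q: "Hmm, let me think about that for a second..."
machine = lambda q: "Hmm, let me think about that for a second..."  # perfect mimic

# With indistinguishable answers, the judge can do no better than chance,
# so the machine "passes" roughly half the time.
coin_flip_judge = lambda transcripts: random.choice(sorted(transcripts))
result = imitation_game(coin_flip_judge, human, machine, ["Are you human?"])
```

The sketch makes Andreessen's objection concrete: the test measures only whether the judge can be fooled, so a gullible judge (or a judge with nothing to go on) tells you as much about the judge as about the machine.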
And we know that human beings are relatively susceptible to being conned. And so a computer that's good at conning you, is that actually AI, or is that just basically revealing an underlying weakness in what we think of as that which is profoundly human, which is that we're really easy to trick? And if we're that easy to trick, it's like, okay, well, how smart are we? And so, you see what I'm saying, there's no single vector of smart versus non-smart. It's more like, okay, there are certain sets of things that people can do better or worse, and there are certain sets of things computers can do better or worse, and the things computers can do better are getting really good. Another way to think about this: try Midjourney or DALL-E or these new AI art-generating things. These things now, at their best, are generating art that is, I think, pretty clearly, let's say... "aesthetically superior" is a very deep claim, so let's just say more beautiful. They're able to produce art that is more beautiful than all but maybe a handful of human artists can. You just objectively look at it and you're like, wow, that's beautiful. And two years ago, did we expect a computer to be making beautiful art? No, we didn't. Can it do it now routinely? Yes. What does that mean in terms of what human artists do? If there are only a few human artists who can produce art that beautiful, maybe we're not that good at making art. You see what I'm saying? Yeah. So you've been sort of using the language of humanity: humans are like this, humans have these attributes. But some of this, at least, is cultural. And the question that raises for me is one that I think is also kind of a political question.
How much does it matter where these AIs emerge? Like, should we care if AIs are American, or coming out of Silicon Valley, versus coming from another cultural place? So I think we should. Among the things we're talking about here is the future of warfare, right? You can see it in the self-driving car: if you can have a self-driving car, that means you can have a self-flying plane. That means you can have a self-guided submarine. That means you can have a smart drone. You have this concept now we see in Ukraine with the so-called loitering munitions, which is basically a suicide drone. It's a drone that's a suicide bomb, basically. Suicide being that it kills itself. It just stays in the sky until it sees a target, and then it zeroes in and drops a grenade, or it itself is the bomb. And this whole era... I mean, I just watched the new Top Gun movie, and they alluded to this a little bit in the movie. To train an F-16 or F-18 fighter pilot is, I don't know, seven, eight, 10, 15 million dollars. Plus it's a very valuable human being. And we put these people in these tin cans, and then we fly them through the air at Mach whatever. And they actually do a good job in the movie with the G-force issue, which is that the plane is capable of maneuvering in ways that will actually kill the pilot. And so what the plane can do is actually constrained by what the human body can put up with. And then, by the way, the plane that is capable of sustaining human life is very big and expensive, and has all these systems to accommodate the human pilot.
Look, a supersonic AI drone is not going to have any of those restraints. So first of all, it's going to cost a fraction of the price. Second, it can be much smaller, or, by the way, much bigger. It doesn't need to have the shape we associate today with a plane; it can have any shape that's aerodynamic, because it doesn't need to take into account a human pilot. It can fly faster. It can maneuver faster. It can do all kinds of turns and accelerations that the human pilot's body can't tolerate. It can process much more information per second than any human being can. It can make decisions very fast. And then, by the way, you're not just going to have one of those. At a time, you're going to have 10 or 100 or 1,000 or 10,000 or 100,000 of those things flying at the same time. These things are going to come out of the sky in swarms. There's one movie that has gotten this right. It's an action movie with Gerard Butler called Angel Has Fallen. In the opening scene, there's a terror attack against the US president in the movie, played, of course, by Morgan Freeman, who is fishing on this big, beautiful lake. He's got all these Secret Service guys around, and all of a sudden there's what looks like a big flock of birds coming from the distance. And it turns out they're suicide drones. And the movie does a really good job of showing what that kind of attack is going to be like. And this is all underway. This is all happening. This is all going to happen. This is just very, very clear. It's very obvious. It's going to happen very broadly.
You know, look, the nation-states with the best AI capabilities are going to have the best defense capabilities. And by the way, the DOD is already on this. The Department of Defense in the US has already declared that AI is what they call the third offset, which basically means the future of warfare, right? So they're going to completely rotate how they spend money. And so this is it. If there's ever, God forbid, a war between the US and China, this is the form it's going to take. And so, yeah, it's going to matter. It's like asking who had the atomic bomb. It's going to matter a great deal. Do you think... I think the national security implications are obviously huge. I also wonder about it from the other direction: will our AIs have American values or American personalities or something like that? I'm sort of wondering, not only about the effects, but also, is there a cultural component to the type of AI we're going to get? Or is it a math problem, and it's wrong to think about it as having this kind of human overlay? No, I think that's 100% right. And furthermore, the way I would think about it is: look at the fight that's happened over social media, both in the US and in China, over the last 10 years. In the US, there's been a massive fight over what values are encoded in social media, and what censorship controls and what ideologies are allowed to perpetuate, and so forth. There's been a massive fight on that in the US.
And of course there's a constant running fight on that in China, which is the Great Firewall, and they've got all their restrictions on what they will allow you to see if you're a Chinese citizen. And then there are these cross-cultural questions, this whole thing where people are wondering, well, TikTok is a Chinese platform running in the US with American users, especially American children, using it. A lot of people have theories, at least, that the TikTok algorithm is very deliberately steering US kids toward destructive behaviors, and is that some sort of foreign operation, a hostile operation? And so, to the extent that these were all big issues in this previous era of social media, I think all of these issues magnify by like a million times in this AI era. All of those issues become far more dramatic and far more important and far more profound. And the reason is just that user-generated content on social media is limited: people only generate so many kinds of content, they only engage on so many different kinds of issues. Whereas AI is going to be universal. AI is going to be applied to everything; it's going to be involved in everything. It's going to be integral to the healthcare system, to the financial system, to education; it's going to permeate education. And so it's going to be relevant to basically every field of human activity, where anybody can have an opinion about anything. And then, yeah. Is what you just described a case for early and cautious regulation? Is it a case for the impossibility of regulation? Where does that take us in terms of what lawmakers could or should be doing?
What would Reason magazine say about well-intentioned government? Well, actually, we have a debate issue coming up, and we have a debate on regulation of AI. And there are people who are, of course, deeply skeptical of governments who still say, well, maybe this is the moment for guardrails. Maybe at least we want to limit how states can use AI, for instance, even if we don't want to limit individuals. I'll make your own argument back to you, which I know you'll enjoy, which is: the road to hell is paved with good intentions. Everybody always wants... it's like, boy, wouldn't it be great this time if we could have very carefully calibrated, well-thought-through, rational, reasonable, effective regulation? Wouldn't that be great? Maybe this time we can make rent control work, if we're a little bit smarter about it. Your own argument, obviously, is that that's not actually what happens, for all the reasons you guys talk about all the time. There's an abstract theoretical argument for such a thing. We don't get the abstract theoretical regulation. We get the practical real-world regulation. And what do we get out the other side? We get regulatory capture. We get corruption. We get early incumbent lock-in. We get political capture. We get skewed incentives. And then years later, we're wondering, how could we have done that to ourselves? And it's because everybody in that moment said, wow, we really need some sort of well-intentioned regulation that we can't actually get. So, I mean, look, God bless. I don't know. They'll probably try to... And then this is the other thing: is it good or bad that our government is largely dysfunctional and, generally speaking, can't pass legislation? The libertarian in me says this is a case where gridlock is good. Look, the straight power politics version of this is: we can't even ban TikTok. American social media companies aren't allowed to operate in China.
The Chinese company is allowed to operate in the US with total impunity. You do look at the content on TikTok that's aimed at kids in the US, and you do wonder what's going on. And there have been repeated attempts to block TikTok in the US, and TikTok is still running in the US just fine. So I don't know. We can't even ban TikTok. We couldn't effectively regulate the too-big-to-fail banks. Color me skeptical, but yeah, I'm sure they'll try. You've talked a lot, you're on record talking about, the quite rapid process through which innovative tech startups become kind of grumpy, enmeshed incumbents, both with the state and more generally in their business practices. That topic has come up a lot recently with the Twitter files and the revelations of the ways that companies collaborated, willingly I think in many cases, but maybe with a looming threat as well, with government agencies around misinformation and other questions. It seems to me like we're going to be in for more of that, that this sort of blurring of the lines between public and private is our fate. Is that what it looks like to you? And if so, is that ultimately a thing that threatens innovation, or are there ways in which it could potentially speed things along? The textbook view of the American economy is that it's free market competition: companies are fighting it out, different toothpaste companies are trying to sell you different toothpaste, and it's a largely competitive market. And then every once in a while there's an externality that requires government intervention, and then you get these weird things like the too-big-to-fail banks or whatever. But those are the exceptions to the general working of the free market system. I can tell you my experience, having been now in startups for 30 years, is that the opposite is true.
Specifically, James Burnham was right that, some decades back, we passed from the original model of capitalism, which he called bourgeois capitalism (which is what we still think capitalism is), into a different model, which he called managerial capitalism. And the actual correct model of how the US economy works is mostly a process of oligopolies, cartels, and monopolies. It's basically big companies forming up into oligopolies, cartels, and monopolies and doing all the things you expect oligopolies, cartels, and monopolies to do. And then they jointly corrupt and capture the regulatory and government process, and so they end up controlling the regulators. And so most sectors of the economy are a conspiracy between the big incumbents and their putative regulators. And the purpose of the conspiracy is to perpetuate the long-term existence of those monopolies and cartels and to block new competition. So that's where I've come out. To me, that completely explains the education system, both K-12 and the college and university system. It completely explains the healthcare system. It completely explains the housing crisis. It completely explains 2008, the financial crisis, and the bailouts. It completely explains the Twitter files. I think that's precisely what has been happening in tech. And so if you're open to that interpretation of how the world works and how the country works and how the economy works, then a lot of things start to make a tremendous amount of sense. And I think the Twitter files are basically an x-ray of a specific instance of that happening. And this is just factual. What we now know from the Twitter files is that a very large number of people in government, some of them politicians and political appointees, but also some of them bureaucrats...
And so some of the members of the deep state, which, as we know, either does not exist or, if it does exist, is good, one or the other. I think most people... That's what I've heard. Yeah. That's what I've heard. There's a recent book on that that apparently makes that case very clearly. So let's just say the permanent bureaucracy. Or again, what James Burnham would call the managerial class in government, the permanent cadre of professionals in government who basically manage everything. Those people, and then also people on the outside whom they fund as their proxies with government money, with taxpayer money, have been exerting enormous pressure on Twitter to block, censor, dot, dot, dot, in what looks to me like a straightforward constitutional case of deprivation of constitutional rights, First, Fourth, and Fifth Amendments, in a way that is clearly illegal, both under the Constitution and under Title 18, whatever it is, Section 242. There's a specific federal law that says it's a felony for a government official to use the power of being in government to deprive citizens of constitutional rights. There's actually, by the way, another law, Section 241, that applies that same principle to private citizens, including private companies. And so I think it's possible that there has actually been criminal activity, both on the government side and on the company side. And yeah, that's been happening. And every new drop of the Twitter files shows that that's what's been happening. In a non-political world where we all just read the Constitution, this would be a constitutional crisis. This would be the biggest story in the country. There would be hearings. There would be immediate impeachment proceedings. This would be a five-alarm fire, because obviously the government can't be allowed to do this. In the real world, of course, that's not what's happening. 
And we're back to either denial or embrace, under the theory that this is good and proper. Right. Are there sectors that are less subject to that dynamic you just described, in which the startup quickly becomes the incumbent enmeshed with the state? I mean, look, my theory is basically that the question is always the same: is there actual competition? Actually, I think there's a deeper idea here, which is basically the process of evolution. The idea of capitalism is basically an economic form of the idea of evolution and natural selection and survival of the fittest: the idea that a superior product ought to win in the market, that markets ought to be open to competition, and that a new company can come along with a better widget and take out the incumbents because its widget is superior and customers like it better. And so for evolution to function properly, you need survival of the fittest, which means you need things to die when they're not the best thing. For capitalism to work properly, you need the same thing to happen. You need companies to die when they are inferior to other companies that are doing things better. That's inherent to the thing. And so the question always is: is there actual competition happening or not? Part of it is, do consumers actually have the ability to freely select among the existing alternatives? And then the other question is, can new products actually come to market? Can you actually bring a new widget to market, or do you get blocked out because the regulatory wall that's been established basically makes that prohibitive? I mean, look, the great example of this is banking, where the big thing in 2008 was we need to bail out these banks because they're quote-unquote too big to fail. And so then there were screams of the need to reform the too-big-to-fail banks, which led to Dodd-Frank. 
The result of Dodd-Frank, I call it the Big Bank Protection Act of 2011, is that the too-big-to-fail banks are now much larger than before. And the number of new banks being created in the US has dropped to zero, because it's now effectively impossible to launch a new bank in the US, because JP Morgan Chase has 10,000 lawyers working on their regulatory issues and you have one. It's not possible. You can't start new banks anymore. And so anyway, sorry, I'm repeating the case against. But the question is, where is that not happening? Where in the market is that actually not happening? Where is there free and open competition? Look, the cynical answer is that it doesn't happen in the spaces that don't matter. So toys: anybody can bring a new toy to market. It's fine. Sure, great, right? Anybody can open a restaurant, right? I would say "don't matter" meaning these are fine and good consumer categories that people really enjoy and so forth, but as contrasted to the healthcare system or the education system, or the housing system, or the legal system. If you want freedom, your business better be frivolous. I mean, that would be the cynical way of looking at it. If it doesn't matter from a societal-structure standpoint, in terms of determining the power structure of society and basically the power of the government in society, then yeah, go crazy, do whatever you want. But if it actually matters to major issues of policy, where the government is intertwined with them, then of course it doesn't happen there. And again, it doesn't happen there not just because of the government; it doesn't happen there because of an intertwining of the incumbents and the government. You know, look, this is one of these things where I almost have trouble debating it. 
I mean, not debating it with you, but debating it with people who argue with me on this, because I think it's so self-evident. It's like, why are all these universities identical? Why are all of the major universities implementing the exact same craziness? Why do they all have identical ideologies? Why isn't there a marketplace of ideas at the university level? Well, that becomes a question of, why aren't there more universities? And I can tell you why there aren't more universities: to be a new university, you have to get accredited, and the accreditation bureau is run by the existing universities. It's sitting there in plain sight. I'll give you another example. Why do healthcare prices do what they do? Why do healthcare prices work the way they work? A major reason is that they're paid for by insurance. There's private insurance and public insurance. The private insurance prices just key off the public prices, because Medicare basically drives the whole thing, because Medicare is the big buyer. So how are Medicare prices set? They're set by a unit inside HHS called CMS. And CMS runs literal Soviet-style price-fixing boards for medical goods and services. Once a year, there are doctors who get together in a conference room at a Hyatt in Chicago somewhere, and they sit down and they establish, they fix, they do the exact same thing as, what was the unit of the Communist Party in Russia that used to do this in the Soviet Union? There's a term for it. The central price-fixing bureau. The Soviets had a central price-fixing bureau. It didn't work. We don't have that for the entire economy, but we have it for the entire healthcare system. 
And it doesn't work, for the same reason that the Soviet system didn't work. And so we've exactly replicated the Soviet system, expecting better results. It operates in plain sight. You can go on CMS's website; they'll explain this all to you. It's operating in plain sight. Everybody thinks it's a great idea. And then lots of people are calling for increased government, centralized control and purchasing of healthcare, which would make that system stronger. And so it's like, we know that this is not going to work. We know that it's going to only result in restrictions of supply and rising prices. We know precisely what the outcome is going to be. We seem perfectly happy with it. And then we complain about it. So. Yeah. I want to pivot a little bit and ask, in this vein of everyone trying to build a system that does good, and they seem to want the government to do it all the time, but a little bit differently: what has made money for you or for your investors that you think has also done the most to make the world a better place? Oh, I mean, it's even hard to... I mean, we all have a narrative on that. Take your favorite among all your children. My narrative is spectacular. So look, basically, material progress only ever happens through technology. There are basically only two ways to make material progress on planet Earth. There are only two reasons why we're not all living in mud huts, doing subsistence farming all day, which is what we used to do, which is what our predecessors did. One is natural resource extraction, and the other is technology. Those are the only two levers on the world. Those are the only two ways to raise the standard of living. It's the only possibilities. 
And natural resource extraction is good up to a point, but at some point you need to do something with the resources, and so that immediately also becomes an application of technology. And so on the materialist side, which is the more obvious one, which is just standard of living: standard of living in the US is up like 8x in the last, whatever it is, 75 years, or some crazy number like that. It is true that the engine of technology routinely produces enhancements to standard of living. And the thought experiment is very easy, which is: notwithstanding whatever problems people have with current modernity or postmodernity or whatever we're living in, would you switch places with your predecessor 50 years ago, from a material well-being standpoint? And the answer to that for basically everybody is no. And of course, we also know that because you could build cities and towns today, if you wanted to, where people voluntarily didn't use technology made after 1950 or something. The Amish do that, but nobody else does. And so material standards of living have continued to advance. I think that's a perfectly fine, good argument. And then look, there's also the other side. In terms of my philosophy, there's a human freedom and flourishing aspect to it, the sort of intellectual or maybe even spiritual side. And there it's just: are people more free to learn and explore and express themselves and be themselves and find like-minded people? And discover their calling in life? 
And, if they're creating something, to bring their creation to market and find the audience for it or the market for it. The ability of humans to self-actualize: is that expanding over time or diminishing over time? And I think overwhelmingly the impact of the internet has been to expand it. Now, there is this giant war happening where lots of people are trying to shut aspects of that down and control it. But nevertheless, I think everybody would have to concede the internet has made it much more possible for a much broader range of voices, a much broader range of ideas. I mean, I read and see things on the internet today that, 25 years ago, you would have had to go deep into the card catalog at some giant university campus, if you could have gotten access to it, to find some book that was written in, like, 1820. And today it's just tap, tap, tap, Google search: oh, I can find out all about this, and then I can write about it myself. And so, yeah, people actually do have a much greater level of, let's say, personal freedom. Look, having said that, the war to try to control that and choke it off is also very real. What do you think is a better use of time and resources: trying to start a company, or giving away money to the worthiest cause? Oh, I mean, overwhelmingly the former. It's overwhelmingly the former. And this is an argument that very, very few people in our modern society, our modern, whatever, civil religion, are willing to make. But I'll make it, which is that production is a good in itself. 
The company that makes your toothpaste is doing something for you that is superior to what most people doing health philanthropy are doing. The production of the goods and services that make our lives better, and at the limit the goods and services that cure diseases and so forth: they generally come from for-profit companies. The fact that they make money on those things is just the reward for having improved the world. I think most of the actual material progress in the world, most of the improvement in human welfare, is done through for-profit production. So, there are people who say capitalism, building your company, is a necessary evil to generate the money with which we can then make the world better. I'm like, no, that's not the case. Most of the good you're going to do in the world is going to be through building your company. And then maybe you're going to find a way to do some philanthropy thing. Hopefully the philanthropy thing that you do is not going to make the world worse rather than better, which is often an interesting question. Here's the other reason I believe that so strongly. Again, let's assume some sort of properly functioning capitalism, with some level of effective competition in the market. In production in the for-profit system, products succeed or fail, they're adopted or they're not adopted, on the basis of whether people want them, and whether people think they make their lives better and want to engage in that trade. And so there's a direct reality test with every transaction as to whether this is actually something that people want and whether this is something that people think will improve their lives. 
There is this massive issue with philanthropy, which has always been an issue and continues to be an issue, which is that it's just not subject to a market test. And there are reams and reams of examples and studies of philanthropic programs that thought they were going to lead to some sort of beneficial outcome in some area, and then it turned out they backfired horribly. And there's an obvious one just staring us all in the face today, which is criminal justice reform. There's been this wave of philanthropists, many of whom I know, over the course of the last 20 years, who have spent an enormous amount of money putting in place a new set of politicians and a new set of political ideas that have resulted in letting the criminals out of jail, in the name of, you know, justice, and as a consequence crime is through the roof and it's not safe to be in the streets. But there's no market test. The people who have spent all of that money to let all the criminals out are not subject to the risk of all the criminals being out, because they all have, like, Navy SEAL teams protecting them, because they're all rich. And so they're able to just arbitrarily ruin society through philanthropic efforts with no corrective mechanism whatsoever. And so, if anything, in the world we're in today, the dichotomy between the actual good of production and the actual evil of the unintended consequences of untested quote-unquote philanthropy, I think those are gapping out in a pretty profound way. Again, it's hard for me to even talk about this with most people, because I just think it's kind of obvious. 
It's like, walk down the street in San Francisco: it's pretty obvious what's happening. And yet the level of denial among a lot of people I talk to about this is through the roof. So there's also the kind of, I mean, this sometimes shows up in the effective altruist critique, right? Where you say, well, why give to the opera when people are dying of malaria? That's sort of the previous effective altruist critique, I guess. What do you give to, if anything? Well, by the way, the malaria thing goes to the nature of the problem, the nature of the philanthropy problem, the nature of the effective altruism problem. Okay, fair enough: don't give to the opera because people are dying of malaria. That's okay. What do we do to keep people from dying from malaria? Well, we give them bed nets. The bed nets are treated with chemicals that prevent the mosquitoes from being able to spread malaria. Okay. But now we've given people nets that are treated with chemicals, and then they turn around and use those nets to go fishing, and the chemical treatment on the nets ruins the fishing environment, and now people starve. And it's like, by the way, maybe that happens and maybe it doesn't, but it's happening 8,000 miles away, in a way that you're never going to even know what's happening. And so these cosmic big-brain utilitarian EA ideas, that people are going to have an Excel spreadsheet where they're going to figure this stuff out, or they're going to sit on the campus at Stanford or Oxford or something and have these ideas that are going to translate into the real world... 
I mean, my favorite example, I'm sure you're familiar with this, but my favorite example I keep trying to get all my friends to read is the Moynihan Report from 1965. Daniel Patrick Moynihan wrote this report about the rate of single parenthood in the US in 1965, and a lot of social scientists will tell you a huge number of life outcomes at the very least correlate with whether you grew up in a two-parent or single-parent home. And Moynihan was like the leading light of the liberal progressive reform movement in 1965, with LBJ and the Great Society. And you have the full resources of the government, unlimited ability to consult with experts. And he wrote this report, and it was like, we have this huge crisis because of the percentage of unwed births, and as a consequence we're going to implement all of these incredible programs, both government programs and philanthropic programs, to fix this. And then fast forward, sitting here today, 50 years later, the percentage of unwed births has tripled. Right. And by the way, with no accounting, no accountability. He's dead. He doesn't have to live with the consequences. Nobody's looking at this. Nobody cares. Maybe you guys write a story on it, or Fox News, somebody does, but nobody cares. And those programs continue on autopilot forever. And so I just find this remote-control aspect to it incredibly disturbing. And again, I go back to: any legitimate for-profit business that operates in a truly competitive market cannot do this. The market will prevent them from doing this, because if they do this, the market will stop them. The market will say, no, I do not want you to do this anymore. I will not buy this thing anymore. 
Right. And you will go away. Right. And there's no equivalent pressure whatsoever on a foundation or a philanthropist. So anyway, what do we do? Specifically, we basically stick very, very close to home, and we do local social infrastructure. In particular, we do emergency medicine, specifically at Stanford. We basically fund Stanford emergency medicine, the whole program there, under a couple of theories. Number one: whatever you feel about our healthcare system, it is true that at three in the morning, when your kid has a fever of 105 or somebody has a serious problem, everybody's kind of equal at the ER at three in the morning. And so it is the ultimate backstop for medical care for everybody. And the other is that any given day, I can go sit in the ER and I can see the patients, and I can see the outcomes. I don't have to wonder whether this is a good idea or a bad idea; I can see the benefit. There's no remote-control aspect to it at all. It's right there; it's like 10 minutes from my house. And so that's the kind of thing we do, to at least try to live up to the spirit of trying to make the world a better place, but also trying to do it in a way where we can actually prove that we're doing that. You said, I think about 10 years ago now, that Bitcoin is as important as the internet was. And we've had a little time for that to play out. It's been tumultuous. But I guess on the theme of this-time-it's-different: how is that prediction looking to you? You've done pretty well financially with your investments in that space, but not everyone has. Yeah, well, so there's a bunch of things in there. 
So, just the thesis of what I wrote then, which I still believe. I wrote this piece as a New York Times column, back when the New York Times would run things that I write, which, by the way, in case you're wondering, is no longer true. They ran it at the time. And everything in there I still agree with. The one modification I would make: at the time, it looked like Bitcoin itself was going to evolve in a way where it was going to be used for many other things. We at the time thought it was a general technology platform that was going to evolve to make a lot of other applications possible, in the same way that the Internet did and the same way that the iPhone did and so forth. It actually turned out that didn't happen. Bitcoin itself basically stalled out. It basically stopped evolving. But what happened was a bunch of other projects emerged that sort of took that place. And the big one right now is Ethereum. It's sort of the next derivation. And so if I wrote that thing today, I would either say Ethereum instead of Bitcoin, or I would just say crypto or Web3 instead of Bitcoin. But otherwise, all the same ideas apply. Look, the argument I made in that piece that I 100 percent believe today is basically, and I'll use the terms kind of interchangeably, crypto, Web3, blockchain: they're what I call the other half of the Internet. It's all the functions of the Internet that we knew we wanted to have when we originally built the Internet as people know it today and the Web as people know it today, but all the aspects of being able to do business and have money and do transactions and have trust. We did not know how to use the Internet to do that in the '90s. And now, with this technological breakthrough of the blockchain, we know how to do that. 
We have the technological foundation to be able to do that. And so the idea there is basically as follows: you have a network of trust, digital trust, that is overlaid on top of the Internet, which is an untrusted network. Anybody can pretend to be anybody they want on the Internet. Anybody can get on the Internet and do whatever they want. It's famously untrusted. Crypto/Web3 creates layers of trust on top of that. And then within those layers of trust, you can represent money. But you can also represent many other things. You can represent claims of ownership. You can represent everything from house titles, car titles, insurance contracts, loans. You can represent claims to digital assets. You can have things like unique digital art. You can have a general concept of an Internet contract: you can actually strike contracts with people online that they're actually held to. Just an example: you can have Internet escrow services. So for e-commerce, when you have two people buying from each other, you can now have a trusted intermediary that is Internet-native, that runs an escrow service. So basically, you can build on top of the untrusted Internet all of the capabilities that you would need to have a full economy, in the broadest definition of that. In other words, a full global Internet-native economy. And that's a giant idea. The potential there is extraordinarily high. And so yeah, we're midway through that process. We have funded, and there are lots of others, but there are tons of really smart entrepreneurs who are basically going after every aspect of what I just described. A lot of those things have worked. Some of those things haven't worked yet, but I think they're going to work. Look, having said that, every new thing we do is, by definition... it's called venture capital for a reason. These are ventures. You can add the prefix "ad": these are adventures. 
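To make the escrow idea concrete: what Andreessen describes is, at bottom, a small state machine, funds are locked with a neutral intermediary and released only when an agreed condition is met. Here is a minimal sketch of that logic in Python; on a real blockchain this would live in on-chain contract code rather than a Python class, and all names here (`Escrow`, `confirm_delivery`, the parties) are illustrative, not any real protocol's API.

```python
from enum import Enum, auto

class State(Enum):
    AWAITING_PAYMENT = auto()
    AWAITING_DELIVERY = auto()
    COMPLETE = auto()
    REFUNDED = auto()

class Escrow:
    """Toy escrow: holds the buyer's funds until the buyer confirms
    delivery, or the designated arbiter orders a refund."""

    def __init__(self, buyer, seller, arbiter, price):
        self.buyer, self.seller, self.arbiter = buyer, seller, arbiter
        self.price = price
        self.held = 0
        self.state = State.AWAITING_PAYMENT

    def deposit(self, who, amount):
        # The buyer locks up the full purchase price with the intermediary.
        assert who == self.buyer and amount == self.price
        assert self.state is State.AWAITING_PAYMENT
        self.held = amount
        self.state = State.AWAITING_DELIVERY

    def confirm_delivery(self, who):
        # Only the buyer can release the held funds to the seller.
        assert who == self.buyer
        assert self.state is State.AWAITING_DELIVERY
        payout, self.held = self.held, 0
        self.state = State.COMPLETE
        return self.seller, payout

    def refund(self, who):
        # Only the arbiter can return the funds if the deal falls through.
        assert who == self.arbiter
        assert self.state is State.AWAITING_DELIVERY
        payout, self.held = self.held, 0
        self.state = State.REFUNDED
        return self.buyer, payout
```

The point of the sketch is that neither party has to trust the other, only the escrow logic itself; moving that logic from a company's server into a public smart contract is what makes the intermediary "Internet-native" in the sense described above.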
The big question people always ask us is, are you speculating? And I'm always like, well, I don't even know what speculating is. Am I investing in something without knowing how it's going to go? Yes. Is that speculation? Yes. Is that every investment? Yes. We're investing in a future unknown state. Sometimes these things work. Sometimes they don't. Sometimes the prices go up. Sometimes the prices go down. We never make price predictions. We never predict that the Bitcoin price or whatever is going to be X or Y or Z at any given point. I have good friends who still haven't gotten the message on this, and they still call me up and say, Marc, what's the Bitcoin price going to be next month? And I'm like, beats me, man. Go look in the chicken entrails. I don't know what to tell you. So the prices whip around through the process of the market trying to establish value. Crypto looks more volatile than a lot of other categories just because the assets list publicly much earlier. When we back a private company, it usually doesn't go public; if it does go public, it goes public seven or eight or ten years later. Crypto tokens start to float much earlier than that. So it's almost as if startup equity were being traded in the public market from the very beginning. Of course those prices would be highly volatile. A critique of venture capital is that we present to our investors what looks like artificially suppressed volatility, because the prices of the things that we invest in, if they were actually trading on the public market, would be whipping all over the place. But we just hold them privately. We mark them once a quarter. We don't even know what they're worth then; we just let the accountants make some guess. Whereas the crypto assets trade publicly, so they look more volatile. 
I actually don't think they're more volatile than the other stuff that we do. I think this is just the nature of doing things that are speculative. We never recommend people invest in anything. If people want to invest in crypto assets, they should, but they should understand that the level of volatility is going to be very high. And I think the same thing would be true of anything, quite honestly, that they invest in. I want to talk about the kids these days, since you mentioned unwed pregnancies. And those numbers, certainly teen pregnancy numbers, are down quite a bit recently. Some people take that as evidence that maybe the kids are not all right: they're on their phones all the time and they're not having human interactions or something. And so it's bad news that the kids aren't having sex and doing drugs. How are the kids these days? Are the kids all right? You sort of mentioned earlier that education is one of those sectors that is very, very locked down, very limited in terms of innovation, hard to break into. So how are the kids doing? And if not well, what can be done? Well, some of the kids are pretty messed up. I don't know if you've been noticing, but some of the kids seem to have some issues. So I will concede up front, there do seem to be some problems. There are a whole bunch of things in there. First off, I'll note the enormous irony. When I was a kid in the '80s, there was this constant moral panic, constant media pressure: the kids were totally out of control. It was literally, the kids are all on drugs and the kids are all having casual sex and the kids are basically spiraling into nihilism and self-destruction. And so I will concede there is a lot of irony in sitting here today. 
And now there's this other critique, from at least the same kind of person, basically saying, oh, the kids aren't doing enough drugs and having enough sex. And I'm like, okay, well, what do you want? You told us to stop doing those things. And we did. Okay. So look, there are deeper questions underneath that. The birth rate question: I'm sure you know this, but birth rates are crashing all over the world. That's the most interesting thing about the birth rate phenomenon, and by the way, I think it's great that Elon has been talking about this in public and really elevating it, because it's a really big question. Birth rates are crashing in the US, birth rates are also crashing in Europe, and they're also crashing in Japan, and in Korea, and now in China, and they're crashing in Iran. And so whatever is happening is not happening because of quote-unquote whatever American, this-that-or-the-other thing, or even Western culture, or at least not just facially because of whatever is unique or especially different about Western culture. This is a broad-based phenomenon. I'm not an expert in the thing, but the best analyses I've read seem to indicate that it's more or less a natural consequence of a combination of much longer periods of education coupled with much higher levels of female engagement, both in the education system and in the workforce. And if you fully empower everybody, both men and women, to fully self-actualize, in the form of basically unlimited amounts of education coupled with unlimited professional opportunities... 
And basically, it's like what I was saying earlier: you just make life more interesting for people, in a way where they're perfectly happy running through their 20s and 30s without the obligations that come with having children. And then of course, by the time they decide they want to have children, if they're in their mid or late 30s, it might already be too late. So I just go through that to say: I don't even know, is that good or bad? I don't know. Is it Western or Eastern? It seems to be both. I mean, the China one is fascinating. The numbers just came out; the birth rate in China is crashing even faster than expected. It looks like they may already have passed their demographic peak, where from here on out the Chinese population may only age. And by the way, to people who think we have a problem with the birth rate: the Chinese have a real problem. Contrast Japan and China. Japan has a lot of old people, but it's an extraordinarily wealthy society because of all the economic development that happened between 1950 and the present; it's basically a society as economically advanced as ours, or maybe even more so. China's not there. China's still in the middle-income zone, sort of $5,000 per capita GDP or something like that. There's no economics on earth that can explain how they're going to take care of all their old people in the time frame they have to get ready for that. So, yeah, this seems to be a big issue. I mean, look, are we going to reverse any of this? Are we going to go back to a model where people don't have all these years of education? By the way, maybe we should; we're probably not going to.
Are we going to deprive women of the professional opportunities they have, go back to an era where they don't have those opportunities? Probably not. What else? Are we going to become less socially progressive and elevate motherhood more as a social good? Probably not. That doesn't seem to be in the cards. Another possibility here is: are we going to have scientific breakthroughs? Everything we've talked about up until now has sort of assumed the natural progress of human birth and reproduction, the old-fashioned way. Are we going to have cloning of embryos? Are we going to have external gestation machines? Are we going to have embryos created from stem cells, such that you could decide at 60 to have a new baby, have the embryo cloned off your stem cells, have the baby gestated in a tank for nine months, and out the other end comes a baby? Maybe. Are those technologies being developed now? Yes. Are they going to freak people out? Yes, definitely. Are they necessary for the perpetuation of the species? Quite possibly. Yeah, so my temptation always is to say we are going to technologically innovate our way out of our sociocultural problems. And it obviously doesn't always work like that. But I mean, you've said the future relies on 19-year-olds and the startups that no one's ever heard of yet. And those 19-year-olds are going to have to come from somewhere, right? Maybe we don't need to go all the way to growing them in a vat from stem cells.
But it might help if what we need is kind of peak weird 19-year-old to get through this, right? Yeah, look, there's that question. And then there's another giant question coming up right behind that, which is the big genetic engineering question. So, for example, this is not on the horizon right now, but there's very cutting-edge work happening in the genomics field to identify all the genes that are associated with intelligence. We're already up to something like 200 genes that correlate to about half of IQ. So let's assume we're moving into a world in which we're increasingly going to be engineering babies, which I think is a real possibility for the reasons we just discussed. Are we also going to be optimizing them? Is there going to be a CRISPR treatment that you can just dial up and order in 10 or 20 years, where you can give your stem-cell-cloned embryo a 20 or 30 point IQ boost? And so maybe on the other side of this are these sort of super babies, where we come out the other side and we've technologically engineered not just the perpetuation of the species but an upgrade of the species, where all of a sudden the babies are all coming out much, much smarter, and as a consequence they're able to do all kinds of things better through their lives. So maybe there's a technological utopia on the other side of this. On the other hand, maybe this is some sort of catastrophic moral, philosophical, civilizational collapse, where we've given up on the inherent spirit of what it means to be human.
Maybe we're just in terminal decline, and we're going to use technology, this is sort of the cyberpunk view of it, as a Band-Aid to cover up what otherwise is a hard crash. I don't know. I will say these issues are real, and they're forming up. Everything you and I just talked about is all real; it's all going to happen. And these are very big questions. And either people do not want to talk about them today, or they totally freak people out and people get really mad. So I have yet to read or hear, I guess what I would say is, a clear, dispassionate, non-ideological, full summation of all this. People are bringing a lot of priors into this conversation. It's a little hard to... Yes. Yeah. Are those sectors where you think there's currently the right amount of investment, insufficient investment, too much investment because there's hype? Where are we there? Yeah. So the real answer to the question is that there are actually two kinds of technology investment, and they're very different. The term that gets used is research and development, but really those are two different things: there's research, and then there's development. The research side is basically funding into scientific research, and what that means is funding really smart people pursuing really deep questions around technology and science, such that they may not have any idea yet of what commercial relevance it has, what kind of product could get built on it, or even whether the thing can work.
And so it's the classic concept of basic research. And then there's the other side, which is what we do: the development side. The way we think about it is, by the time we fund a company to build a product, the basic research has to be finished already. There can't be open basic research questions, because otherwise you have a startup where you don't even know whether it will ever actually be able to build the thing. But it also needs to be close enough to commercialization that within five years or so you can actually commercialize it into a product. So anyway, it's fairly well understood that those are the two sides of things. That formula worked really well in the computer industry. There were 50 years of basically government research into information science and computer science, during and after World War II, and that translated into the computer industry, the software industry, the internet. By the way, that also worked in biotech: the NIH had this big, sustained biomedical research program that resulted in the biotech industry, which most recently resulted in things like the mRNA vaccines. So that worked. Those research programs continue; those are the two main areas of, I think, actual productive research happening, where we can get results out the other side. Should there be more funding in basic research? Almost certainly there should be. Having said that, the basic research world has a very profound crisis underway right now, which is what they call the replication crisis: it turns out that a lot of what people thought was solid basic research is actually basically fake, and arguably fraud. And so, among the many problems that our modern universities have, there is a very big problem where much of the research they're doing does seem to be fake.
And so, would you recommend more money be put into a system that's just generating fake results? No. Would you argue that you do need basic research to continue to get new products out the other end? Yes. So that problem ought to get dealt with, and maybe it's starting to get dealt with, very slowly. On the development side, I'm probably more optimistic. Generally, I would say we don't lack for money, which is to say the venture capital industry, my industry, the funding of startups and new tech: we as a sector have plenty of money. There are plenty of investors who want to invest. The companies seem to get perfectly well funded. I think basically all the good entrepreneurs get funded; all the good companies get funded. I don't think there's a shortage of money there. The main question on that side of things is not so much the money. The main question goes back to our question about competition and how markets work, which is: in what fields of economic activity can there actually be startups? Specifically, for example, can you actually have education startups? Can you have health care startups? Can you have housing startups? Can you have financial services startups? Can you do a new bank, a new online bank that works in a different way? For those fields where you would want to see a lot of progress, the bottleneck is not whether we can fund them. The bottleneck is literally whether the companies will be allowed to exist. And so on that side of things, that's what I would focus on. I think there are sometimes places where you might have said, listen, it's settled wisdom that you can't have a startup in this area, and then it turns out you can. And I'm thinking here of space; I'm thinking, to some extent, of some subsets of education; and of course I would also put crypto in this category.
How can you compete with money? And then here we are in a quite robust competitive market that is trying to compete with money. How do you go from "competition doesn't look very possible in that space" to someone trying it? Yeah, so SpaceX is probably your best-case scenario. You mention crypto also as an example, but let's just take SpaceX. Talk about a market that's dominated by the government and has regulations to the moon, literally to the moon. It's one of these things where I don't even know the last time anybody had tried to do a new launch platform. And then the idea that you're going to put all these satellites up there; there are massive regulatory issues around that. And then, on top of that complexity, Elon wanted the rockets to be reusable. He wanted them to land on their rear ends, which is something that people thought was impossible. All previous rockets were basically one shot and done, whereas his rockets get reused over and over again because they're able to land themselves. And so, yeah, SpaceX climbed a wall of skepticism its entire way, and he basically just brute-forced his way through it, and he and the team made it work. So that's the best-case scenario. There is a playbook that he used, and now that he's done that, to give him full credit, we've studied that playbook, and we're attempting to replicate it in other fields. The big thing we talk about there in our business is just: that is a much, much harder entrepreneurial journey. What the entrepreneur has to sign up for to do that, and the risks that are involved, are just much harder than, for example, starting a new software company. And so it's a much higher bar of competence that's required, and much higher risk.
You're going to lose more of those companies, because they're just not going to be able to make it; they're going to get blocked in some way. And then you need a certain kind of founder who's willing to take that on. And that founder looks a lot like an Elon Musk. Or, by the way, it looks like a Travis Kalanick, or it looks like an Adam Neumann, or, in the past, it looks like a Henry Ford. This requires an Attila the Hun, an Alexander the Great, a Genghis Khan. To make that kind of company work requires somebody who is so smart and so determined and so aggressive and so fearless, and so resistant to injury of many different kinds, and so willing to take on just absolutely cosmic levels of vitriol and hate and abuse and security threats and all the other crazy stuff. And so, speaking of growing people, we need more of those people. I wish we could find a way to grow them in tanks. We don't know how to grow them in tanks. We spend a lot of every day in our day job trying to find those people. We call it the problem of the missing Elons. We need 10 more Elons, and then we need 100 more Elons, and then we need 1,000 more Elons. And I can just tell you there are not many Elons running around. And people have, as you well know, all kinds of reactions to Elon. Now look, Elon's been very successful, he's made a lot of money and so forth, but he puts up with a lot of shit from a lot of people on a lot of topics that would cause most people to melt into a little puddle. And so this is like the highest engagement. I mentioned the Top Gun movie. Tom Cruise once said this great quote: he said there are only four professions that are really fully masculine, that real men should actually pursue. And he said it's fighter pilot, rock star, movie star, and president of the United States.
So he's done two of those, or three if you count the movies, depending on how you count. Certain other people maybe have done more of those. Maybe the fifth is high-octane entrepreneur operating in a regulated industry who is just determined to punch through the walls. Category number five is Elon. And so, anyway, this is the prize. In our world, this is the prize. We wake up every day hoping the next Elon walks through the door. Generally, we get very smart entrepreneurs walking through the door. They're really good at what they do, but they're not the next Elon. Every once in a while they ask us what it would take to be the next Elon, and I describe what it would take, and they get this increasingly disturbed look on their face, and then they kind of decide they don't want to do that. Is this kind of the classic "if you have to ask the price, you can't afford it" situation? Well, then there's this question, to keep going with the Elon example, of nature versus nurture. Was he born that way? Was he trained that way? For sure, he's super smart; there's a big nature component to that. He's super disagreeable, maybe the most disagreeable person on planet Earth right now, and there's a big nature component to that too. But then look, he's articulated his life story; there are plenty of things that happened to him that caused him to develop an incredibly thick shell and to be incredibly determined and to not want to give up on things.
And so a lot of that probably came from his culture-related experiences traveling through life. And then, look, part of it is just simple, flat-out determination. He just applies himself with a level of effort and focus, every day, working around the clock; it's really amazing to watch. I mean, we're used to workaholics, and he's at a whole different level. So, I don't know, could you take more people who have the right stuff, in terms of being smart and disagreeable and intense, and train them up better? Probably. We try; that is kind of what we try to do. That said, there's something special going on there. Why do you think there is this kind of special category of obsessive anger that's directed particularly at the entrepreneurial billionaire? I mean, we're talking about U.S. senators tweeting "billionaires should not exist," right? It's very acceptable to say; it's a widely shared view. Where does that come from? Why is that a meme in our culture? Yeah, so I think it's all in Nietzsche, to be totally honest. I think it's what he called ressentiment, which is basically the toxic blend of resentment, envy, and bitterness. And it's sort of the cornerstone of modern culture. It's the cornerstone of Marxism. It's the cornerstone of progressivism. It's this cornerstone kind of thing, this resentment of the people who are better off than us, across different societies. Christianity too, right?
Yeah, Christianity: the last will be first and the first will be last; it's easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of God. Christianity is sometimes described as sort of the final religion, the last religion that can ever exist on planet Earth, because it's the one that appeals to victims. And the nature of life is that there are always more victims than there are winners, so the victims are always the majority. And so the one religion that captures all the victims, or all the people who think of themselves as victims or want to identify as victims, that religion by definition captures the majority. Among lower-class societies, social scientists sometimes refer to a phenomenon called crabs in a bucket, where in a lower-class environment, if one person starts to do better, the other people will drag them back down. You see this as a big problem in education in lower-class environments: one kid starts to do well, and the other kids start to bully him until he's no better than the rest. In the Scandinavian culture I come out of, there's a term, tall poppy syndrome: the tall poppy growing in the field gets its head cut off. The Japanese have their own version. Again, it's almost a cross-cultural thing; it's a very deep human nature thing. At the very deepest level, it's basically just envy and resentment. And the nature of envy and resentment is that they're very satisfying feelings. Resentment is like a drug. Resentment is a very satisfying feeling because it's the feeling that lets us off the hook.
If I can resent somebody else, then it's not my fault. It's their fault, not my fault. I'm not bad; they're bad. And if they're more successful than I am, it just proves that they're worse than I am, because obviously they must be immoral, they must have committed crimes, they must be making the world worse, they must be a destructive force. So yeah, it's very, very deeply wired in. I guess I'll say this: the best entrepreneurs we deal with have no trace of it at all. The best entrepreneurs we deal with would think the entire concept is just absolutely ridiculous. Why would I spend any minute thinking about whatever anybody else has done or whatever anybody else thinks of me? That's crazy. I'm just going to make progress in my own life today, and the rest of what people think doesn't matter. But, you know, Nietzsche called it slave morality, the morality of the slave. And of course, the irony of slave morality is that in the modern world it is taken on primarily by people who are not actually slaves. It's people who are choosing to have the morality of the slave, even though they are actually free to not do that. And if you're immersed in that world, or you've been brought up in that world, or you've been trained into that world, or it's reinforced your natural inclination to be in that world, it's really hard to hear the message that says: no, actually, the things that are going wrong in your life are not because other people are doing better; it's because you're screwing everything up. Who wants to hear that? What are you reading, watching, listening to? And what do you think?
What newish book is being slept on right now? What's a newish book that is good and underappreciated? Let's see. Well, I mentioned Burnham, so I can't help myself. For anybody wanting to really understand the thing we were talking about earlier, the different models of capitalism, there are two really key James Burnham books. One is called The Managerial Revolution, which describes the evolving shape of capitalism through the 20th century, and which I think is probably the best explanation for how we got where we are. And then there's another book called The Machiavellians, which is about the structure of politics and society in the democratic world. Those are really good. On Nietzsche: I've actually been reading my way all the way through both Nietzsche and Schopenhauer, which is very interesting. But there's a small book on the topic of the resentment we were just talking about. There was a philosopher in the 1910s and 1920s named Max Scheler who wrote a book called Ressentiment, R-E-S-S-E-N-T-I-M-E-N-T. It's a short book; it's on Amazon. He basically describes Nietzsche's theory of ressentiment, which we just talked about, and then elaborates on it. You read that book and you're like, oh my God, yes, I'm surrounded by people just like that. So there's that one. What else? Let's see, give me a second here. Have you run into John Murray Cuddihy? No. You'd be into this guy. So I just read his two books. Very obscure; they're both online, you can get them both on the Internet Archive. They're sort of post-Nietzschean, actually on the same topics. They're basically on the topic of what happens when different societies clash, when societies encounter each other.
And some societies are more developed than other societies, some societies are more materially prosperous than other societies, and basically: what is the reaction that happens from that, and what are all the different coping mechanisms and changes that happen as a result? His two books, from the 1970s, are both fantastic, so I'd recommend both of those. More recent books? Okay, I'll give you two that I think are flip sides of some of the very interesting topics we talked about earlier, two books on the past and future of humanity. The culture book is by Joseph Henrich, who wrote the book on what he calls WEIRD; the book is called The WEIRDest People in the World, and WEIRD is an acronym for Western, Educated, Industrialized, Rich, and Democratic. Henrich is the leading anthropologist of his generation, and he came up with this very interesting breakthrough, which is a way to actually critique different cultures without being called racist or bigoted or xenophobic. The way he did it was to come up with this acronym WEIRD to describe our culture. It's this little mechanism he uses: who can get mad at him for calling our culture weird? He sounds like he's being self-critical. But what he's actually doing is a very interesting X-ray of the process of cultural development, both in the West and elsewhere. It's really interesting. And there are still huge cultural differences all around the world that are still very relevant, which he goes through in a lot of detail. It's probably the best work of cultural analysis written since, I don't know, probably the 1950s, or maybe the 1930s, because he figured out a way to actually talk a little more openly about it.
I've read that one, and I really did not think when I opened it up that I was going to be learning that much about cousin marriage. But it turns out that's a big one in there. Cousin marriage. Cousin marriage. Okay, so we'll do a minor spoiler. Cousin marriage turns out to be a really big deal, because cousin marriage is the mechanism for the perpetuation of a tribal society. The way that tribes perpetuate and maintain the coherence of the tribal identity and the tribal feeling is basically through cousin marriage. They have enough intermarriage among cousins that the barrier between the tribe and the rest of the world remains intact. And then the theory he goes through is basically: what happened in the West that caused it to diverge, with the Enlightenment and modern economic development and scientific development and everything else that has happened in the last 500 years? The thing that caused the West to change and become what it is today is that the Catholic Church banned cousin marriage in Europe, which it did, in his theory, for its own self-interest. It turns out they wanted the money; they wanted the bequests. The Catholic Church did not want money to stay in families. It wanted the families to fracture over time so that the money would go back to the church. But the result of that, in his argument, was the change in human affairs that led to Europe transitioning from being a tribal society to what it is today, which you could call an enlightened society, or you could call an atomized society. It led to the rise of essentially the modern concept of the individual, which is still an alien concept in a lot of tribal societies.
So that was really interesting. And then the flip side of that, on the genetic side, which is super interesting, is this book called Who We Are and How We Got Here, by a Harvard professor, David Reich, who runs the leading lab doing this work in a field of archaeology called ancient DNA. It turns out we now have the science to find a skeleton that's thousands of years old, and the scientists actually know how to extract the DNA and analyze it. And this is a huge breakthrough in archaeology for various reasons. And so Reich and his lab at Harvard are actually building out the actual human family tree, the actual genealogy of the human race; that's why the book is called Who We Are and How We Got Here. It literally is the actual genealogy of the human race: where all these people came from, how we all descended, all these questions, and then how all the populations developed and how they intermixed or didn't intermix and so forth. I just found it absolutely fascinating; it feels like an X-ray machine into the ancient world through modern cutting-edge science, and I completely did not expect that. Oh, and then I'll add one final book, which is the best book on cults that I've ever read, and which is also relevant on the same thread: The Ancient City, written in the 1860s by a French scholar. And it's actually interesting: he used a literary method. This guy in the 1860s apparently read all the original Greek and Roman literature, all of it. And then he derived from it what human civilization was like before the Greeks and Romans.
And so, what it was actually like to be in a family, to be in a tribe, to be in a city in, say, 2000, 3000, 4000, 5000 BC, and then the changes that happened over time, and then what actually happened when Greece and Rome arrived. And if you read those three books together, you get the cultural analysis side, you get the archaeological analysis side, and then you get the literary analysis side, all painting a picture of what the world used to be like, which is not like the world is today. And it's actually been very helpful for me in understanding what the world is like today as a derivation of how people used to live. Anyway, among other things, The Ancient City talks about the critical role of the cult, the cult literally being a religious cult, literally being the foundational model for human life prior to what we now know as religions. And so, also, if you find yourself in the business of wanting to create cults, it turns out to be a good book. You know, there are days it's tempting. So, at that same point 10 years ago, when you were making that prediction, I, a perhaps naive libertarian, had a lot of hopes that these technologies would finally be the thing that took a bunch of our commerce and a bunch of our networks of trust outside of the realm of the state, or at least another degree removed. And I think some of that was punctured quite quickly, like this idea that Bitcoin is anonymous, which was just never true. But I wonder what your sense is of whether ultimately there is still that kind of liberatory element to these technologies. Is there good reason to think that we will live in a freer society when some of these things come to fruition? Or do we take the cynical realist's view that the regulators always have their way?
Yeah, so for economic freedom, I think you can make that argument quite strongly; political freedom, maybe, maybe not. The economic freedom argument is the same argument you make about the internet, which is: are people more economically free on the internet than they used to be in the old world of just having stores? And I think the answer is clearly yes. People have the ability to offer goods and services much more broadly, they have the ability to buy from many more places, transparency about what people are buying is much greater now than it used to be, and reputation matters more than just whatever happens to be on the store shelves. And then people can transact in categories of goods and services that maybe would have been hard to get in the past. So there's a whole bunch of economic freedom arguments. Has that translated to political freedom? So the state, there's a couple of things, right? The state is sort of obvious, but just to make the argument: the state does have one fundamental thing that's just hard to get away from, which is it has territory, like it has physical territory, and then it has the ability to tax on that territory. And so to the extent that you need to live anywhere, you're going to be living someplace, and that place is going to want its share of your economic output. And the tax collector is going to show up, and when the tax collector shows up, he's going to want to get paid in his currency, right? And so again, by having a body, precisely, exactly. So if you're in the US, the tax collector is going to want US dollars. If you're in France, the tax collector is going to want whatever currency they use these days. And so I do believe there are all these theories of what makes money money.
And one of the theories I like is John Law's theory, which is that money is that which people trade with. The theory there is that people will always find some thing, whether it's cowrie shells or scrip currency or US dollars or diamonds or art or Bitcoin, that they'll trade with, right? So money is a facilitator of trade; I do believe that. But having said that, there's another theory of money, which is that money is that which the sovereign taxes you in, because you need to get some of whatever that is. Even if you want to live in an all-Bitcoin world in the US today, at some point you need to convert to dollars, because you need to pay your taxes, and the IRS does not take Bitcoin. So I kind of think those are both pretty solid theories. And to me, those argue for both sides of it, which is: yeah, you're going to have systems that facilitate trade that are not necessarily the US dollar or the euro or the yen. You're going to have online alternatives to those, and you do today. And look, Bitcoin has been trading and being used for transactions continuously, every minute of every day, since 2009. Right. And so those are going to exist. But at the same time, yeah, we do have physical bodies, at least for now; we're going to be rooted someplace, at least for now; the tax collector is going to show up, at least for now. You know, look, there are big outstanding regulatory issues right now in the whole space. And here we're back to the original regulation discussion that we had, which is that there's tons of push and pull. There are lots of people who think that there ought to be more regulations in this space, and there are tons of people who think that there shouldn't be.
There is the theory of enlightened regulation that will achieve the results the regulators intend. And then there's the reality of the unintended consequences that always come with it. We're back to that exact same dynamic in crypto regulation, sitting here today, both in the US and around the world. And then the other way to think about this: the surface-level question is always just, well, aren't they just going to ban it? Right, like, just ban Bitcoin. And then it's like, well, how do you ban Bitcoin? It's an algorithm, which means it's a mathematical formula. The source code for it is available for free. Anybody can download and run that source code on their computer. For a government to ban Bitcoin, they are literally putting themselves in a position where they're banning math, they're banning code. Like, they can try. But the level of tyranny required to ban math is, I think, way in excess, practically speaking, of what governments are either capable of doing or actually want to do. Like, the North Koreans can probably do it. I don't know that our government would be allowed to do it, even by people who think that this stuff should be regulated. So, yeah, look, I think there's going to be some running room for innovation here. I also make a very different argument on this, which a lot of people respond to, which is: look, this is a technology of the future. At least, for those of us whose day job it is to identify technologies of the future and help build them.
Like, this is one of those. Everybody in my world who's really smart is like: yes, this is a technology of the future. What exact shape it takes is an open question, but this is a big deal. Thirty years from now, this is going to be a really big deal. And a lot of the smartest computer science people in the industry have gone into this field and are working hard on making it work and making it better. And then it's like, okay, if there is going to be a technology of the future (and here it's back to the question you asked about AI earlier), would we like that technology of the future to be built and developed and housed in the United States? Or would we prefer to drive it offshore to other places? And actually, Rishi Sunak in the UK just came out with a very supportive policy statement, essentially saying that if US regulators don't want this stuff in the US, everybody should just come to the UK and do it there. So maybe there will be an escape hatch to the UK, or maybe to Singapore or someplace like that. Or, I don't know, people will come to their senses and understand that this is something the country should actually invest in and support. Or there will be attempts to regulate it and they just won't work well. Those are all open questions right now. This was great. Is there anything else that you think Reason readers should know? What message do you have for the people? Yeah, I mean, I don't know. Here would be something: I would say, try the new stuff, right? Like, try it. Find a friend who has a Tesla and take it for a ride with the full self-driving turned on. Or log on to ChatGPT online, create an account, and try it.
Or, you know, just try it. Most of these things you can just go online and, either for trivial amounts of money or no money, try and use. There are just oceans and oceans of commentary on this stuff (not you guys, but from lots of other sources), and for most of it, we out here just shake our heads: these people haven't even used this stuff. They're debating shadows on the cave wall. You know, this is a longstanding point that we make about guns: there are a lot of people who have never shot a gun who have a lot of opinions about guns. Every time you read about, like, automatic pistols, it's like, okay, go shoot a pistol. Hold the trigger down. Let's see what happens. Yeah, exactly. So, look, the tech industry used to be top-down. The tech industry used to create products for the government first, then big companies second, then small companies third, and then regular people fourth. In the last 20 years, that's reversed. Now everything new that matters comes out for people first; the consumer markets lead. And then, years later, big companies and governments start to figure out how to deal with this stuff. And so anything new and interesting is something you can literally just log on and try. The water's warm on this end of the pool. The stuff is actually pretty cool. Thank you for talking to Reason. Yeah, awesome. Thank you for having me.