"I think the truth is we understand a lot less about how the things around us work, and understanding them does not mean you can replicate them."

It goes without saying, but let me say it anyways: the world is an incredibly and increasingly complex place. Modern life is built on a fragile web of technologies that make things easier by taking the need for expertise out of our hands, allowing each of us to use more of our time the way that we want to. Think about navigation, the simple act of getting where we want to go. Throughout history, this has been a substantial challenge, whether you're talking about navigating by the stars or even just reading a map. But today, most of us are using Google or Apple to do that for us. We tell it where we want to go, and it gives us step-by-step directions to get there. On today's show, we're going to talk about that reality on a grand scale, and what it really means when we say (and we didn't coin this phrase) that no one's driving. But before that, let me introduce myself. I'm Adam B. Levine, and this is Speaking of Bitcoin. As always, I'm joined by the other hosts of the show: Stephanie Murphy. Hi there. Jonathan Mohan. Hey, hey. And Andreas M. Antonopoulos. Hello.

So Stephanie, you and I have been talking about this a pretty decent amount. Once you get us started here, what are we talking about?

So I was reading a series of articles a while back by Tim Maughan, who's a tech writer on Medium, and it's called No One's Driving. That's his phrase. And I just thought it was an interesting thing to talk about, because he's talking about the kind of systematization of the world that frees up individuals' time, but ends up building these networks of systems, in some cases interconnected and layered on top of each other, that are just too complex for any one person to understand, really. And so if something breaks, obviously, the caveat with that is: who's going to be able to fix it, and who's going to even be able to understand how it's broken? Because we are increasingly relying so much on artificial intelligence and these really complex algorithmic systems to run things. And as examples of this, he cites not only the financial industry, which is what we talk about on the show, which has tons of bots and algorithms, whether you're talking about traditional finance or cryptocurrency, executing thousands of trades per second in some cases (and does anyone really understand that, or can we wrap our minds around that as humans?), but also the global shipping and supply chain, which is actually connected with the financial industry too. For example, the author says that he spent some time years ago investigating how supply chains work by living on a container ship. And he said the captain of the ship was getting these emails that told him to slow down the ship or speed it up. And they were automated emails, right? There was no one really in charge of that, but he was following the algorithm's directions unquestioningly. And you can assume this was a couple of years ago, so it's only gotten more complex since then. And of course, we saw just this March the Ever Given, a container ship that became lodged in the Suez Canal, blocking one of the major shipping routes of the world, and the ripple effects that had on global supply chains. And the ship only just departed the canal, I guess just this week, because they finally hammered out a compensation agreement.
And meanwhile, there were people that were kind of stuck there. It just shows how fragile these systems are, and how it really is true that no one is driving. And so is this a problem? Is it the same as decentralization and free markets? Or is it a dystopian algorithm that nobody really understands that's controlling everything? I thought we could talk about that.

I find it fascinating how all of these things are related to some fundamental concepts in mathematics and computer science that have to do with our ability to predict systems. These systems are based on rules; they're mostly deterministic systems. And yet we know from Gödel's incompleteness theorems as well as chaos theory that we cannot predict complex systems that have feedback loops. And in some cases, our inability to predict these is based not simply on complexity, so faster computers don't solve the problem; there are some things that simply cannot be predicted. The question is, do we really need to predict and understand these systems? Or is it sufficient that these systems can be tuned and dynamically adjusted to balance out? And I think that's a really important concept. In fact, it's one that's been played out in science fiction a lot. You often have this trope of a modern person going back in time and then trying to rediscover discoveries or reinvent inventions that they learned about. So if you were dropped in the middle of a medieval village, how could you use the knowledge of a modern person to improve life for yourself and others, or maybe even, from an ego perspective, seize power and appear as a magical god among your fellow humans? And the truth of it is, as much as we like to think that our high school education in physics and chemistry and things like that would allow us to rebuild modern civilization from scratch, I think the truth is, one, we understand a lot less about how the things around us work, and understanding them does not mean you can replicate them. And the other aspect of it is that these systems do not exist in a vacuum. Many of the things we have today depend on these vast webs of logistics and supply chains to bring different materials together and then process them in certain ways that build up complexity over many, many thousands of steps. And you can't build all of that from scratch. It takes an entire planet working together over perhaps decades, centuries even.

Yes. When you were talking about a person going back in time to medieval times and trying to recreate modern life, I was just thinking, could I even begin to create my own cell phone or computer? And I was thinking about what I would need to do in order to do that. And then I was just like, yeah, there's absolutely no way.

Forget even a cell phone. Think something much simpler: a clock.

A pencil, even. This is what I was thinking of. Nobody knows how to make a pencil. There's a favorite essay of mine by Leonard Read, which I think was written in the 1950s, called I, Pencil. And it's written about how no one person knows how to make something as simple as a pencil. The graphite comes from Sri Lanka and the wood comes from some other forest, and then it all has to be coordinated to be put together and painted and shipped. And it really requires this almost magical-seeming self-organization of something like a free market, which is what Leonard Read is writing about.
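A minimal sketch of the unpredictability described above, for anyone who wants to see it concretely: the logistic map is a one-line deterministic rule with a feedback loop, a standard textbook example (not anything cited in the episode), and in its chaotic regime two starting points that differ by one part in two hundred million diverge completely within a few dozen steps. No faster computer fixes this; any rounding in the starting point eventually dominates the answer.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), a fully deterministic
# rule with a feedback loop. At r = 4 it is chaotic: nearly identical
# starting points produce completely different trajectories.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # baseline starting point
b = logistic_trajectory(0.200000001)   # differs by one part in 2e8

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  gap {abs(a[n] - b[n]):.6f}")
```

By around step 40 the two runs share nothing, even though both followed the same exact rule from almost the same start.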
But I don't think these complex systems that we have today are really like that free market that Leonard Read was writing about. I think they're more of a top-down type, where we're relying on these algorithms at the top that no one really understands, but everyone is sort of obeying. So that's what I really wanted to talk about with this topic.

Yeah. And the systems that we don't understand are becoming less easy to understand. Until about a decade ago, many of the systems that we built were rules-based systems. They had some kind of analytical process, basically a series of if-then-else statements. And with those systems, we can see how one thing leads to another. Increasingly, though, over the last decade, with the introduction of more and more so-called AI systems or machine learning systems, we train a neural network on some data set, and through these deep learning models they are able to perform certain tasks. But the problem is that the biases of the data set infect our machines. And we've seen this happen again and again, where various biases that existed in the data set, because the data set was collected and curated by humans in the first place, show up in our machine learning models.

Yes, like we've talked about on the show before: the example of those Amazon hiring algorithms that were skipping over women because few women had been hired in the historical data set.

Many who work in computer science ethics talk about implicit bias around race, gender, nationality, language, and so many other things. But the secondary problem here is that we can't look into the model and understand how it's making decisions. We are now gradually losing the ability to do introspection of these systems. We can't see how it works behind the curtain, because what works behind the curtain is simply these massive numeric matrices of weightings on a neural network that were developed by deep learning. And we don't know how the answers emerge. It's impossible to really understand the inner workings of these systems. So the only way you can discover edge cases or biases or the points at which they fail is through experiential exposure: you basically run it until it fails. We're seeing this in things like self-driving systems, for example, where you don't know how it's going to react to a circumstance that hasn't happened yet until that circumstance happens. And you can only test so many things; reality has too many free variables. And so not only are we building systems that we don't understand, we are now building systems that cannot be understood. And these systems themselves do not understand how they reach conclusions.

Right. And this is the whole thing about the Fourth Industrial Revolution: the pace of technological change is accelerating. And so you combine that with these systems that now are not even understandable. Is there going to be some kind of crash, just like with the self-driving cars? Are we going to find out the hard way that there can be problems that can't be fixed because nobody understands them? And I'm curious, from a social perspective, is this leading somewhere, like to a movement to simplify our lives or to get back to understandable technology? Are we going there eventually? I think yes. And I think that maybe we already are, or some of us already are.

I think not.
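To make the rules-versus-weights contrast above concrete, here is a toy sketch. The weight values are made up purely for illustration (no real model works with two numbers per layer; production networks have millions or billions), but the structural point holds: one system's decision traces to a legible branch, the other's is just matrix arithmetic.

```python
import numpy as np

# A rules-based system: every decision traces to a readable branch.
def rules_based(income, debt):
    if income > 50_000 and debt < 10_000:
        return "approve"
    return "deny"

# A toy neural network: the "reasoning" is nothing but these numeric
# weight matrices. Hypothetical values, chosen only for illustration.
W1 = np.array([[0.7, -1.2],
               [0.3,  0.9]])
W2 = np.array([1.5, -0.8])

def neural_net(income, debt):
    x = np.array([income / 100_000, debt / 100_000])  # normalized inputs
    hidden = np.tanh(W1 @ x)                          # opaque intermediate state
    return "approve" if W2 @ hidden > 0 else "deny"

# Both produce an answer; only one can explain itself.
print(rules_based(60_000, 5_000), neural_net(60_000, 5_000))
```

Ask "why was this applicant denied?" of the first function and you can point at a line of code; ask it of the second and the honest answer is "because the dot products came out that way," which is the introspection problem in miniature.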
I think we've seen in every successive generation of technology this concern that the technology is progressing too fast for our ability to absorb it and survive as a species. And what we've seen, nevertheless, through crashes and disasters and failures of technology on a massive scale, again and again and again, is that somehow we make it through. Now, maybe next time we're not going to be so lucky. But I think all of the movements that have tried to go back to a simpler past come back to one fundamental problem, which is that the present we have today is the only status quo, the only system, that scales to allow our species to survive at the level we survive on this planet, at this scale. So going back necessarily means not being able to support the species on this planet. You can't take the technology of 100 years ago, or even the technology of 30 years ago, and use it to sustain a species numbering 7.8 billion. And you can't use the technology of 30 years ago to survive catastrophic events, for example, a pandemic. So I don't really see a practical option to do that. If we as developed Westerners have this romantic appreciation for a simpler past, the problem is that that is kind of a position of privilege, because all of the nations that haven't developed their economies and their quality of life to the level we have are saying, well, why not give us a chance to go there too?

I'm not talking about becoming a Luddite. I'm just saying that, you know, in some areas of life, maybe it is possible to cut through the overwhelm and get back to something that is graspable, right? For example, we all know that the internet is overwhelming, or can be overwhelming. It's a fire hose of information. Social media and search algorithms, like on YouTube and other search engines, are sometimes able to shape our thoughts. And so you can choose to consume less of that, perhaps, and maybe it makes things a little bit clearer when you push some of that overwhelm to the side.

Yeah, I was finding it much easier to attack the straw man argument of you being a Luddite, Stephanie. So now you've made it a bit harder for me. But I'll agree 100% with you, because I think we all find ways to do that. For example, one of my rules for vacations is either massively reducing my connectivity or eliminating it completely. A no-device vacation. And that's a welcome opportunity to get away from it all. I think we can find moments like that for sure.

Okay, so hold that thought. We're going to be back in just a minute after the break. And we're going to talk about all of this, plus a Douglas Adams quote that's one of my favorites, which speaks exactly to this issue, and then kind of take it into tangible real-world terms on both the low-technology side and on the high-tech, scary, sci-fi technology side. Back in a minute.

So Douglas Adams said: "I've come up with a set of rules that describe our reactions to technologies. One, anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works. Two, anything that's invented between when you're 15 and 35 is new and exciting and revolutionary, and you can probably get a career in it. Three, anything invented after you're 35 is against the natural order of things."

Love Douglas Adams.
He has so many good quotes, but that one's really good because it talks about this idea that as complexity escalates, at some point we will hit a point where it's just too complex and we can't deal with it. And what this really does a good job of illustrating, I think, is that it's not about us as individuals. We as individuals have the different technologies that we're ingrained in, that we are native to. But as a civilization, that problem doesn't happen, because people are constantly being born, and so we're constantly in this state of having some large proportion of the population that is very comfortable with the new technology, that feels native to it. And so it allows us to kind of continue that. Now, if you ask my mother to put together something on Google Docs, she's probably not going to do it. She's more of a 1990s gigantic-brick-of-a-laptop type of person. But I am completely native to that. But if you ask me to write something by hand, you're going to get something that looks like it was scrawled by an idiot, because I can't write to save my life, but I type really fast. So you get this kind of interesting generational nativity, if that's the right word (it's probably not), that I think does kind of play into this. What do you think?

Sure. For all of history, the Douglas Adams quote has been totally true. Everyone has been saying, these kids today, I just don't understand their technology. But I think that if we combine this with looking at it through the lens of the Fourth Industrial Revolution and the accelerating pace of technological change, it feels to me like maybe we are at kind of an inflection point, where in the past it was possible to get with the program and catch on, more than it is now. Because from this point forward, things are just going to be changing so fast that our human brains are going to have difficulty catching on and understanding. And maybe we don't need to understand everything. Of course we don't need to understand everything, right? That's what we were saying at the beginning of the show. The benefit of these technologies and these complex systems is that they free up people's time, so you don't have to worry about understanding how to manufacture your own cell phone or how to grow your own food or how to make your own clothing. Of course, if you did all that yourself, you would be spending your whole life just taking care of basic needs, and then no progress would happen. But I guess it's just uncomfortable to think that there are systems controlling things as important as global shipping and supply chains, especially when you consider the context of the pandemic and the last year, and the financial system of the world, the traditional financial system as well as the crypto financial world. And there are very few people, or maybe no one, who understand them, and certainly no one who understands the whole thing fully. It's uncomfortable. And I admit that.

I like the idea that different generations within our species are experiencing technology from different perspectives. But I think it's also true, as I believe William Gibson said, that the future is already here, it's just unevenly distributed.

Yeah, that is Gibson.

And so different parts of the planet are experiencing different parts of technological advancement too. And I think in the end that's the saving grace, which is that we're not a technological or futurism monoculture.
We have massive amounts of decentralization of experience, both generationally and geographically, that allows some people to beta test stuff on a small scale before it's rolled out on a bigger and bigger scale. And by the time it is rolled out on a bigger and bigger scale, a lot of testing has happened. And I think that keeps us safe from the kind of catastrophic collapses that one might imagine from rogue systems with unintended consequences. But then again, I'm always a techno-optimist. So if the AI becomes too complicated for us to understand, I think what we do is we build an AI governor to run it.

Are you joking about that? I hope so.

I'm totally joking about that.

Okay, good.

That's a parody of techno-utopianism, which is: any engineering problem can be solved by a slightly more complex engineering solution.

Did you guys hear about this infamous fat finger trade, where somebody pressed a wrong button? Okay, here's an article I found about it: Citigroup has officially lost the court battle to recover some $500 million that was accidentally transferred to investors in a Revlon debt deal. And it was called the fat finger trade because it was basically a mistaken transfer, and they went to court to try to get it back, but the court said they can't get it back. Usually when something like this happens, if it happens to an institution that's big enough, they can get a bailout from the government or perhaps some help from the established financial system. But do you think that increasing automation in these complex systems prevents things like this from happening? Or is it because of these complex systems that something like this fat finger trade happened?

Well, I think it's indicative of how little pull Citibank has in the Biden administration that the ruling went the way that it did. It's unfortunate that they weren't Goldman Sachs or any of the other institutions that are well stocked on K Street and in the White House. So if anything, the answer to the question, how many lobbyists does Citibank have in Washington? The answer is: not enough, clearly.

That is wildly cynical, Jonathan. Thank you. I appreciate that take.

So on the subject of cynicism, I've been paying attention to drone technology for a long time, and there are a lot of really cool things about it. We've had a conversation a number of times in past episodes, although not particularly recently, about this idea of users on one side versus operators on the other side. You can think about this with a car. If you are driving a car, you have to know how to drive a car. We're not even talking about stick shift versus not, but you have to understand the laws of traffic. You have to understand how to get from where you are to where you are going using the tool, because you are operating a car. If you are a user of a car, or of any service for that matter, imagine a driverless car: instead of knowing how to operate the car, all you need to do is indicate your objective, and then the technology figures out how to make it happen. So you don't have to know how to use the car. You just have to know how to tell it what you want, and then it will figure it out for you. When you're talking about drones versus conventional aircraft, it's much the same thing. A drone, these hexacopters and quadcopters, effectively flies itself. And when you're using one, what you're doing is telling it: I want you to go over here.
And then it's figuring out all of the complex aeronautics: which motors it needs to spin up, how exactly it needs to angle in order to accomplish that goal. And this was brought into sharp contrast with where we are right now with military technology. For a long time, drones have been very attractive to use because they don't put human lives at risk for the people who are deploying them as a weapon. So earlier this month, the Israel Defense Forces, or the IDF as they are called, deployed an AI-controlled swarm of drones as a way to track down insurgents or terrorists within certain parts of an area they were operating in. We're not going to get political about this; this is not about that at all. But what it is about is this idea that you can take these systems and actually put them into life-and-death situations. And then you can basically say, well, actually, this is an improvement, because effectively what they've said here is that one person controls a swarm of drones by indicating where they should go. And then the drones communicate amongst each other to share intelligence, to share what they can see, to then figure out who the bad guys are. And it is, again, nominally controlled by one person, who has a commander standing next to him in case any hard questions come up. But they've also said that if the connection is lost, if someone jams the connection, or if the location where the command operates is destroyed, then the drones can basically act autonomously. And so this is very much a first in terms of a public acknowledgement of drones and AI being used in a warlike fashion to actually deploy weapons. In this case, they were spotting for mortars. But again, it's a very small jump from where we are right now with this technology to, well, there are guns on these things, right? Or the thing is a bomb. And we've seen stuff like that in the past, although typically it's been someone actually flying it rather than it being AI controlled. So again, I think this just puts it into really sharp contrast. When we're talking about these autonomous systems, autonomy means that it does not rely on a human, which means on the one hand, you do not have humanity involved at all. And on the other hand, you don't have any sense of accountability, right? If a drone does something wrong, who goes to jail for that? Is it the programmer who designed the software in the first place? Is it the person who deployed it to the environment where it then did something illegal? How do you even go down that path? I don't know the answers to any of these questions, but I think that in this topic, this is kind of the end goal, right? We talk about Skynet every once in a while because it's hilarious. But what's different? This feels like that's it.

All right, Adam, what we're going to do is the first part in a 27-lecture series on Kant and moral agency.

So I look forward to doing part one of 27. Thank you, John. I do want to say that what you're saying reminds me very much of my second favorite Onion article. In January after the Trump election, The Onion put out an article about a predator drone leaving a dead child in front of the steps of the White House. And the headline was: Predator drone leaves Iraqi child at front step of White House, hopes it brings back its master.
And it's because these things have been going on for a decade. They're called signature strikes: the Bush and then Obama administrations indiscriminately killed people on the basis of algorithmic parameters and didn't even give the kill order in the execution of it. If you recall, the Obama administration killed (and I'm not being political because it's Obama, it's just that that's who did it) an American without due process. And then a week later, killed his 16-year-old son. And the White House's official defense was that they so indiscriminately kill people on the basis of a signature strike that they weren't aware that this noncombatant 16-year-old child that they murdered happened to be an American, and happened to be the child of the other American that they indiscriminately killed the week before. And that was the defense of signature strikes. This is not a defense of it, it's a condemnation of it, but this has been happening for over a decade.

So from a complexity standpoint, again, it feels as if the arguments in favor of this have to do with this escalating arms race that we have. And when I say arms race, I'm not even talking about weapons or about war specifically, just the general rush of technology to give us additional capabilities that provide advantages in whatever situation we are in, even if it comes at the price of complexity that we can't predict and, frankly, aren't even really trying to control. You think about this and draw it all the way back to, say, the move to automatic transmission in a car versus manual transmission, right? I know how to drive a manual car only because I had a manual car that I had to learn how to drive. Otherwise, I would not know how to drive a manual car. And most of the people I know in my age group do not know how to drive a manual car. That's not that different in terms of this removal of agency, right? But it's so narrowly focused: the determination about when the car should shift its gear up or down. Taking that control out of someone's hands feels like there are probably repercussions that could come of that. If the decision is made incorrectly, then it could have terrible repercussions. Imagine if an automatic transmission decided that, hey, the best idea right now while I'm driving on the freeway is to shift into reverse. That would be a very, very bad outcome, but it would also be a very localized outcome. And it's also a much simpler system than much of what we're talking about here. And so in practice, you don't really see anything like that. We know how to automate that without having adverse effects. But where is the line between, on the one side, the drones that indiscriminately kill based on the signatures that you're talking about, and on the other side, well, it's okay for logic to determine when the car is going to shift? How do we find that line?

Yeah, this is where I think we can benefit from some simplification, which allows us to find balance. And I'll explain what I mean by that. When you were talking about these weaponized drones, Adam, it terrifies me. But when you were talking about this, I was thinking of this saying that some people love to say, which is that guns don't kill people, people kill people, right? You have to have a person who's operating the gun who makes the decision. But in this case, I think there are some who would want to flip that saying around and say: it's not that "drones don't kill people, people kill people."
No, the drones really do kill people, right? Because that lets them deny responsibility. But no: if we simplify this, a person, or groups of people, had to come together to make this technology, which then kind of took on a life of its own and maybe became autonomous, right? But it started with people. And people do have the moral responsibility for what the technology they created does.

And I think, going back to the stick versus automatic transmission example, Adam, that you gave: earlier this year, I drove up Mount Washington in New Hampshire. And this was really cool. But the thing is, when you're going down the mountain, you can't just rely on your automatic transmission. You have to shift the car into the lowest gear possible. Otherwise, you're going to burn out your brakes, and it's better for the car and more efficient if you go into manual mode and shift into the lowest gear as you're going down the mountain. When I had to do that, it was one of the only times I've had to use manual mode in my car. But I was really glad that I had that mode, and that I understood it enough to understand why it was important to shift into a low gear going down the hill. And so I think that as long as we have a little bit of basic knowledge that we can turn into basically common sense, and as long as there's someone who understands the system, or at least parts of the system, and they can put their knowledge together with people who understand other parts of the system to collectively wrap our heads around it, we're going to be fine. And that's what I mean about finding balance. Obviously, it's scary and dangerous if these systems are completely incomprehensible and we don't even know where to begin. But if there are certain specialized individuals who understand at least some parts of the system, and they can put their heads together with others who understand other parts, we can probably still keep a grasp on the technology, and repair it if needed or if things go wrong, or steer the direction of where it goes in the future.

So I promise at some point in the near future I am going to stop talking about inflation so much, but I have to bring it up here again. Last week, we talked with John Williams, the economist from ShadowStats.com, which basically continues tracking inflation and unemployment and GDP and all these other statistics using the original government methodologies, the ones that have since been changed over time. This allows you to make apples-to-apples comparisons between the data that we saw in the 1970s, a period of historically high inflation, we are told, and today. And spoiler alert, for those of you who didn't listen to that one: inflation today is basically as high as it was during the height of the 1970s if you use the same methodology. And the only reason we don't actually see that today in the official measures is that they have changed the way it is measured, but they continue to pretend that it's an apples-to-apples comparison, when actually the change pulls about 8% of inflation out. Okay. So the part about this that is interesting to me is that if you look back at the rationale that led to this distortion in the way we measure these statistics, it actually comes back to this idea of automating cost-of-living adjustments for the purposes of Social Security and government pensions.
In 1972, the US Congress passed a rule that automated cost-of-living adjustments, which are intended to keep the value of the money someone is being paid, if they receive something like Social Security, from degrading as inflation eats away at it. It was passed in 1972 and went into effect in 1975. The reason they did that was because otherwise Congress had to continuously vote on this stuff, and with inflation running high at the time, the thought was essentially that it's a losing proposition either way: either we don't give them a cost-of-living increase, and people complain that the value we are supposed to deliver to them is degrading, or we do give them a cost-of-living increase, and now we are voting to spend more money that the government doesn't necessarily have on this entitlement. So by automating it, you effectively take it out of the control of Congress, and you make it so that it's just something that happens based on the technology. It happens based on the best data that we have; that's what should determine it, not people. Now, that worked for a couple of years, but you get to 1982, just seven years after this went into effect, and the compound result of that change is that payments to Social Security recipients had gone up by 94.4% in just a seven-year period. So how did they solve that problem, which obviously was a problem? Basically, they were faced with a choice they couldn't get around. On the one side, you unpeg it, you un-automate the system, which is an acknowledgement that it's giving people too much money because inflation is running high. Or you let it continue to be automated, and the automation winds up sucking up all the money that is coming into government to spend on many things, of which Social Security is only one. An impossible decision from a political standpoint. So instead, they cheated. They changed the way that inflation was measured so they could pretend that the automated, technocratic approach was still actually performing its function, when in reality it was stealing money from people on a compounding basis over the next 40 years. As you can tell, this is a little bit of a pet peeve for me right now, as I've been digging into this. But that, I think, is the other thing: even in these automated systems, even in these systems that are designed to take the control out of our hands, what we see is that when it really matters, and especially when there are politics involved, they just fake it. They just change the rules without telling anybody, or they just change the way that things are calculated to get the result that they want anyways, while pretending that it still rests on some technological or otherwise impartial expertise, when in reality it is simply the guise of impartiality being put on top of a problem that is politically challenging to solve. And that right there again gets me back to this whole drone thing. The removal of responsibility from the people who are operating these systems, irrespective of whether they pull the trigger or not, is the most dangerous thing that I can imagine. I realize I've taken us down a dark path here, but that's my big concern about this. On the one side you get the technocratic stuff, when it produces the result that they want. On the other side, you get whatever it is that they want, by whatever method is necessary to generate that result.
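A quick back-of-the-envelope check of the compounding described above, assuming, purely for illustration, a uniform annual adjustment (the actual year-by-year adjustments varied):

```python
# If payments rose 94.4% over the seven years after automation took
# effect, the implied average annual cost-of-living adjustment is the
# seventh root of the total growth factor 1.944:
implied_cola = 1.944 ** (1 / 7) - 1
print(f"implied average annual COLA: {implied_cola:.1%}")   # ~10.0%

# Compounding at that rate reproduces the cumulative figure:
total = (1 + implied_cola) ** 7 - 1
print(f"cumulative increase over 7 years: {total:.1%}")     # 94.4%
```

Roughly 10% a year, compounding, is what made the automated formula politically untouchable by 1982: each year's adjustment was applied on top of all the previous ones.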
So it's a lose-lose, basically, for people who are just trying to play by the rules, irrespective of the technology you're talking about. I don't know. I think I'm going to answer my own question here and say that I'm really comfortable with technology playing minor roles in systems that we can understand and that we can see. But as we get deeper into this stuff, there's just so much that could go wrong here. There are so many problems that could emerge, and so many ways to obfuscate who is truly responsible, that the lack of accountability, and the damage that I think is done to the process of accountability, makes me very, very concerned.

Yeah, I agree. That's why I think we need human oversight and human accountability. Even if we automate the nitty-gritty processes to technology, there needs to be that human element in there keeping it human.

Yeah, which is why I fundamentally have a problem with Bitcoin's distribution scheme. What we need is a person at the top. An algorithmic distribution scheme is just too dangerous for something as important as money.

To end this on a positive note, I think this will truly be the golden century of dystopian sci-fi, so lots of good books to read. But I do actually think that we haven't delved into this topic as deeply as we can, which is the consequences for moral agency of this concept of "sufficiently decentralized": all the good ways that it's being applied, like with Bitcoin, and all the horrible ways that this concept can be applied. Thinking about it: great, sufficiently decentralized in the context of money means no one's responsible for the money system. But at what point could that concept and that structure be applied as a valid re-implementation of a Nuremberg defense?

Yeah, you're raising a really good point, Jonathan. I'm glad you brought this up, because I wanted to point out that the series that inspired this show for me was No One's Driving. He wrote a whole series about it, and at the end of the series, it was a really bleak ending. It was basically like, well, we're all screwed. We can't understand what's going on. Our world is being governed by these unknowable algorithms, so I guess we should just give up, you know? And I didn't want to accept that conclusion. I really wanted a happier ending, and I was hoping maybe we could bring some more nuance and positivity to this concept. So maybe we can talk about this again in the future, but I thought it was a good conversation for now.

Yeah, I agree. That was a good conversation, and certainly a topic we'll be revisiting. It seems like something that won't be going away anytime soon. That's a wrap on another episode of Speaking of Bitcoin. Today's episode featured Andreas M. Antonopoulos, Stephanie Murphy, Jonathan Mohan, and myself, Adam B. Levine. This episode featured music from Jared Rubins and Gertie Beetz. If you enjoyed this episode, send me an email at adam@speakingofbitcoin.com, or just leave a review on your favorite podcast player. And until next time, thanks for listening.