Boom, what's up everyone, welcome to Simulation, I'm your host, Alan Sockian. Super excited to be talking about centrism, nuance, and so much more. We have Will Papper joining us on the show. Hello. Happy to be here. Thanks so much for coming on. Absolutely. Super excited for this episode. For those that don't know Will's background, he's a product manager and philosopher focused on how new technologies can help or harm society. You can find the links in the bio below to Papper.me as well as the LinkedIn and Twitter profiles. Will, let's start things off with one of our favorite questions to ask our guests. What are your thoughts on the direction of our world? Thoughts on the direction of our world. So I think we're at this point where we have this debate: are we at a point where we're stagnating, or are we at a point of optimism? Some people say, hey, we have self-driving cars coming about. We have incredible innovations in computing power. We have machine learning. These are all amazing things that will dramatically change society. And then other people say, we've already peaked in our productivity. We're starting to hit this point of decline. Robert Gordon, for example, wrote The Rise and Fall of American Growth. His thesis is that since 1970, we've actually been declining in our productivity, and that 1870 to 1970 was the peak period of productivity in U.S. history. So I think people's views right now on the state of the world depend on whether they take this optimistic or pessimistic approach, with respect to whether things are changing in great ways or leading to stagnation and turmoil. I think that's where a lot of the current divisions in society are coming from.
And it does seem as though the disconnection that we have from the ecosystem that sustains us, this planet, how it gives us the air, the water, the food, the resources that we need to live, is directly reflected in the fabric of the society that we live in, in how dysfunctional some aspects of it are. Do you feel like that disconnect from the ecosystems we live in is a reason for the dysfunction? Yes. I think that you have people with two dramatically different worldviews trying to make decisions on the same things, where if you have an approach of stagnation, your view on, for example, universal basic income will be very different, or your view on global warming will be very different, compared to an optimistic approach. And I think that that is where a lot of the crosstalk happens, where people miss each other's points because they have these different worldviews that they're coming to the table with. And I don't think that's acknowledged enough. Yes, yes. Okay, so then maybe one of the strategies would be to unpack people's worldviews in nuance-driven ways, where you can take a complex issue, like the ecosystems that we live in and how they're becoming dysfunctional because we're disconnected from them, or something like a universal basic income to solve some of the wealth inequality issues that we have, and instead of being extremely polarized on it, how would you recommend people have a more nuance-driven conversation? I think the reality is, the thing that I learned from philosophy is that if anything seems simple, you're probably missing a lot. Anything simplified is probably missing crucial components, and extreme simplification is not the way to go. So I think that if you think you're totally right, you should probably reevaluate. That's kind of my heuristic: these tradeoffs are very complex.
No one really has the answers to the stagnation versus optimism question, let's say. So that's what I use as my own personal marker for myself: if I think I'm completely right, if I can't think of arguments against what I'm saying, I'm probably missing something. Yeah, I love that. That's a really good way of putting it: if I can take my perspective and also simultaneously hold the most contrasting worldview, and be able to see both of those at the same time, that can be really helpful. And that's a good heuristic too, to constantly be looking for the opposing side of the argument. And the other thing that you mentioned at the beginning, which was... Simplification. Oh, reductionism, reductionism, that's a good one. And I think we're actually going to revisit this later as well, but reductionism can lead us to oversimplification, which then leaves out the nuanced details of specific subjects. Yeah. Okay, let's get into the trajectory. So you were born in Manhattan in New York City. You stayed there until you were 18 and went to Wash U, and then you transferred to Stanford. And you did philosophy there. I want to get into your earlier years. You got interested in entrepreneurship through gaming. Teach us about building the online gaming community. Yeah. So when I was 14, I set up an online game server with some friends, and it was basically like we were playing Minecraft and doing whatever else we did at 14. We just told some friends, they told some friends, and almost entirely via word of mouth, we grew to around 5,000 people. And that was incredible. There are literally people who met, got married, and had kids after meeting on our game server. Whoa. Yeah, yeah. By the way, whenever people ask for your fun fact, my favorite fun fact is that I've indirectly led to people having a child. Yeah.
They met on our server. And being able to have that kind of impact on people's lives was incredible, and being able to just scale something based on word of mouth was incredible too. And I mean, we never set out to create a giant server. We just set out to create a fun place with our friends. And that kind of made me realize, oh, you can do something you enjoy. You can make something that impacts people. And hopefully, though that one didn't make money, you can also make enough money off it to sustain yourself or to make it profitable. So that just showed me, whoa, a bunch of friends just working together can scale something to 5,000 people and lead to someone meeting and having a kid. That's incredible, and that got me hooked on technology and entrepreneurship in general. Yeah, you can have a purpose that you find really meaningful, and then be able to provide people with value, and also be able to sustain yourself over time. And then what were some of the things that happened to you along the way, up until doing library and AI, helping people find expertise faster with AI and NLP? Yeah, so back in high school, I founded the online community, and then there was a group at the MIT Media Lab that was building a decentralized mesh network for local communities. And they were like, hey, we see you have experience with building these communities. And I also knew how to code Java, and they used Java as their programming language. I had coded some Android apps, and they used Android. So I had gotten in touch with them, and I was able to do research at the Media Lab when I was, I think, 16 at the time, which was an absolutely incredible experience that shook up my worldview. Because what we were working on was this technology with pretty incredible potential. The idea was that you could have phones communicate directly with each other, so you didn't need any cell service at all to be able to communicate. That works in stadiums.
That works in disaster areas. That works in all sorts of ways. But the privacy trade-offs, for example, are really complex in those technologies. There are all these questions: how do you verify identities? Do you want strongly verified identities? Then people can't speak anonymously. Do you want to allow people to speak anonymously, knowing that that can come with some difficulties in building a good community? And that got me really interested in these privacy technologies in the first place. Blockchains are one example of them; mesh networking is another. I think there's a lot of potential for these technologies that can do both a ton of good and a ton of harm depending on their use. And that's why I think it's really satisfying to approach them from a philosophical perspective and work on them. And so that's where you're heading into product management and digital authentication, and also doing the Kleinerberg and Product Fellowship. Yes, that's correct. I like how you taught me that privacy enables the evolution of ideas in society. It helps society evolve. I think your example of the LGBTQ community being able to gather in small private settings and grow is really important. Whereas that kind of Orwellian oversight on people congregating could prevent society from evolving? Yeah, I think fundamentally, anything about privacy is a trade-off between privacy and security. And I don't think there are easy answers. Like, WhatsApp can be used by the LGBT community. It can also be used by terrorists. And those are really, really, really hard questions of where you strike the balance. Fundamentally, I don't know where the balance should lie. But I do believe that privacy helps people gather and shift norms slowly.
The progress we've seen in civil rights, a lot of that comes from the ability to hold these views and attitudes in private and slowly share them with trusted close friends until they become public ideas that are accepted. But the security trade-off is always really, really tough. Different countries take different approaches. Different people take different approaches. I don't think there are right answers. But I think that there is definitely important value in both. Yes. Yes. Yeah, this is a super nuanced view on even something as simple as a messenger app, like WhatsApp, which can be used to congregate people to evolve societies or be used to cause harm to others. Like mesh networking: because it never touches a central server, there's no central logging. But you give people communications that can work anywhere. That's a fantastic, fantastic capability, to be able to literally communicate with your neighbors without needing to pay for an internet connection or have an internet connection available. But it also offers mostly untraceable communication. And is that something you want to enable? The mesh networking community is a good example of that, because logging is really important for solving crimes. Do you want to forgo that capability? Do you want to find a balance? Do you want to offer privacy-preserving central nodes that people can opt in or opt out of? Ultimately, transparency in these trade-offs is what really matters. Ooh, that's a good one. The transparency definitely matters a lot when you're born into a black box and you don't necessarily know exactly what things are governing the world that you engage with. Versus if it's transparent, at least then you can engage with it, maybe augment it, maybe try to gather in private communities to build things that can obsolete it as well. Yeah, yeah, yeah. Okay, let's do... Okay, this is good. So we're on a philosophy edge.
Is this cool? You started studying Latin when you were 11. And it's not only essential, like history, to learn, but it's also beautiful on the poetic side of things. Now, what is the difference between moral and political philosophy? Yeah, so political philosophy is fundamentally a question of how do we organize society? What obligations do governments have to their citizens? How should people cooperate with each other? How should we approach equality and inequality? And then moral philosophy, in my mind, is how do we hold people responsible within those systems? So for example, should someone be held responsible for the actions of their family and friends? Or should we not hold people responsible in the case of accidents, let's say? Those are tricky questions. For example, if you know that someone is doing a bad thing and you don't report it, should we hold you responsible for the consequences of those actions? That also leads to a really interesting concept: moral luck. So let's say there are two people who are driving drunk. One person gets into a terrible accident that injures another person. The other person drives home just fine. Is it fair to evaluate those people based purely on the consequences? They committed the same action. They both chose to drive drunk. In one case, it led to a bad consequence that harmed others. In the other case, it didn't. Moral luck asks: should we punish the drunk driver who didn't injure someone less, just because it happened to work out okay? Should we punish the other person more for their accident? Or should there be a system that prevents the driver from driving in the first place because of their blood alcohol content? Yes, exactly. Again, privacy and surveillance questions. Do we want to prevent drunk driving altogether by giving everyone an integrated breathalyzer? Currently, the approach has been no, even though we have the technology for that. Exactly.
I think that shows that in moral philosophy, you can't just purely evaluate based on the consequences. It's tough to say that someone is less responsible because it happened to work out okay. But then if you evaluate based on the actions, that leads to its own trickiness too, because let's say someone is doing something reckless that's extremely unlikely to lead to something bad. We're in California, so let's say someone is starting a bonfire with some friends in an area that is prone to wildfires. Do you want to punish every person who has started a small cooking fire with some friends the same way as someone who sparks an entire wildfire? That's again a tough question, because it wouldn't seem appropriate to punish every person for a bonfire if they didn't spark a huge wildfire. That's where the actions could lead to tremendous consequences, but punishing everyone as if those consequences occurred seems to most people to be unjust. It's reckless for some people to start a bonfire in a high fire area, but do you want to punish them as if they've caused massive devastation? Or do they have to have a fire-making skill level at 99 before being able to, or a threshold of 20 or something? Exactly, but that leads to another interesting question: how do you measure these things? It's nice to be able to say we should set this bar for blood alcohol content, or we should set this bar for fire safety or gun safety or all these other topics, but how do you take into account the nuances of different people's approaches? Let's say someone's really good at making fires, they have tons of experience handling fire safely, but they're also goofing around with friends and not paying too much attention. That might decrease their skill level to some extent. Do we cut off the threshold at a certain point?
Do we say, no matter how skilled you are, if you're gathering with 15 friends and distracted, then you shouldn't be allowed to start a fire in these areas? One thing that philosophy definitely teaches is that where you draw the lines on definitions is really, really difficult. What is good is a question that has been asked throughout all of history and never answered. The Greek philosophers, Plato and Aristotle in particular, made a pretty good attempt at answering it, but no one has good answers, and right now we've kicked the can down the road. So utilitarianism, for example, is a branch of moral philosophy that... Let's hit the ball back to the first subject and then we'll get there. There's so much to unpack just on that. I love the way that you go through these examples. I think we're going to have to do more shows on just these examples of centrism and nuance, because I totally agree with you that when you have an example like a motor vehicle and someone gets away with driving drunk without killing someone, you called it moral luck. So the idea then is: is there a potential way, even before people enter a car, to prevent them from driving, that maybe gets them to call the Uber, or gets a friend to pick them up, or whatever, all these other scenarios to help? And then other ones, like the fire example: one person causes a wildfire, but millions of people make fires before that that go perfectly fine. So how to quantify these things, how to see them in a nuanced way, is so, so interesting. And this does also lead to my question: how does psychology influence moral philosophy? So there are kind of two branches of moral philosophy. One is that we should evaluate things purely based on the appropriate theoretical frameworks.
Utilitarianism is one; Kant has another moral framework. And they're kind of removing themselves from specific situations and just trying to reason about them abstractly. But there is also another branch, which is: let's look at the evidence that we have from psychology and use that to influence our decision making. There was a professor I had at Wash U, John Doris, who writes on this heavily, and he has fantastic, fantastic books on the subject. The idea there is to look at the best available evidence that we have and try to work from there. Some philosophers don't like that, because they think our psychology is constantly being updated. We don't have a good understanding yet. There are replication crises. So why should we use this evidence when we don't know if it's correct? In my mind, I take the other approach, which is that the best data we have available is something we should be using, and we should update as we get new information. So one example that John Doris taught me was on helping behavior: if, for example, a lawn mower is running and someone drops a stack of books, people are significantly less likely to help them, just because there's this annoying noise in the environment. That's crazy. Yeah, and if someone finds a dime in a phone booth and someone drops books, they're significantly more likely to help them. Yes, a dime in a phone booth. Interesting. These are experiments run a long time ago; a dime was worth more then than it is today. Exactly. It's crazy that such small things can affect whether we help others. Oh, the coffee one's another one. You either give them hot coffee or cold coffee, and then you ask them what their relationship is like with their significant other or their parents or something. Exactly. And then colder coffee makes people feel a little colder about their relationship. Exactly. With warm coffee, they're maybe more loving or compassionate.
Yeah, so for example, it's easy to say we should always help others, or we have an obligation to help others, but when the data shows that a lawn mower running makes us less likely to help others, are we comfortable making these pronouncements about how people should act? Which kind of makes sense, when environment so heavily influences them. Yes, which totally makes sense, because when you have two humans in a quiet setting and something occurs, your cognitive resources are really only being allocated to your presence and someone else's presence, and helping them in that scenario. Versus when there are so many other things happening in the background, you're making sure that nothing is about to harm you, or something's going to fall on you, or whatever. There are so many other things: something could break over here, it's just loud. A loud environment with lots of different things going on, where you also have to keep things in your peripheries and whatnot. So that's a good one. And I just want to add to what you're saying: so much depends on aspects of our psychology and our biology, like a tumor pressing on the amygdala causing someone to go and potentially kill someone, things like this. There are so many aspects and biases that we have to be aware of in the processes of philosophy. Yeah, I think this again gets back to the central point of nuance, which is that trying to predict how we act or how others act is actually extremely, extremely difficult, and our understanding of the factors influencing people is currently extremely rough. Incentives, your gut microbiome, there are so many things flying around. And we understand almost none of it. So I think using the best assumptions we have right now, the best data available, and working from there is what we should do.
I don't think we should ignore this important background, because again, consider the view that someone should always act in a certain way. Kant is famous for saying that you should never, ever lie. And the example is always: let's say you hold nuclear launch codes, and someone's threatening you, saying, do you know the launch codes? If so, you need to give them to me. Kant would say you cannot lie and say, I don't know the nuclear launch codes. Yeah, stuff like that. Holding to rules like that in counterexamples that are super extreme misses the nuance. Exactly, so I think that's why the psychological element is really helpful, because it helps us think through some of this additional nuance. And maybe additional understanding of neuroscience will change our assumptions later on. That's perfectly possible. I think we can give another example that's super relatable to people, which is that most people have parents that are not necessarily as progressively evolved as their children are in some respects. So when their parents ask them a question about one of their beliefs or whatnot, they can rather just sidestep it. Maybe you're lying, maybe you're sidestepping, but the point is that you're saving time by not having to go into a subject that you've gone into maybe a dozen times before with them, that you know you're not going to be able to make any progressive ground on. You'd rather just go back to reading, or working on what you care about, and hanging out with your friends that care about that same thing. So I think there's tons of nuance around that point.
Yeah, like saying something that's technically correct, or lying by omission. It's easy for Kant to say to never lie, but if you don't actively lie and just choose not to disclose something, is that also lying? You could do a lot of philosophical work to try to figure out if that is, or you could hold a more nuanced perspective on how we view truth and lies. Yeah, well, again, we have to explore these ideas and examples more; this has so much to do with philosophy, with our psychology and our biology, as well as nuanced centrism. Let's do the best ways to organize society. Yes. Okay, so this is the political philosophy side of things: what are the differences between utilitarianism, Rawlsian liberalism, and libertarianism? Yes, so I'll start with utilitarianism first, because that was kind of the first theory that came about. Utilitarianism basically says you should try to maximize net good. Jeremy Bentham was one of the original thinkers on this; Peter Singer is a modern thinker on utilitarianism. It's basically that you should try to maximize happiness for everyone, so you shouldn't prioritize, say... let's say there are three people drowning, and you can save two people who are strangers, or you can save one person who is your mother or father.
Utilitarianism would say you should save the two people who are strangers instead of the one person who is your parent. And utilitarians then go into all these different thought experiments, like, for example, if there's a doctor who is a serial killer but also is on the way to curing cancer, are they saving more lives with their cure for cancer than they're killing as a serial killer? Should you let them walk free? So utilitarianism leads to very tough moral trade-offs; that's one issue with it. Yeah, another one is, would you save five people's lives, versus save the life of someone that's trying to do something like cure cancer? Yeah, stuff like that: how can you value someone's impact on the trajectory of the world and providing utility to the world, versus five people that maybe are not providing that? And there's a great book called Strangers Drowning that addresses these questions directly. That's the name of the book. Yes, yeah, it's a very good book on these questions, and very, very approachable for people who don't have a philosophy background. Cool, cool. But yeah, so that's utilitarianism, and then there's the whole question of how we define net good. Should you save a doctor, for example, over someone else? Should you save children over elderly people because the children will have longer lives? So utilitarianism leads to a lot of these tough questions, and a lot of people don't have answers. So Rawlsian liberalism kind of came about as an answer to this. Rawls basically says that there are a few obligations to society, and I'll call out the most important ones. One is that you should have equality of opportunity. Basically, you should have the ability to not be kept out of things based on your gender, your race, your birth. Religion, socioeconomic status.
Exactly, exactly, these things that you can't control and that also don't affect your ability to do a job. You should have a society where everyone's able to access important roles in society. Maximal degrees of economic freedom. Yes, and related to that, he also has the idea that inequality is justified when it maximizes the position of the people who are the least well off. So for example, if there's a society where one person has a hundred dollars and everyone else has two dollars, Rawls would say that that inequality is OK as long as it's the best possible outcome. Let's say, like you said, that person shouldn't have a hundred dollars, they should have fifty dollars, but that led to everyone else having one dollar instead of two dollars. Rawls would say it's better for everyone else to have two dollars, and for the more significant inequality to occur, if it maximizes the position of the people who are the least well off. Those are the most relevant points of Rawls. Humans flourish when the people that are least well off have the most degrees of economic freedom. And a closely related idea that he has, and this is kind of an idea throughout liberal political thought in general, is the freedom to pursue one's own projects. Yes. So people have their own conceptions of what is good; they should be able to pursue what that is. If they want to host a podcast interviewing thought leaders, that is a fantastic thing that they should have the freedom to pursue. If they want to become philosophers, that's something they should have the freedom to pursue. If they even want to count the blades of grass in a lawn, and they think that's a good life, they should have the freedom to pursue that.
So it essentially allows people to form the projects that they think are best for the world. How do you economically sustain yourself on counting the blades of grass in a lawn? Yeah. Yeah. We'll probably get to universal basic income later. And then there's libertarianism, and libertarianism says the obligations of a government are not things like the freedom to pursue one's own projects and equality of opportunity. Robert Nozick's libertarianism is that government should basically purely protect life and property, and that it should be a minimal state beyond that. And there's this interesting question. Rawls thinks that you should interfere to create an equal playing field. So someone should not be allowed to discriminate in hiring, because those jobs should be open to everyone. Nozick thinks that interference is not OK, because you're restricting the freedom of the person who is doing the hiring. So they have different approaches to who should have which freedoms, which is a really, really complicated question. If a person that owns all of the property being protected by the government is also doing things like rent-seeking on the people that are least well off, then there needs to be an intervention to help people have better degrees of economic freedom. Exactly. So whose freedom you are preserving is a really important question between those two theories. Oh, one fun little-known fact about Nozick. So there's a lot of talk in the media about reparations for African Americans who are affected by slavery. Nozick actually says that society must come from a just starting point for this libertarian society to be just. You must have basically free and voluntary transfers of property from the starting point until now. Slavery was not a free and voluntary transfer at all. It was treating people as property, which is incredibly unjust.
And the interesting thing is that Nozick actually alludes to this. He doesn't say explicitly that reparations should be pursued, but he does say that rectifying these injustices that have occurred is important in order to have this just libertarian society. So when people talk about libertarianism in America, it's not Nozick's libertarianism, because we haven't set this equal starting point. Exactly. Which is really interesting, and I think that also gets lost in a lot of discussion. I mean, there are other libertarians who believe that the starting point we have right now is a just one, but specifically, Nozick's libertarianism would not support libertarianism today. We would need to level the playing field before we can pursue it, which I think makes it a lot more palatable when compared to Rawlsian liberalism. Because if we said right now that it's okay to, for example, discriminate based on socioeconomic status, that would be incredibly unjust, which is what's happening. Yeah, which is incredibly unjust, because people have been held back from accumulating wealth by the actions of the government. And private enterprise, and just greedy corruptness in general: rent-seeking behaviors, self-dealing behaviors, all-gobbling dynamics. Yeah. The one thing that you said that I think everyone can resonate with is that society will flourish best when the people that are worst off have the most degrees of economic freedom, when they're seen as literally our fellow humans whose potential we can help maximize. To view it that way, I think, is one of the best things. Versus what it seems like at times, which is that literally moats or walls are being created that prevent people from purposefully pursuing things, by people that are, again, self-dealing and all these types of things.
I personally lean towards the Rawlsian approach, but I think that there's validity to all of these approaches, yet they all have different starting points, different trade-offs. This is definitely going to be a theme we keep returning to. Yeah. And then, we kind of started exploring some of the things that are net good or net evil. But do you think there's something else that's coming through humans that is evil, that is at play on the planet? Is there something evil in the world right now, and even potentially beyond the planet, that is coming through humans and performing evil on the planet? Interesting. I think that depends on people's own personal views of spirituality and good and evil. I personally don't know my own thinking on this, and if I don't know my own thinking, I'd lean towards the evidence. But I think that the problem of evil in the world is a very, very tough problem to understand. And it's interesting: if you think about it, a society where everyone cooperated perfectly, where everyone's interests were maximized, say a utilitarian society where everyone lived by utilitarianism, would be a society without evil, depending on how you define it, without people harming others, let's say, if we wanted to define evil that way. But at the same time, we have a world where there is clear unnecessary harm. There is harm from poverty, harm from oppression, harm from things along those lines. And those harms don't seem to maximize the good of the world. Harm through poverty and significant inequality does not seem to maximize the good of the world.
That's where the Rawlsian approach is good: we should be maximizing the position of the people who are the least well off, because that's probably a better world than a world where people are in more extreme poverty, let's say. And another thing that I really like about how you take on certain subject matters is that you know where your worldview is, and if someone asks you a question that you haven't necessarily thought about a lot, you'll default to saying that you haven't thought about it enough. I think that's a very humble behavior that more and more people can embody, because our worldviews obviously don't have answers to everything. So I think that's very important. Socrates famously both acknowledged that he knew nothing and also thought that he was the wisest person for that acknowledgement. Yeah, that's a good one. So we were talking a lot about experimentation in organizing society, systems of governance across the planet. They're like little permutations that are running, and we can try to figure out what the best lines of code are, across the U.S. or China or certain African philosophies or South American philosophies or wherever around the world. And I think a really interesting way of viewing it is how we can potentially make little special economic zones with these different lines of code and see how they evolve. I think that's definitely going to be part of the future: little pockets competing in terms of where people want to live and what the regulations in those areas are. I think that's a really good way of putting it. You were actually in a class at Stanford taught by Peter Thiel and Russell Berman called Sovereignty and Globalization.
And I think this is a really interesting point that we were talking about before we started: that nationalism and globalism don't really understand each other. Yeah, so teach us about that. Yeah, so I'll start with how each side defines its own viewpoint. Nationalists view themselves as enabling experimentation among different countries in systems of governance. For example, the U.S. and China have radically different approaches to governance. We don't know which one's the better approach right now, and it's best to have a world where these countries can try to figure out what the best way to organize the world is. Nationalism sees itself as preserving state sovereignty to allow these experiments to happen. That's how you really make progress; that's how you discover what the best options are. Globalism sees significant downsides to some of that experimentation and thinks it's better to have a universal organizing principle that promotes peace, for example the way a lot of these international institutions preserve peace rather than war. And globalists see nationalism as something that could lead to societies that are not good for their citizens, and ask whether we should prioritize a system where countries can do bad things to their populations as well as good things. And I think the issue is that both sides really are kind of talking past each other. That's one takeaway I got from the class. So for example, nationalists believe in preserving state sovereignty. As long as it's not imperialist. As long as it's not imperialist. Nationalism defines itself as countries having their own experimentation, and if a country tries to impose its will on other countries, that's actually not nationalism. That's imperialism, and that would fall more on the side of globalism instead: imposing your will on other countries.
This is so interesting to think about in terms of tribes first evolving on the planet, how they have their own little pockets of nationalistic beliefs, as long as they don't go and try to conquer other tribal areas. But maybe they can also have a globalized ideology where they have free trade between each other. Exactly. I like this. And then there's the globalist approach, which says these international institutions are what preserve the peace. They might, for example, protect smaller countries that can't defend themselves. And nationalism saying that it preserves state sovereignty is incorrect, because if a larger country invades a smaller one, we should do something about it. We should set up systems to prevent this. So I think the key problem is: what happens when countries don't respect state sovereignty? Yes. Nationalism says that's an imperialist problem, that's on the globalist side. And globalism says, actually, that's a problem with nationalism, because we don't have these international institutions to protect these countries. So they're both claiming that the biggest downside is the other side's; each shifts responsibility to the other side. And I think that until these definitional questions get sorted out, we'll just have people talking past each other for a while. Yeah. That's nuance unpacked in a very understandable way, both of them. It's a question of what better preserves state sovereignty: an international institution with some rules and norms that protects small countries from being invaded, or a system where different countries can experiment, believing that these countries experimenting won't lead to invasions of other countries, because that would actually be an imperialist action. It's been a big game of Risk up until this point. Yes. Yes. Yeah.
So I think that's something that's a very significant divide in society, and it's tough to make progress in these discussions when both sides view the problems as the responsibility of the other side, not their side. But yeah, that's another area where these are very complicated tradeoffs, and the question of who or what protects the smaller states is really, really hard. Yeah. I think a potentially good example could be when you look at an indigenous tribe somewhere, and someone is trying to colonize the area where they are. How can other people from around the world intervene in a way that does not cause a massive war, but still protects the indigenous tribe? Yeah. So things like this. And I personally don't take a stance on which one is right in these questions. I just think that we need a better understanding of what these two sides are even saying before we can start to understand which way is a better organizing principle. And pressing reset on civilization, and trying to see how the game of Risk could have not evolved in an imperialistic, colonizing, pillaging-and-plundering-resources-and-murdering-people sort of way, is one of my favorite ideas for Simulation: really trying to think back on how we could have evolved in a way that was more harmonious with nature, with each other, these types of things. Yeah, and harmony is actually the perfect example, because each side thinks its approach is the more harmonious one. Nationalism sees all these independent sovereign states that can choose to collaborate how they wish and choose to operate systems of governance as they wish. And the globalist standpoint says, actually, what's more harmonious is uniting these states together under a system of common rules.
Preserving the peace under either system is a really, really difficult question. But they both are trying to seek ways for different people to live in harmony; they just have different approaches to how to do it. Yeah, yeah. Definitely one of my favorite parts about our conversation together is just how much you are literally the embodiment of centrism and nuance. I love that, because I've changed my middle name across the platforms to Nuance purposely, to try to get more people to think about things in this way. Let's talk about how cryptocurrency changes governments and the best funding mechanisms for UBI. Yes, so how does cryptocurrency change governments. There's a great book on this called The Sovereign Individual. It was written before 2000 and actually predicted all of the cryptocurrency revolution that we see today. The interesting thing about The Sovereign Individual that I think people undervalue with the whole cryptocurrency revolution is that it views cryptocurrency as a threat, because governments cannot tax it. It's a threat to the current system of governance because of that. Because it's money that can be moved easily, and in some cases moved untraceably, it acts as a direct threat to the current systems that we have. The book believes that this will lead to governments essentially competing to win over citizens. And this kind of ties into our nationalism and globalism discussion from earlier. The authors believe that governments will put forward different incentives to start different industries or to get people to move there, and that governments will serve their citizens best if they have this competition for citizenry between each other. But on the other hand, if governments lose the power to tax, that could also lead to more oppression, or to the banning of these technologies in an attempt to preserve the status quo.
And I think this also works really interestingly in terms of special economic zones, which I am fascinated by but have not read deeply on. This is just my off-the-cuff thinking. Special economic zones are essentially a way to say, hey, we think that this might be an interesting system to try. We're going to put it out there, and people can choose to participate in it or not. And it's interesting how the US does not really pursue this. The federal system of allowing states to put forward different regulations and policies is one way to do it. China has the approach of special economic zones for specific cities. There's the question of how much leeway you give at the local level and how much you preserve at the federal level, and different countries have different approaches. But I think this experimentation is important, and this experimentation of testing different methods of how to live is not done enough, especially in the United States. It seems like cryptocurrency and decentralization technologies are coming in as the exact contrast, or antithesis, to what happened with setting up the Federal Reserve and having fiat currencies circulate around the world, these types of things. You're right, though, on taxation: how does a country figure out how to do that and support its construction of roads, police stations, fire stations, hospitals, schools? It's actually difficult to figure out, but if there could be a way to do it even with the addition of cryptocurrencies, I think that would be fantastic. Let's go to UBI: best funding mechanisms for UBI. Which ones are even politically feasible? Yes. So there's a bunch of different ideas for funding UBI, and they all have their trade-offs. There's one I personally am in favor of, but there are some catches to it that I'll get to.
So the first approach is: let's tax labor. Let's tax people's work and use that directly to fund a UBI. In the case of massive job loss, that leads to reciprocity concerns. The concern is, will the people who are working be opposed to a UBI because they feel like they're working and other people aren't, and their money is being taken from them and given to others? There are concerns around taxing labor in that way because of questions people have about fairness. And then of course there's the flip side of fairness: if there's so much job loss that some people are simply unable to find jobs at all, is it fair for the people who are still working to think they are entitled to more? Some people, just based on the circumstances they happened to be born into, or because the skills they happened to learn became irrelevant, end up worse off; should they be punished for that, when in another scenario they could easily have ended up perfectly okay? Then there's the second approach: do we tax capital? Do we tax companies directly? That has similar trade-offs to taxing labor. The third approach, which is underdiscussed in the UBI literature but is really, really interesting to me, is instead creating a citizens' dividend or social dividend. This is done, for example, in Alaska: they invested the revenues they received from their oil resources, and they give every person, I think, around $2,000 a year. The Eastern Band of Cherokee has a casino that generates enough revenue to give every member of the tribe, I think, $9,000 per year, which gets pretty close to Andrew Yang's proposal of $12,000 per year.
I think another approach we could take to funding a universal basic income is to start investing this money now, say by investing in companies in the technology industry, or in industries that we think might grow if automation occurs, and then using the profits from those investments to distribute back to the citizens. So the idea is: let's tax everyone now, while many people have jobs, put that money away for a while, and then, when there's massive job loss, pay it out based on the investments that the government has made. There are ways to do this that preserve free markets. Interestingly enough, this could be taken as both a libertarian idea and a socialist idea. The socialists say you're buying stakes in companies on behalf of the government to give citizens shares in them. The libertarians say it's the best way to avoid taxation, to preserve people's choice, and to preserve the free market. The implementation matters a lot in which view you end up taking. But this is my own personal take on the best way to fund UBI. It's not as discussed in the literature as the other options, but of course the catch is that it raises a lot less money than the other options. So that's the real trade-off. It sounds like this thing where everyone wins: let's tax everyone now, let's put the money away for a while, let's distribute it when people are worse off. But you might not have that much money to distribute, and you might need to lean on other funding sources as well. People need meaningful endeavors every single day when they wake up, too. That's a massive part of this. And that's another debate when it comes to automation: some people think that automation plus universal basic income will free people up to pursue this Rawlsian conception of their own version of the good life. You can wake up every day and make art or volunteer or do whatever else you want.
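To make the fund idea concrete, here is a back-of-the-envelope sketch. The only figure taken from the conversation is Andrew Yang's $12,000 per year; the withdrawal rate, return rate, contribution level, and time horizon are made-up assumptions for illustration, not policy estimates.

```python
# Back-of-the-envelope sketch of a citizens-dividend fund.
# Only the $12,000/year dividend comes from the discussion;
# every rate and contribution below is an illustrative assumption.

def fund_needed_per_person(annual_dividend: float, withdrawal_rate: float) -> float:
    """Principal required per person to sustain a dividend indefinitely
    at a given sustainable withdrawal rate."""
    return annual_dividend / withdrawal_rate

def fund_after_years(annual_contribution: float, annual_return: float, years: int) -> float:
    """Value per person of a fund built from equal yearly contributions,
    each compounding at a fixed annual return."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + annual_return)
    return total

# Sustaining $12,000/year at an assumed 4% withdrawal rate
# requires $300,000 of principal per person.
print(fund_needed_per_person(12_000, 0.04))  # 300000.0

# Contributing an assumed $5,000 per person per year at an assumed
# 5% return for 30 years builds roughly $349,000 per person.
print(round(fund_after_years(5_000, 0.05, 30)))
```

The point of the sketch is the trade-off Will names: the dividend only becomes sustainable after decades of contributions compound, which is why this approach raises less money up front than direct taxation.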
And then other people say the work, the paid work, is what provides dignity and meaning, and when you lose that, you get the classic examples: what if people use it to buy drugs, or what if people sit on the couch all day and play video games? Studies show that that tends not to happen. But there's this question of whether job loss enables freedom or harms people's freedom and dignity. And you can see that directly in the current proposals some in Congress are talking about, and these are all just discussions right now, nothing close to being law. One approach is: let's give people jobs in, say, infrastructure, similar to what we did with the New Deal; let's focus on offering work if people want it. And the other approach is: why force people into work when they could be doing other things, things that are difficult to value? If someone's taking care of their children or taking care of their parents, we can't easily put a dollar amount on that, but we can say it's a valuable thing to do. So why don't we help people make decisions based on what they personally value, which is hopefully more meaningful than the idea of paid work? It's a complicated future: figuring out how to handle distribution and wealth inequality, figuring out what cryptocurrency has to do with this, figuring out how automation is going to play into this picture. Cryptocurrency is an interesting one. I never made this connection before, but suppose you believe that cryptocurrency will harm the government's ability to tax, and you also believe that funding UBI requires some substantial increase in taxation. Those two things conflict. We're talking about two unlikely scenarios, a future where cryptocurrency is dominant and harms taxation and also where there's massive job loss from automation, but it's not impossible.
And as far as I know, there's not any literature on what happens then. No one's really thought about that. But if anyone watching is interested in how we fund UBI when we can't tax people and there's also massive job loss, that's a scenario that could play out. Yeah, there are tendencies to maybe want to help people increase their degrees of freedom, and all different types of potential solutions. How about: are we in a simulation? Yes, are we in a simulation? That yes was yes to: that's a great question. I personally don't know what the likelihood is of whether we're in a simulation or not, but I do think that there are really interesting questions related to this. So I agree with Bostrom that three scenarios are possible. One is that society will never advance to the point where a simulation of a universe is possible. Another is that it will advance to that point and we're the first society to do so. And another is that some society has already advanced to that point and we are in a simulation, because they were first and we weren't. I don't know which one's most likely, but I think there's a really interesting underexplored implication of the third option, that another society got there first and we're in the simulation. So Bostrom presents a few reasons why a society might bother to simulate us; it seems like a rather difficult project to undertake. One potential reason he gives is that the society values human life, and values the idea that more human life existing is better. But I think that's a tough one, because then why is there death, and why is there evil, and why is there suffering? If they value human life, why aren't we all living in the Garden of Eden, perfectly blissful, perfectly happy, perfectly ignorant? Why do we have the somewhat broken world that we have right now? So if the simulation exists because they value human life, I think there are unanswered questions about the problem of evil.
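The three scenarios Bostrom lays out come from his 2003 paper "Are You Living in a Computer Simulation?", where they fall out of a formula for the fraction of human-type observers who are simulated. Reproducing the core expression from memory (so treat the exact notation as approximate):

```latex
f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\,\bar{H}}{f_P \,\bar{N}\,\bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

Here $f_P$ is the fraction of human-level civilizations that survive to a posthuman stage, $\bar{N}$ is the average number of ancestor-simulations such a civilization runs, and $\bar{H}$ is the average number of individuals who lived before a civilization reached that stage. The trilemma is then just the observation that $f_{\mathrm{sim}}$ is only small if $f_P \approx 0$ (civilizations never get there) or $\bar{N} \approx 0$ (those that do don't bother simulating); otherwise $f_{\mathrm{sim}} \approx 1$ and simulated observers vastly outnumber unsimulated ones.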
Bostrom addresses these, but I don't think the answer is totally satisfying. He says, well, maybe the idea of pain itself is simulated, but he has this really amusing aside where he says that this is not much consolation to the people who are currently experiencing pain. It's easy to say that maybe there's not actually suffering because the pain itself isn't real, but if you have the conscious experience of pain, as everyone does, then it's tough to take that stance. And Bostrom kind of leaves that unanswered. The second approach is that maybe they want to learn something from us being in the simulation. Maybe they're experimenting with systems of governance, ways to organize society: what happens if you have these ideas? And in my mind, there are two scenarios there. One is that it depends on us not knowing we're in a simulation, which could explain why it would be impossible to discover this fact. Bostrom talks about how the system can create the appearance of consistency. For example, when you look under a microscope, it bothers to simulate all of the organisms inside, but when you just stare at a glass of tap water, it doesn't bother to simulate them, to reduce the processing power it requires. But if we're in a simulation that depends on us not knowing we're in a simulation, why would we be allowed to think of the idea of a simulation in the first place? If we're supposed to be ignorant of this fact, why can we think of it at all? That's what makes the game fun. Exactly. And then, if it's neutral to us realizing we're in a simulation, why haven't we been able to figure it out yet? It's hard to probe at the simulation. So either it's trying to deceive us, in which case it seems easier to just never allow people to think of this idea in the first place, or it's not trying to deceive us, in which case it's unclear why we don't have answers yet, unless the answer is that we need our research methodologies to develop more.
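The "simulate detail only when someone looks" idea Will describes has a direct analogue in programming: lazy evaluation, where an expensive result is computed on first access and cached. The class and names below are purely illustrative, a toy analogy rather than anything from Bostrom:

```python
# Toy analogy for "render detail only when observed":
# the expensive detail is computed lazily, on first observation,
# and cached so the cost is paid at most once.

class LazyDetail:
    def __init__(self, compute):
        self._compute = compute   # deferred computation
        self._value = None
        self.rendered = False     # has this detail been "simulated" yet?

    def observe(self):
        if not self.rendered:
            self._value = self._compute()  # pay the cost only when looked at
            self.rendered = True
        return self._value

# The microbes in a glass of tap water are never computed...
water = LazyDetail(lambda: ["microbe"] * 1_000_000)
print(water.rendered)   # False: nothing simulated yet

# ...until someone puts the sample under the microscope.
sample = water.observe()
print(water.rendered)   # True
print(len(sample))      # 1000000
```

This is the same trick behind generators and memoization: an observer can't distinguish a lazily rendered world from an eagerly rendered one, which is exactly the "appearance of consistency" being described.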
We ourselves are an experiment, and we ourselves will be creating experiments, and then it will potentially be easier for us to realize how we're already in one as we make our own. Oh, well, why wouldn't you just make it the Garden of Eden? Well, that's not fun. We need to add the good and the evil to it, and that's what gives whoever is running the simulation more to learn from. Right. And that's what we're in. Maybe the answer is that the simulation is trying to learn something from us, if we are in one, in which case the question is: what would they be trying to learn? They would be, in many ways, trying to learn, I think, how civilizations evolve. This universe design is a very fun one, these little planets around stars, and there are other ideas of universe design where different creatures could evolve in different ways. And then, last question: what is the most beautiful thing in the world? I think it's still staggering to me how much of the world around us is actually imagined, based on these collective convictions. All of philosophy is fundamentally imagined. It's imagined based on certain starting points and certain axioms, and beyond that, you're talking about entire ways to organize how people live based on these abstract ideas. There's no physical instantiation of utilitarianism; it's just an idea that exists. Rawls thinking up the ideas that he did has a dramatic influence on whether people are actually happy or suffering. The ability to come up with these concepts that don't exist at all in any physical form, yet still influence the physical world, these imagined realities, is still kind of staggering to me. Someone thought up the idea of the nation, so now we have nations. Borders, for example, or races, things like that: they exist because of the categories along those lines that we've defined.
You could define different categories; you could define no categories at all. I think people on different sides of the political spectrum take very different approaches to whether these categories exist and how they should be defined. But what's most beautiful to me is that we've constructed a world well beyond what just physically exists around us, and I don't think we acknowledge that enough. That's why, if I wanted a call to action for why to study philosophy, the reason is that a lot of these questions, how we should live our lives, how society should be structured, what is good and evil, are things we will never be able to find in the physical world. Some people have the idea that maybe with a perfect understanding of the human brain we will have a scientific account of happiness, things like that. But I think these questions really come down to how we imagine the world to be, the principles we take on, and how we should move forward based on that. And philosophy is one of the only tools we have to explore that. Because so much of our world is socially constructed, there are limits to what science can tell us, and philosophy is helpful for probing those limits and understanding what we take for granted and what we don't. And that's what you think is the most beautiful: the fact that we have imagined these structures that allow us to cooperate, but also that we have tools in place to shape and probe these systems. For example, utilitarianism can change the world purely based on an idea of how we should live. Whether or not maximizing the good is the best approach to life is unclear, and also a role for philosophy to answer, but if people adopt this approach of maximizing the good and take it on, you've changed people's lives dramatically, based purely on an idea that Jeremy Bentham thought up a few hundred years ago.
That's staggering. Is philosophy the path to the most influential career? I think there's a small number of philosophers who are extremely influential: Peter Singer, Jeremy Bentham, Kant, the Greek philosophers, Rawls. Then there are a lot of philosophers who are not influential at all. But I think if you want a low-probability but extremely high-impact potential path in life, thinking about these ideas and studying them, either formally or informally, is a very good way to do it. Well, thank you so much for coming on the show. It's been fantastic. Absolutely. We've had a great time as well. It's been super fun unpacking all these subjects with you. Huge shout out to everyone for tuning in. We greatly appreciate it. We would love to hear your thoughts in the comments below on the episode. Let us know what you think about everything we talked about: privacy, centrism and nuance, organizing society, cryptocurrencies, UBI, the simulation hypothesis, so much of the good stuff. Also, talk to more people, your friends, families, co-workers, people online on social media, about these subjects. Get talking about them more. Have that nuance-centered discourse about it, not polarized discourse. Shout out to Ron Vagus for producing and directing the show. We greatly appreciate it, Ron. And also, support the artists, the entrepreneurs, the organizations, the scientists around the world that you believe in. Support them. Help them grow. Check out Will's links below. Also, check out Simulation. Support us: Patreon, PayPal, cryptocurrency, all our links are below. Design cool merch and get paid for it; that link is below as well. And go and build the future, everyone. Manifest your dreams into the world. We love you very much. Thank you for tuning in. We will see you soon. Peace.