All right, we're going to go ahead and start our panel discussion, where you can ask questions of our keynotes, and our keynotes can answer, interact, and engage with one another. And I'm going to be here to make sure that everything goes according to plan. Okay. Yeah, exactly, exactly. So anybody who wants to come up and ask a question, go ahead, and make sure we talk into the mic for our audio back there. Sam, all right. So it seems like you guys are on opposite ends of the spectrum in terms of innovation: emergent behavior on one side, and on the other, it's just a virtual machine copying everything that it does. How do you resolve that? Because it seems like the biological forces of nature that are embodied in humanity are going to find expression even if we do emulations that might change that. Either of you, your thoughts on that? The word biological carries some baggage here. If we think in terms of entities that reproduce and have differential variation and selection, that continues on in the scenario I'm talking about. It has a different hardware embodiment, but a lot of the insights of biology would continue to be relevant in a world like that, including emergent levels and things like that. That's what I was thinking when you were talking, actually, because what I do in my modeling is a lot of agent-based modeling. And presumably these em emulations are embodied in code: there's the structural code provided by the brain scan, and then there's the software that looks at that and does the work, makes it behave like a brain, does the process part of it. And it's interesting to me that this would be a scenario where you could really set evolution to work, to the point where they might evolve into something really wild and different, these fast ones.
You put a bunch of physics brains to work on, say, solving some particular problem, and they might say, oh, this is very inefficient, we can tell the factories to do this, and suddenly this thing evolves into something very different from how it started out. If you have mechanisms for variation and selection — these are really good ems, they're solving physics problems a little bit better, so we'll use those to create the next generation, we'll modify them — now, I don't think they're separate. I mean, I think the basic dynamics of novelty would occur in em-world, maybe to the point where they'd evolve into some new thing. I actually have a science fiction story, published in Nature Futures, about AIs that physicists start using to do astrophysics. All of a sudden they're asking for more data. Their unit of conversation is the scientific paper, and they begin to write scientific papers on astrophysics problems, and pretty soon the physicists can't understand the language the AIs have evolved, and the files get bigger, and they have no idea what these agents are doing. They can't read the papers; they've gone beyond them. And I could see that happening. I mean, this actually makes some sense to me; this would be a way of simulating intelligence that makes sense. Biology is one of those squares that wasn't filled in on my graph. Yeah, I actually noticed that. So I've used a lot of economics on it, but I would love for a good biologist to try to apply biological concepts to the scenario and work on it. I'm sure they would come up with insights that I would want to know. Yeah, I think the potential is there to turn this into an evolutionarily expanding system. I mean, why not? Along those lines: when I hear that there are likely to be 500 types of ems, give or take. In the first two years. In the first two years, because who knows what's coming after that.
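The loop described here — keep the emulations that solve the problem best, then copy and modify them to form the next generation — is essentially a standard evolutionary algorithm. A minimal sketch, with the understanding that the fitness function, mutation scale, and parameter-vector encoding below are made-up placeholders, not anything from the discussion:

```python
import random

random.seed(0)

def evolve(population, fitness, generations=50, keep=0.2, mutation=0.1):
    """Generic variation-and-selection loop: score candidates, keep the
    best fraction, refill the population with mutated copies of survivors."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:max(1, int(keep * len(population)))]
        population = [
            [g + random.gauss(0, mutation) for g in random.choice(survivors)]
            for _ in range(len(population))
        ]
    return max(population, key=fitness)

# Toy stand-in for "solving physics problems a little bit better":
# candidates are parameter vectors, and closer to `target` is fitter.
target = [1.0, -2.0, 0.5]
def fitness(candidate):
    return -sum((g - t) ** 2 for g, t in zip(candidate, target))

population = [[random.uniform(-5, 5) for _ in target] for _ in range(100)]
best = evolve(population, fitness)
```

After a few dozen generations the surviving candidates cluster near the target, which is the "really good ems get used to create the next generation" dynamic in miniature.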
I sense some tension with what I have been taught regarding the value of biodiversity. So I'm interested in both of you commenting on that tension that I sense. Expand a little bit — do you mean diversity on Earth, or diversity within the ems? Within the em environment — though I don't think there's such a thing as a perfectly closed system, so the latter becomes relevant too. But I'd be happy hearing your perspective on the em environment itself, and then beyond that, how they would interact. If you ask in biology what's the optimal variety within a species — species don't actually embody that much variety. They carry variety in their DNA as a resource they can pull on, but the actual variety in the population distribution is relatively small. You might say that an ecology is better when it has a lot of variety, and I will defer to you about what the evidence actually is on that. But that's not something anyone local is in control of. You could say the same thing about our industrial world today: we have a few dozen suppliers for each industry, and you might think that's too low a diversity. But from each customer's point of view, they're getting what they want. So it's kind of hard — just like the species. You might think the world would be better if species had more variety in them, but the species are competing, and they choose the variety that they... Yeah, and the way the variety plays out is just in selection. If you're just slightly better than your neighbor, because of whatever variation's there, that's what gets selected, and that becomes the evolutionary basis and trajectory. With rapidly changing environments, you can select for more variety. That's one of the things that helps select for human generality, because human generality is a way to generate variety. But em-world doesn't necessarily have enormous rates of change.
It's just not a rapidly changing environment; the weather matters less for them, for example. Would external weather increase the odds of us choosing to introduce more enemies? Well, if I were setting up em-world originally, I'd be really tempted to add a source of variation, so that I could put in a selective component for those that behave most efficiently, or do their jobs better, or accomplish what they're supposed to do. And that would let this kind of evolutionary scenario flower. It's hard to know without really understanding what kind of software we'll be using, but one of the ways I've tried to capture that in my agent-based model — my agents are tsetse flies flying around tsetse-world, and they're not ems, they don't have a neurology, but they sense their world. They can see where there's prey, a zebra, and go over to it. There may be a mate there that they can mate with. And one of the things I can do is impose enough variation that I get a selective regime. My tsetse flies actually have a genetics with random mutations in it, so they can get better at using the resources of their world. They can find prey more efficiently, be more successful at finding mates or breeding, or at harboring disease or not harboring disease, all those kinds of things. So it's not quite em-world, but it has some of the same components. I'm creating a virtual world with flies that have some basic senses with which they view their world, and evolutionary algorithms are what I use to try to enhance the information I'm being given about that virtual world. So I could see that being a part of em-world that would be really interesting, actually. Just to be clear, we're in a world today where we know there are a bunch of parameters of our world that are just wrong. But nobody runs the world.
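The model described here — agents with a heritable genetics, random mutation, and selection acting through success at finding hosts and breeding — has a simple general shape. A toy sketch, assuming everything concrete (a one-dimensional world, a single "sensing range" gene, the energy and breeding rules) as hypothetical placeholders rather than details of the actual tsetse model:

```python
import random

random.seed(1)

class Fly:
    """An agent with a single heritable trait: how far it can sense a host."""
    def __init__(self, sense_range):
        self.sense_range = sense_range
        self.energy = 10

    def offspring(self):
        # Reproduction with a random mutation on the sensing trait.
        return Fly(max(1, self.sense_range + random.choice([-1, 0, 1])))

def step(flies, host_sites, world_size=100, capacity=400):
    """One tick: each fly lands somewhere; if a host is within sensing
    range it feeds; every fly pays a metabolic cost; well-fed flies breed."""
    survivors = []
    for fly in flies:
        pos = random.randrange(world_size)
        if any(abs(pos - h) <= fly.sense_range for h in host_sites):
            fly.energy += 4                      # found a host and fed
        fly.energy -= 1                          # metabolic cost per tick
        if fly.energy > 0:
            survivors.append(fly)
            if fly.energy > 15 and len(survivors) < capacity:
                fly.energy -= 8
                survivors.append(fly.offspring())
    return survivors

flies = [Fly(sense_range=2) for _ in range(200)]
hosts = [random.randrange(100) for _ in range(5)]
for _ in range(200):
    flies = step(flies, hosts)
```

Because farther-sensing flies feed more often, mutation plus differential survival tends to push the trait upward over time — the "selective regime" the speaker describes, in a few dozen lines.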
So in a future world that nobody runs, that can also continue to be true. There could be too much or too little variety; there are lots of ways the world could have parameters that are wrong, but if nobody runs the world, that's the way it happens. Just like today. Yeah. So we're in a lousy simulation here. Nobody's managing the doctor. I'm working on it, okay? This might not be the right question for this panel, given the subject matter, but I'll ask it anyhow — I'm not sure I'll have another chance to ask it. I'm wondering about the role religion plays in modifying the transhumanist approach. Are there Jewish, Islamic, Hindu, or Buddhist transhumanist associations? And if not, how might they modify this sort of consideration? That's an interesting question. I'll defer to you — but could Islam appear among the ems? If many of the ems were scanned from Muslim brains... Yeah, I'm pretty sure he didn't mean to be asking about ems. We'll answer it anyway. Yeah. I mean, actually, as the speaker who talked about religion said, most of our religions started in the Axial Age. They survived the Industrial Revolution just fine. I've got to figure they survive this transition just fine, so I've got to predict all the same religions will be the religions of the ems. So there'll be Mormon ems in there having their Mormon transhumanist meetings — yeah, I like it. Though if there are only a few hundred, it's highly unlikely any of us are among them. I think he was asking about transhumanists today, and attitudes toward transhumanism — how do our various religious backgrounds affect that? And that I would defer to people who spend a lot more time on it. Yeah, I'm new to it too. I think, as you're seeing the evolution of religion happening, you're going to see this happening across the board with a lot of religions; this isn't something that's specific to just Mormonism or just Christianity.
As technology rapidly advances, you're going to see this trend across the board in various religions. And if there's anything religions are really capable of, it's adapting to survive, because we do this as societies together. So I see that definitely happening. Quick comment to answer the question explicitly: there is a Christian Transhumanist Association that has about 250 members. Micah — where's Micah? Micah would be happy to talk with you about that. And there are smaller religious transhumanist associations that have been around; some have died, some have survived. But the two biggest right now are the Mormon Transhumanist Association and the Christian Transhumanist Association. So the question that interests me is, would we send missionaries into em-world to try to spread the influence of Mormonism among the ems? Yeah — wait, yeah, I'm trying. So you mentioned the three critical technologies: computers, the ability to scan the brain at a very small scale, and then the emulation — actually knowing how it works. Now, I'm obviously not an expert, but you hear things, and the one thing I keep hearing about on this subject is that they were trying to emulate a roundworm that has 302 neurons, and they're having a very difficult time doing it. So I guess I'd like you to speak to reasons to be optimistic about brain emulation, and why it's proceeding faster than artificial intelligence — true artificial intelligence. And maybe you could speak to reasons to be pessimistic about this sort of biological emulation. Sure. So we actually have decent emulations of the first few layers of visual and auditory input in the brain; that is, people have made prosthetics for those. So whatever those cells are doing, we've had successful emulations of them. Of course, brain cells elsewhere do other things.
But notice that in almost all organs in the body, the thing each organ does for the body is usually pretty simple compared to the complexity of the cell itself — a bone cell or a blood cell or a muscle cell. Each cell is enormously complicated, because all cells have to be complicated in order to reproduce. But the thing they're doing for the rest of the body is really pretty simple. So the key question about the brain cell is: how much is each cell doing for the computation of the brain, as opposed to all the other things cells have to do? Basically, the more complex it is, the longer it'll take. But my story doesn't depend much on when it starts, you see. If you say it takes longer, it takes another century, I go, okay, it happens a century later. But it still plays out the same way. So it's more about whether it's even possible at all. Then you have to tell me that somehow what these brain cells are doing is so complicated and so intricate that we just never figure out a way to model them. Yeah. So simulations are typically used to answer specific questions. Simulating the brain is going to be really hard, and the reason is that it's a very, very large-number system. I mean, I heard the number of connections in the brain is some really astronomical number — you think about each brain cell having 10 or 20 tendrils connecting with 10 or 20 others, and the numbers get astronomical. Ten to the fourteenth. Ten to the fourteenth — and that's a lot, that's a huge number. So not only do we have to encode that information and those connections, we have to process things that are moving across those connections, forming new connections. Brains make new connections; they do all kinds of things. So it's going to be really hard. The only way I think that could happen computationally in the near future is with quantum computing. Quantum computing adds a level where that kind of number isn't intimidating.
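The "ten to the fourteenth" figure matches the usual back-of-envelope estimate, though with commonly cited per-neuron numbers (on the order of a thousand connections each, rather than the 10 or 20 mentioned in passing). A quick sanity check on the arithmetic and what it implies for storage alone:

```python
# Back-of-envelope check on the "ten to the fourteenth" figure.
# Commonly cited rough estimates (not from the discussion itself):
neurons = 10**11              # ~100 billion neurons in a human brain
synapses_per_neuron = 10**3   # roughly a thousand connections per neuron

total_synapses = neurons * synapses_per_neuron   # 10**14

# Even one byte of state per connection is ~100 terabytes of storage,
# before simulating any dynamics or rewiring at all.
terabytes = total_synapses / 10**12
```

So merely encoding the static connection map is a ~100 TB problem; processing activity and ongoing rewiring on top of that is what makes the speaker call it an astronomical computation.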
That kind of number, in the current computational environment, even with Moore's law, is not going to be captured. It's complex just in terms of the number of connections and things going on, and managing those connections is going to take a lot of computational overhead. It's an incredibly computationally hard problem. But I don't really see it as anything but a technological problem. The one place where I might be more optimistic is the ability, by that time, to have non-destructive brain scans that would let you scan a brain and create an emulation of it. And all this is hypothetical — we don't have that technology, and we actually don't have the computational ability right now. But these are technological problems where we see progress. There's some question, but it looks like quantum computers are going to be a thing soon. These problems are being solved. And if we get that, then we have the computational ability to handle those kinds of large-number systems. There are only a few problems that quantum computers have a speedup on. There are a few particular problems, including search and factoring. Right. But most general computation actually doesn't get sped up much by quantum computing. Is that right? Is that right? Yeah. So then it's a real problem. I'm thinking about things — and those might be the big problems, though. It might be a matter of quantum-computationally parsing these large-number systems into something simulable — is that a word? Sure, we're treating it as one. Okay. Oh, thanks. Yeah. But it is a massive computational problem, and it's going to take some technology to get us through. I'm optimistic that it'll be there, that it'll happen, if not this century then, as you said — a century or two. A century or two, yeah. But I don't think there's any conceptual problem with a brain scan. And if we can do that, that's going to be amazing.
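On the "only a few problems get a speedup" point: the canonical examples are Grover's algorithm (a quadratic speedup for unstructured search) and Shor's algorithm (a superpolynomial speedup for factoring). A quadratic speedup is real but not magic, as a rough, purely illustrative query count shows:

```python
import math

# Unstructured search over N items: a classical scan needs O(N) looks,
# while Grover's algorithm needs O(sqrt(N)) oracle queries.
N = 10**14                       # e.g., one item per synaptic connection
classical_queries = N
grover_queries = math.isqrt(N)   # sqrt(10**14) = 10**7

speedup = classical_queries // grover_queries   # quadratic, not exponential
```

Ten million queries instead of a hundred trillion is a huge win for search-shaped problems, but general dynamical simulation of that many interacting connections does not reduce to unstructured search, which is why the caveat in the discussion matters.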
So I have a question for both of you. One of the critiques we get as transhumanists sometimes — at least I get this critique — is this idea that we're taking evolution into our own hands, and what makes us think we can do a better job than evolution is doing? In many ways, I've thought of technology as just the extension of evolution — evolution doing its job and doing its thing. But what would you say to that critique, to try to explain to someone how technology is just a natural part of the evolutionary process? Well, first you introduce the idea of cultural evolution. Humans introduced cultural evolution, which allowed much more rapid evolution than genetic evolution could embody, and that's been going on for several million years. So it's not a new hypothetical. And it's a decentralized process — that's not us taking control of evolution; that's just us being the embodied elements of the evolutionary process. That is, we embody culture in our heads and we share it with each other; we are evolving culturally. So this is just another continuation of that sort of cultural evolution, basically. So to you, the evolution of the ems would be just as natural as... As human cultural evolution has been for two million years. Yeah. I mean, this will sound weird, but I think everything humans do is natural in a sense. The idea that humans have escaped nature, I think, is wrong. So I think that's right: the evolutionary process we see in cultural evolution has produced iPhones and all these kinds of things, and I think that will continue. I don't think we should be above a cultural critique of what those kinds of things do to people. But I don't think it's unnatural, or playing God. Evolution does not select for making people happy. Right. Yeah, that's exactly right.
It doesn't select for making animals happy. So we should robustly expect an increase in future capacities and abilities; that doesn't necessarily mean an increase in satisfaction. No — we seem to be no happier than people in the 17th century, or the Middle Ages, or any other time. So yeah, I'm not sure. Yeah, a question for both, but it comes from something I recall Robin saying — something about how if the em, or the being in that world, isn't doing what that world wants, it's not going to survive. But what is that want? Who determines that? Talk a little bit about what determines want. What is want, where does it come from, who determines it? That's where this biological analogy is useful. Ask what most animals want — what do they do? The rabbit has to do what it takes to survive, whether it wants to or not. In some sense, evolution changes its wants to induce it to do the things that evolution needs it to do. With humans, we have many ways to change what we want. It's probably too slow to change genetically, but we have a whole lot of cultural ways to change what people want. And so there'll be cultural selection for getting people to want to do the things they need to do. So think of subsistence farming a thousand years ago. Farming isn't a natural thing for humans; humans evolved as foragers, right? So we had farmers doing all this farming a thousand years ago, and they were really right on the edge: they had to do their farming pretty efficiently or they didn't get enough food to survive. They did it for a thousand different reasons in their heads — they had all sorts of different reasons for each thing they did — but nevertheless the effect is that if they weren't acting efficiently enough in their farming behavior, or their war behavior, or their child-raising behavior, they got selected out. So evolution has just been selecting us the way it has for all animals. Yeah, I think that's right.
A lot of the things we see in humans are in response to — I was just reading about the human necessity of gossip. Group selection theory suggests that acting as a group is really hard, because cheaters always have the advantage: if you can be in the group but not pay the group's costs, you're ahead of the game. So humans have evolved strong policing mechanisms to keep the group cohesive, and they play out largely in terms of guilt and shame. I feel guilty when I'm not contributing to the group; the group will shame me if I'm not contributing. And gossip is one of the ways that group behavior is policed. We might want to live in a world without gossip, or think that would be better, but evolution needed it to form human groups in the first place. So it's really interesting... I actually make a stronger claim — not that I know you disagree with it. A lot of futurists or transhumanists think that most of our human mental capacities are inefficient and irrelevant, and that as soon as some reasonable computer competes with us, we're out of there, because we're just so stupid and inefficient. But in fact, most of these mental capacities are relatively robust capacities for dealing with complicated social worlds. And these future worlds of ems or whatever are complicated social worlds. So whatever replaces us will have to solve all of these problems too, at least nearly as well: watching for cheaters, sharing information, figuring out how to present yourself well to others — all these things we do. So love and status and envy and all these things we have — that people think of as just some random human feature that's useless and inefficient, that a computer will get rid of — these things are here for a reason, and relatively robust reasons. Are there evolutionary consequences of that among the ems?
So, for example, would that lead them to be communally different from us in some substantial ways? Yes, a bit. I discuss that in my book, actually. There are some ways in which modern social worlds look different from ancient social worlds, such that some of our behavior isn't very optimal. We are probably a little too trusting of each other in larger worlds; most people don't play enough office politics for their personal benefit. So I predict, in fact, that ems will learn to play office politics better than you guys do. So I have a question, respectively, for Steve and Robin, but I want to precede it with one for Robin, so that you guys have a little time to think about your answers. The question for you, respectively, is: what would it take for you to identify as a transhumanist? What would it take for you to identify as Mormon? The preceding question is: knowing in advance, based on your work, what an em society looks like, with subsistence wages and so forth, what would be the benefit of choosing to participate in that world? You don't die. I have a very fundamental urge to want to be part of the future — to be part of whatever happens, to influence it and to join it and to not give up on it. That's one of the exciting things about being a futurist looking ahead. You say, wow, the future can be big and more capable and have all these things, and you'd like to be part of it. And whatever downsides it has, it's our heritage. If it happens, it's our heritage; it's the thing that goes on and fills the universe with things somewhat like us. And if you want to influence that process and be part of it — well, just like if you were a subsistence farmer a thousand years ago and people told you industry was coming, and you said, industry? That looks ugly, I don't like industry — then you don't join up with industry. You're on the outside; you're just not part of the game. So what would it take for me to be a transhumanist?
I probably just need the discussions. I'm already kind of... We're working on it. Yeah. Use the commitment thing. If you find out — would you really be willing to become a transhumanist and go to work? That might be the hardest part for me: my time constraints at the university are so intense that I'd be a really... You'd see me as a cheater and you'd want to shame me. What about the idea of transhumanism? For you — is it fundamentally the idea of transhumanism? Because accepting a label is one thing; I know a lot about labels. But the concept, transhumanism? I'm sorry — I don't see a problem offhand. I mean, I don't self-identify as a transhumanist, in part because I think what's going to happen is going to happen. I'm hopeful there are good things, but I'm worried there could be things I wouldn't want to be... I mean, not that the transhumanists bring it about, but society itself. Already, I really hate people walking around with headphones out in nature. It just bugs me, it really does, and I think that's wrong — you shouldn't do that. And so I kind of think there are probably things coming that, with my old brain — my old brain that probably would never be selected to become an em, just because it's not very adaptable to those kinds of things — I may resist. My kids have got me gaming, though, for the first time this week. They introduced me to Journey, and I'm completely captivated. And I'm starting to say, yeah, this isn't that bad. This is... So... So you're going to be driving first, having an em? Yeah. Yeah. So I want to specialize, in the world, in being an analyst. That is, I want to say: I know you guys all want to talk about values and argue about morality and things like that, but you need people like me to just figure out the facts as the basis for what you're doing. And I want to specialize in that role. And in that role, I want to focus on saying things with words that have clear meanings.
And so I get shy about embracing words and statements whose words I don't really understand, because they have so many different connotations. I'd have to put a different hat on or something when I say those sorts of things, because I don't want to confuse that — because otherwise you look at it next to my analysis and say, his analysis is fuzzy; look at this thing he says, how sloppy that is. So for me, even the word transhumanism — but also the word Mormon — I go, what does that mean to people, exactly? If it were clearly just some sort of vague association with a cultural heritage or something, I'd say fine, I'm happy embracing the cultural heritage of Mormonism. They look like great people; they produce great societies; I'm happy to get along with them. I like their families, et cetera — there are lots of things I like about the culture, and it would be great if I could just embrace the culture. But if by Mormon you mean, oh, you believe these scriptures and you are devoted to obeying this church — I'm Mormon, and I don't conform to all that. Well, it's the same for transhumanism. Transhumanism also has some associations, and I go, well, some of the associations I like, and some are less clear. I'd rather just use words where I can say clearly what they mean. Not comfortable in it, you and me. Yeah, exactly. Sure. Okay, I'm going to have to cut off the questions. I'm going to have to set up an agency. I know, right? I can't. We're going to have to cut off questions for now because dinner's ready in the other room. But everyone, please give a big round of applause to our keynote speakers. Thank you. And give a round of applause to the people who organized this. This has been amazing. I feel so honored to have been invited here. This has been amazing.