Hi, thanks so much for having me. I'm Julia Galef. I am the co-host of the Rationally Speaking podcast, sponsored by the New York City Skeptics. And it's my third year at TAM. I'm really excited to be back. Thank you. My bio in the program is a little bit out of date. I moved up to Berkeley, California four months ago and I'm now the president and co-founder of a new organization called the Center for Applied Rationality, or CFAR for short. We're developing curricula, workshops and online material, teaching people not only about the science of rationality, what cognitive science knows about how humans reason and make decisions, but also training people to notice and correct for cognitive biases in their everyday life. So I'll be tabling for the rest of the weekend, starting this afternoon, please come by and chat with me. But now we're gonna be talking about a set of exciting possible future technologies, all of which are either cutting edge or speculative or quackery, depending on who you ask. And so we're gonna be discussing and debating the extent to which optimism is warranted about those technologies. And we're gonna touch on a number of things, including artificial intelligence, especially artificial general intelligence. We're gonna talk about nanotechnology, so the manipulation of matter on the atomic or molecular scale. And also about biotechnology and its applications to dramatically enhancing our cognitive abilities and possibly dramatically extending our lives. That's a lot to cover. But briefly, before we start, I just wanna say that I'm really pleased that there are panels like this one at skeptic conferences in general, for two reasons. First, because I think that the task of predicting, oh, hi, Michael, good to see you. Good to see you. 
The task of predicting the future, or just generally figuring out what to expect in terms of the development of various technologies, is a really tricky one, and it's a satisfyingly challenging question to cut our skeptical teeth on. So even though I love a discussion of astrology as much as the next girl at a skeptic conference, I also really relish the chance to tackle tricky questions without obvious answers about what kinds of evidence are relevant when trying to predict the future. Should you be listening to the expert consensus, or are the experts in that field biased? Should you be looking at past predictions and how successful they were? Should you be trying to extrapolate trends into the future, and when is it legitimate to do that? These are really tricky questions and really interesting ones. And then the second and the main reason that I'm glad that there are panels like this occurring at skeptic conferences is that I think that predicting what technologies are likely to pan out, and how significant their impact is gonna be, positive or negative, is one of the most high impact things that we can use our skeptic and critical thinking tools to do. Human capital, money, time are some of our most valuable resources. And if we don't think critically and skeptically about where we should be directing those valuable resources, we're missing out on a huge opportunity. And then the last thing that I will say before jumping into the panel is that, in the interest of full disclosure, because one of our panelists is on the board of the Singularity Institute, I should mention that the Singularity Institute has been providing seed funding to my new organization. But I will also note that that fact has not in any way prevented me from having very frequent and spirited arguments with the people at the Singularity Institute, which I'm sure any of them will attest to if you talk to them. The rest of us will be extra mean to Michael. 
Oh yes, and we, so we have two Stevens and two Michaels on our panel today. So for the sake of clarity, this is gonna be Steven and this Steve, and that's gonna be Vassar and that Michael. We got that. Okay, let's jump in. I think I'd like to start off by discussing a variety of technologies in the realm of life extension. You might have heard the expression from some of the more techno-optimists in the realm of life extension technology that death is just an engineering problem. There's no reason that we need to die when we do, or really ever, if we can just figure out why death occurs and correct for it. So I think I'll toss my first question over at Steve. Steve, you write an amazing blog and are one of the leaders in evidence-based medicine. Do you have any sense of how promising any of the various life extension technologies or approaches are at this point? My short answer would be not very, in terms of are we close to significantly extending lifespan. I do want to distinguish lifespan from life expectancy. Life expectancy is, statistically, how long you're likely to live. Lifespan is what's the ultimate limit of our lifetime. So while life expectancy has been increasing over the last couple of hundred years, lifespan is about the same since caveman times; humans are humans. So we really are talking about making fundamental differences to our biology before we can really increase lifespan. And there's a lot of speculative ways about how we might do that. My sense is that we really just don't know. We don't have enough information right now to know which one of these things is gonna pan out. About the best we've done is extreme caloric restriction, which definitely makes mice and worms live longer. But we don't really know how that applies to people. Again, the theme here is that we're horrifically complicated. We don't know how many layers of depth of information we're still missing. 
We often don't know, we know what correlates with aging, but we don't know what are markers for aging versus, quote unquote, causes of aging. And we don't know what would happen, for example, if we extend telomeres. Will that really make us live longer? Or is that just something that happens to happen as we get older? Can you explain what telomeres are? Telomeres are the caps at the ends of chromosomes, and they get progressively shorter as we get older. The cells try to rejuvenate them, but eventually they get clipped. And that could be an ultimate limitation on how long that cell could live. But at the other end, the optimistic end of the spectrum, there are creatures out there, not humans, but there are creatures that are essentially immortal. Cell lines can be immortal. There is no reason why that can't happen. So I don't see any ultimate reason why we can't get there. I just think we have no idea what it's gonna take to get us from here to there. And what about specifically the experiments that have been done on mice that have enabled mice to live a significant percentage longer than they naturally would? How much do you think that we can extrapolate from that about human prospects for life extension? Again, it's a massive unknown. We obviously use mice a lot in research. We have mouse models for any disease we can make a mouse model for. And the ability of those models to predict what happens in people is very problematic and very mixed. So you ultimately never really know how good your model is until you try the same thing in people. So we hope that we're learning things that will apply to people, but honestly, we don't know. There's some reason to think that maybe there is some application, but then other research says, well, but they're different in this way, and that may be a deal killer in terms of applicability. So it's an open question how good the model is for human longevity. 
Is anyone on the panel significantly more or less optimistic than Steve about life extension? I would say that I'm maybe more nuanced and optimistic than Steve in terms of this, because when Steve says it's an unknown whether things are going to come out of research on mice or fruit flies or nematodes or whatever they happen to be researching in a particular lab, that is true. But scientists and especially engineers have developed ways of quantifying and thinking about unknowns in a rigorous manner. Now, there are always structural uncertainties in models. We can't be perfectly precise about it. And Nassim Taleb has very famously complained that we underestimate the size and frequency of black swans, very extreme deviations from model performance. But if we know that we underestimate black swans, then when we're thinking about black swans, that's a reason to try to pay more attention to them. It kind of sounds to me like the implicit assumption behind saying that these things are simply unknown is that if things are not known, you should treat them as if they will not change over any given period. But while technological progress over the next century, say, might be highly unknown, it would seem fairly foolish to me to model the year 2100 with the same model that you would model 2020 with. Because while one can predict, through extrapolation and looking at the development pathway, roughly what'll be around in 2020, you can't predict what will be around in 2100. But you can still be pretty sure that you need some way of representing in your model the fact that, almost for sure, things are going to be a lot more different in 100 years than in eight. I think fundamental to the question is what the definition of death is. So that's not something that's actually agreed on very well either. 
So I mean, clinically, death can be stopping of the heart or stopping of breathing, but we've all heard stories about people who drown and are frozen or very chilled, and they survive extraordinary periods of time in that state and can be resuscitated. When I'm looking at the brain of an animal at a microscopic level, what seems to correlate best with irreversible death is actually when the synapses between neurons break. When you're recording from a neuron and seeing its connections with another neuron, what happens when the animal dies, when you know for sure that the animal's dead even if its heart's still beating, is that the actual spines that connect one neuron to the other disconnect. And so my view is that this is not a very good definition of what it means to be irreversibly dead. And so we need to really focus on that question as well if we're gonna understand what it means to extend life and what it is we're actually trying to accomplish, which is, I think, keeping that connectome together and information running through it; that seems to be fundamental to what life means for people. I'd like to expand briefly on what Vassar started to allude to, which is this question of, what is your sort of default assumption about what you should expect from technology if you're very uncertain about how to solve a particular problem like death? And so what Vassar was saying is that we tend to underestimate the size of technological progression. Certainly long term. Certainly long term, yes. We underestimate black swans; we underestimate the number and severity of extreme deviations from the normal distribution. Right, and so I'd like to juxtapose that claim against the claim that a number of other skeptics, including Michael Shermer, have made about future predictions tending to be wildly over-optimistic. Do Vassar or Michael, do either of you see a contradiction between those two claims? 
How would you resolve them? Well, when it comes to ending death, I'm for it. Absolutely. I think you guys are doing great, keep working on it. If you get there, let me know. The over-optimism part, I think, comes from, I guess my general sense of having studied apocalyptic prophets and so on is that the prophet always writes himself into the story: it's our generation and we're the ones that are gonna do it. And they're oh-for-a-lot so far. So yes, of course it'd be great if we can make it to 2050 and then you live forever and all that. I just don't think it's impossible. I think you guys are probably on the right track; it is an engineering problem. It's probably more like, you know, Star Trek time, 2330 or 2530 or something like that. Unfortunately, I think I'm not even on the cusp, having been born in '54. You told me I'm right on the cusp, I might make it. So I've been working out extra hard. I went on a low-calorie diet once, and it started at breakfast and it ended at dinner. No, I have done low-calorie diets. They're really hard to do. You're just in a constant state of hunger, and it's not pleasant. And so even if you could scale it out to the end of life, where you get an extra six months or a year, I'm not sure it's worth the 30, 40 years of constantly being hungry. All things in moderation is what I say on that. And also, having been a bike racer for 30 years now, I've always followed the diets and nutrition, the latest whatever on supplements and all that stuff, and it's constantly changing. And that's discouraging to me. The science should be somewhat progressive, like, we know for sure this works and so on, and it's always changing. It's always something completely different from before, which tells me they're probably not on the right track. And so you just go back to that moderation. So although I'm in favor, sort of the optimistic side of me likes Ray Kurzweil and the Singularitarians, and I think it's great. 
You guys are doing this and it's mostly private money. Anyway, Peter Diamandis is putting up X Prizes. This is fabulous. And the breakthroughs will come through. If they do, this is great. But let's be realistic. And don't forget to live now, because 2030 may not happen for us. I will toast that at the bacon party later today. Yeah, the bacon party. Yeah, there you go. Yeah, bacon and donuts. Also just very quickly to interject, because I suspect many people in the audience might not know this: the Singularity University, which is run by Ray Kurzweil, is not connected to the Singularity Institute, which Michael Vassar serves on the board of. And they're not necessarily aligned in the things that they predict about the future and technology and AI. So just to clarify, they're two distinct things. Oh, the other thing, at the Singularity Summit I spoke at last fall, Ray Kurzweil got up and gave his I-have-a-dream speech, we're this close to downloading brains and constructing a computer that's human level and so on. And then the next speaker was Christof Koch from Caltech, who's a neuroscientist, and he puts up on the screen the wiring diagram for the C. elegans nematode. It's 302 neurons. He says we know every neuron and every connection, and we still have no idea how this thing operates. And you're talking about 100 billion neurons. We're not, you know, it's like, we're a long ways from that, I think. Well, I certainly don't think that we're very close. I know people who are currently mapping out the input diagrams for the neurons in C. elegans. I'm, you know, tracking that. There's funding for different optogenetics techniques for extracting that information. I think I have a pretty clear idea of how you can extrapolate things. Assuming another 10 years of Moore's Law, another 20, another 30, another 40. Assuming similar improvements algorithmically. Vassar, can you just explain what Moore's Law is in case people don't know? 
There's a general long-term trend where computing power doubles every roughly 12 to 24 months. That's a big deal in the long term, because rapid exponential growth leads to really large long-term changes. So computers today are roughly a million times more powerful, in terms of floating point operations per second, than computers 30 years ago. That sort of extrapolation you can put back another 30 years. Putting it forward another 20 years doesn't seem that implausible, and that gives you a lot of details about what you should be able to do. A lot of our thinking comes from things like that. It's not really wild-eyed speculation. It's based on what do people have to be predicting implicitly for the economic calculations to make any sense. Assuming that things do not suddenly all freeze technologically but continue something like the last 20 years, what does that look like? And then what technological capacities become feasible at what new levels of computing power, et cetera. So if you were going to map out a nematode brain with today's computing power, how long do you need for a human brain, if the nematode is about 300 neurons versus the human brain's roughly 100 billion? Well, how valuable is a simulated human brain? Can you compress things further once you've studied it for a while? You know, there are fairly obvious questions that you can work out, and then you can try to work out where the answers probably are. But it's not as easy as just saying a nematode has 302 neurons and humans have several orders of magnitude more, so we just have to extrapolate several orders of magnitude. A nematode is visible. So in the case of transistors and Moore's Law, from the very beginning, when they invented the transistor, they knew what they were trying to do to make a circuit. 
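The doubling arithmetic behind the "roughly a million times in 30 years" figure can be sanity-checked with a quick sketch. This assumes an 18-month doubling time, the midpoint of the 12-to-24-month range quoted on the panel, not a number anyone on stage committed to:

```python
# Back-of-envelope Moore's-law extrapolation. The 18-month doubling time
# is an assumption (midpoint of the quoted "roughly 12 to 24 months").

def growth_factor(years: float, doubling_time_months: float = 18.0) -> float:
    """Multiplicative increase in computing power after `years` of steady doubling."""
    doublings = years * 12.0 / doubling_time_months
    return 2.0 ** doublings

# 30 years at an 18-month doubling time is 20 doublings:
# 2**20 = 1,048,576, i.e. the "roughly a million times" figure quoted.
print(f"{growth_factor(30):,.0f}")  # prints 1,048,576
```

The same one-liner also shows why the choice of doubling time matters so much over long horizons: at 24 months the 30-year factor is about 32,000, while at 12 months it is over a billion, which is why such extrapolations are best treated as rough order-of-magnitude guides.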
So we understood what we were trying to do 30 years ago to get where we are now, and it was a matter of the technology for putting more and more transistors onto a chip, and so there was no mystery as to what we wanted to do. In terms of discovering how the brain leads to consciousness, which you could then download, we have no idea how you can take a bunch of inanimate neurons and hook them up into a circuit that then becomes conscious. So it's a completely different problem. And even then, with nematodes we have the microscopy methods to see the entire nematode at the same time, so that we can determine those connections. But that's it; we can't just take that same technology and apply it to humans, because with microscopy we can only see neurons down to about one millimeter into the brain. So with a C. elegans that's fine, but with a human, there is basically no technology that can accomplish what needs to be done to solve even the kind of connections between neurons that we see in C. elegans. So it's a much bigger, different problem, where a lot more has to be discovered before we can move forward in that. Steven, your lab focuses on neurophysiology, so the neural correlates, like what is actually precisely happening in the brain when you focus your vision on a particular object or perceive color and so on. That seems, correct me if I'm wrong, but that seems to be an area in which we have achieved a lot of understanding of what's going on, what the brain is doing. How much do you think that we can extrapolate from that into sort of higher order processing? Are there similar algorithms that are being run at different levels, or do we just have no idea? Yes, so we know that different parts of the brain, especially the cortex of the brain, have very similar circuits. 
And so when we study a circuit in the visual cortex and we understand how the neurons connect together to process a certain type of perception, we more or less can extrapolate to how that would work in audition. It's just different types of inputs from a different sensory organ. And in fact, experiments have shown you can take one piece of cortex and swap it with another from completely different parts of the brain, and the animal can actually perceive the correct things. With a piece of auditory cortex put into the visual part of the brain, that animal can actually see with it. And so these experiments are happening, and the circuits are more or less the same. So yes, we can find out what the circuits are and extrapolate to other parts of the brain, including cognition, because cognition is processed with the same type of circuits in cortex, which looks the same everywhere. So yes, we can do that. I believe that a lot of Ray Kurzweil's optimism about when we will understand human consciousness and cognition well enough to replicate it on a computer comes from this assumption that there's just a few algorithms that the brain is running at all these different levels, so that once we've understood them at the lower level we can just extrapolate up. But Kurzweil has a prediction of within, I think, the next 20 to 25 years about understanding human cognition well enough to build it. If you had to make your own prediction, what would you expect to see? I agree with Kurzweil in the sense that a lot of what we are and a lot of our behavior and perception and experience of the world is built up from fairly simple algorithms. I mean, there's only 20,000 genes in the genome. It's unreasonable to think there's more than 20,000 circuits in the brain. So that means, when I was a graduate student, the sky was the limit. There were millions of circuits in the brain. We had no idea how many. 
Now we know there really can't be more than 20,000, and it's presumably much fewer than that, because some of those genes are needed for what shape your teeth are and how tall you are and things like that. You've got fully 1,300 genes in your body dedicated to your olfactory receptors and nothing else. So there's many fewer circuits than we originally thought, and it could be in the hundreds or maybe even less, and everything has to be built out of that. So that's true. But then taking that knowledge about how you could build a brain and saying that I'm gonna take your brain and I'm going to replicate it, I'm gonna find out all those connections and maintain the uniqueness of you and put that in a computer and simulate it, that's a completely different issue. You have to be able to detect all the connections in your brain that make you different from me, and that's not something we have the knowledge of how to do right now. So we should distinguish between building a human-level general intelligence in a computer and uploading, which is taking a current consciousness, one of you, and putting it onto a computer such that personal identity is maintained, for whatever definition you're using. I think there's a problem with that in itself, the personal identity problem. So let's... So you do have a problem? I think there's a problem there. There's a problem. Yeah, unless I've misunderstood this, here's how it goes. Let's say we now have the capacity to clone your body and upload your mind onto a computer, and you do the hard drive backup every week, like you're supposed to do with Time Machine on your Apple: it reminds you, don't forget to back up your hard drive. Okay, so you do this, you go on a trip, the plane goes down, and the word gets back to your spouse that you've died in this plane crash. 
No problem, your spouse calls the cloning company and they rebuild the body, and then they call the backup hard drive company and they upload that, and then you go home. And there you are. But there's been a mistake. The plane did not go down, the pilot pulled it out at the last minute, and so you go home and there's somebody else in bed with your spouse, and you're going, hey, wait a minute, that's not me. That's just a copy of me. So who's the real you? I don't see how it can possibly be you. You would not consciously be thinking, hey, I'm still alive, this is me, I'm still here in the computer. Either that, or there's more than one of you thinking those exact same thoughts. I don't see how, without a continuity of just you and your own brain continuously conscious, being dead and then waking up again in a computer can work, unless I'm just misreading this. That's the continuity problem. I don't know of any solution to that. I don't either. There's a panel talking about that. But just to put my two cents in on that issue, there are a few different ways that we can reproduce a brain with artificial intelligence or whatever, and the approaches I see are happening in parallel among researchers. One is sort of the top-down method of understanding the circuits and then reproducing them. I think we're farthest away from that, because the complexity is incredible, and even if the number of unit circuits isn't that large, when you combine them together you multiply the complexity, and we don't know what we don't know. So there's still enough going on there. We know a lot, but we know enough to know that what we don't know is still vast. So I don't know that we could predict when that's gonna happen. I think that's probably gonna happen last of all the ways to reproduce a neural circuit or an artificial brain. 
Another way to do that, rather than trying to understand everything top-down, is just to reproduce it virtually, in a computer: just map out the circuits, put them together, and see what happens. That is happening at a reasonable pace, and it's always hard to extrapolate. You can't extrapolate linearly; you get into problems there. But there's no question, I think, that computers are gonna be powerful enough in 30 or 40 years, and we're making a lot of progress in just modeling the circuits even if we don't understand them exactly. It's possible we'll be able to create a virtual or an artificial brain that we don't understand and that will function. It seems that when we do create a virtual circuit that mimics a circuit in a rat or a mouse or whatever, it functions. It does what it's supposed to do. It's an actual working circuit. So I think that when we actually make a virtual brain, it'll function and may even be conscious, which is really weird to think about. And then the third way is to grow it. So we don't necessarily need to know how to put all the pieces together. We just need to know the developmental algorithms that will make all the pieces assemble into the final product. Actually, I do disagree with Stephen a little bit about the number of circuits in the brain, because my understanding was always that, yes, there's not that many genes that ultimately control the development of the brain, but there could be lots more complexity that comes into the picture in the developmental process. So the genes are not a cookbook or a blueprint; rather, it's an algorithm for how the brain will develop or unfold, and you get increasing complexity in that process. So there actually may be more circuits than genes. That was always my understanding. You disagree with that? Yeah, there can't be more circuits than genes, but there can be many orders of magnitude of slightly different circuits in different people. Permutations, absolutely. 
So that definitely can be true, but I don't think there can be more circuits than genes, based on the problem. But those are the three things we're talking about, and they happen in parallel and they're feeding off each other too, and it's cool and unpredictable what's gonna happen. And what do you think about the argument that we have this proof of concept that it's possible to build a brain as intelligent as a human, because here we are, and it took evolution X number of years to produce the human brain, and we can iterate at a much faster rate than evolution, especially as computers get more and more powerful. So that should give us significant optimism about our ability to hit on the right strategy for building a human-level intelligent brain in the near future. What do you think of the logic of that? I'm tossing this out there to anyone. Yeah, absolutely. I think there's no magical limit at human-level intelligence; it's just our level of intelligence, so we're obsessed with it and it's what we know. But if we can get to human-level intelligence, there's no reason why we can't blow right on past human-level intelligence, and obviously it's gonna happen much faster than evolution, because technological evolution happens orders of magnitude more quickly. It's a different process. So again, I don't know of any argument or reason why that's not going to happen. We're going to develop superhuman intelligence. 
Once we get in the groove. So that's the other thing when you talk about extrapolating technology into the future, and I think Moore's Law is a great example of this, and this is another way that predictability is itself unpredictable: we hit upon an idea like the integrated circuit, and then that idea has this amazing potential, and we run with it for decades or whatever, and once we're in that groove we can predict how that line of technology is going to develop, and as long as Moore's Law holds we can pretty much know what's going to happen. But we can't predict when we're gonna run up against roadblocks, and sometimes you can see them coming because of physics, but there may be other things that come into play, and you can't predict when you're gonna get into that groove in the first place. In other words, there are game changers, and that's always the fly in the ointment of predicting the future. We can sort of extrapolate the grooves that we're in, what we know, but there's always these game changers, these black swans or whatever, that completely change the rules on us. And that's why, 50 years ago, all the people that were thinking about the future, about this year, this era that we're living in, nobody got it right. Nobody was close to the kind of things that we're doing today. All the things that they thought we would have, we don't, and all the things that are really cool and technical that we do have were completely not even on the radar of futurists from 50 or 60 years ago. Even people that spent a lot of time thinking about it. And I think we just have to assume that that's gonna be pretty much true going forward. Once you get beyond like the five to 10 years of playing out current technology, you don't know what game changers are gonna completely alter the equations, so you just don't know. So the very fact that we're talking about AI now means it's probably not gonna happen? 
Well, I think in broad brushstrokes, yeah, so in the really broad brushstrokes I think you could say what's likely and what's not likely. When you get down to applications, how people are gonna interact with technology, exactly how it's gonna play out in our civilization, that's very, very, I think, hard to predict. So when you say artificial intelligence, we don't know. It may be that we get to the point where it's a lot easier than we think, that 10 years from now somebody's got a model of a brain in the computer and it starts talking to them. Who knows? Or we might get to the point where we're looking back and saying, why haven't we had artificial intelligence for 50 years? What happened? What are we missing? Why didn't this happen? Why don't we have flying cars? That may be the question 100 years from now: why don't we have artificial intelligence? I don't think that's gonna happen, but again, you don't know. We don't know what roadblocks are gonna be there, because you don't know what you don't know. Another issue here is that we have to draw a distinction between developing artificial intelligence and replicating human intelligence artificially. So there may be algorithms that can be developed that are just as good or maybe even better than human intelligence that you could do in computers, but in terms of discovering how humans actually do it, that's a biological question that requires a completely different set of techniques to get at. And it's not just a matter of thinking of the algorithms and implementing them and seeing if they work, and then they kind of help you along and you move forward. There's lots of artificially intelligent circuits that are used in products today that have nothing to do with what the human brain does. 
And so the idea that we're gonna replicate ourselves and be able to download our brain into a machine that can then be us is something I really have no confidence that we're ever gonna be able to do, in our lifetimes certainly. And so in terms of developing artificial intelligence where you can actually have a human-like creature that is made completely artificially, that's a different issue. And that's just gonna depend on the discovery of what intelligence is, which, again, we're still not clear on; we still don't know most of the circuits in the brain and how they connect to each other. And all of that stuff has to happen at a biological level if we're gonna replicate humans, or at a computer science level if we're just gonna say humans are irrelevant and we just need to solve the problem. I agree. I don't think that the uploading, or downloading, which direction does it go? Do you upload your brain or download your brain? You have to do the work, right? I don't think that that's ever gonna really be a great idea. There's a continuity problem, which I have a huge problem with as well, and there's the problem of how is it gonna be my brain as opposed to just a brain? But you could imagine other things that we could do that might still kinda get us there. So for example, you could imagine an artificially intelligent brain that is connected to your brain, and they symbiotically exist. And then over 10 or 20 years, they essentially become one unified consciousness, to the point where the biological component becomes unimportant. And you basically are the artificial brain. So that kind of thing may solve the continuity problem, may eventually get you to the point where you are a computer brain without having to do the sort of instant upload or download. I'd be a little more optimistic if we could just make some incremental steps, like Alzheimer's and senility. I mean, we've made practically no progress on this. 
If neuroscience is doing so great and we're on the verge of such breakthroughs, how come we can't even solve Alzheimer's? I would disagree with that — I wouldn't say no progress. It's the kind of progress that is happening in the background and hasn't crossed the threshold to clinical application yet. But if you talk to people who are doing Alzheimer's research, we're learning a ton about what is going on in brains that are suffering from Alzheimer's disease, including completely new ways of thinking about the problem. Like: oh, it's really a protein-folding problem. That's a totally new idea that did not exist five years ago in Alzheimer's research. But translating that into a treatment or a cure for Alzheimer's — we don't know when that's going to happen, because, again, five years ago nobody was talking about protein folding. They thought they were onto the real cause of the problem. Now they're thinking, oh, maybe it's a layer deeper, and this is just the manifestation of a deeper problem — now we know what the real problem is. Of course, it may still be one layer deeper still, but we're still making progress. You don't know when you're gonna hit the treasure chest, right? You keep digging and digging and digging. You don't really know when you're gonna hit the answer, but you're still making progress; you're still digging. So I wouldn't characterize it as no progress. I'm sorry, I forgot — could you repeat that? No. I want that drug from that movie, the one you take that makes you hyper-intelligent and super fast and smart. Limitless. Yeah. So, Stephen, you talk about there being a lot of game changers. I'm Steve. Thanks. Game changers that make things hard to predict — and certainly game changers do make things hard to predict. But as far as I can tell, aren't all game changers new, unexpected capacities?
I mean, when we've run into unexpected limitations in the laws of physics in the past, they've been silly limitations so far away from practical relevance that it's not even funny. Well, I think yes and no. I think you're thinking of new ideas and new paradigms that do give us, as you say, new capacities, but there are also roadblocks. Let me give you an example. In the 1990s, if you talked to researchers, they were saying, yeah, you know, in the next 10 or 20 years we're gonna basically cure all genetic disease. We're gonna have retroviruses go in there and swap out the genes, and they're all gonna be cured. Twenty years later, we haven't cured a single one, because we ran into technological problems that we have not been able to work our way around. That made it difficult to predict. And I think there are limitations and technological problems that you can't predict until you actually try to do it. So I think you have to think in terms of new capacities and roadblocks as two types of game changer. Actually, I think this is just a difference in who we think of as the experts. You always have talking heads who treat current research as if it led immediately to fantastic future applications — whether it's people who bring up invisibility cloaks as their summary every time you hear about physics research on optics, or people who talk about how any given piece of research into the structure of the ribosome could cure cancer and the common cold. But if one tries to do one's futurism by listening to the hype cycle and predicting that every breakthrough hyped as 10 years out is going to happen in 10 years? Well, obviously not.
But if you try to talk to the experts in the field and get a sense of the deeper issues underneath the hype cycle, and then try to build something like an expert consensus among the top experts in the field, and then maybe you double the time horizon and halve the probability — I find that that works fairly well. So if people say, yeah, it's probably gonna happen, that means it might happen: translate "probably" into 60 percent, divide by two to get 30 percent, and take 20 years. I'm just trying to ad-lib this based on about 15 years of trying really hard to make sense of past predictions. That's a good rule of thumb, I like that. Double the time and halve the prediction. That's probably good, I agree. And we talk about the hype all the time on our show — the exact same examples you bring up. But with the retroviral thing, we were talking about people who were researching clinical applications. These were the experts in the field. They really thought this was the answer, and it just turned out there were technical problems. They actually ended up killing the people they were trying to treat. They then said, okay, well, we'll work our way around that — and they just couldn't figure out how to do it. So it's not just hype. I agree with your assessment of the hype, but there are other real roadblocks that are unexpected and that fool even the experts in the field who are not overhyping. Before we go into questions, I just wanna make sure we have time to touch on nanotechnology. I assume not all of you have expertise in all of these potential technologies, but Vassar, I know I've heard you speak before about the potential promise of the field of nanotechnology. What is your sense of the state of the field and its promise? All right, well, nanotechnology can mean two very different things.
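[As a brief editorial aside: the "double the time, halve the probability" rule of thumb just described can be sketched in a couple of lines. This is only a toy illustration of the heuristic as stated on the panel — the function name, and reading "probably" as roughly 60 percent, are taken from the example given above, not from any formal method.]

```python
def adjust_forecast(stated_probability, stated_years):
    """Apply the panel's rule of thumb to an expert forecast:
    halve the stated probability and double the time horizon."""
    return stated_probability / 2, stated_years * 2

# "It'll probably happen in 10 years" -- read "probably" as ~60%
prob, years = adjust_forecast(0.60, 10)
print(prob, years)  # 0.3, 20 -- i.e. ~30% within ~20 years
```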
There's the 1980s vision of Eric Drexler for molecular-scale machinery, and then there's the 1990s stuff funded by the National Nanotechnology Initiative, having to do with controlling the fine-grained structure of matter and enabling things like faster charging and discharging of batteries, more efficient power transmission and energy storage, and things like phased-array optics for displays. There are all sorts of things being developed through essentially understanding how properties emerge out of the low level — not molecular-scale properties and materials, but how the molecular-scale properties emerge out of sequences of foldamers, a term for chain molecules that fold up the way structural proteins do. But in addition to both of these visions, the Drexler vision and the professional vision, there are the misstatements and vague popularizations of Drexler's vision. And for his vision itself, you have his ideas from 1978, which are basically what we now call synthetic biology; his ideas from 1987, which are basically a somewhat more serious version of the cartoon version; and then his ideas from 1997, which are pretty much in line with what the expert professionals would think today, except pushing the envelope a little further in terms of how far out you're trying to look and think, and maybe focusing too much on industrial capacity when thinking about a world 30 years out, where industrial capacity is unlikely to be an important constraint. What do you think is the most exciting thing that it's plausible we will be able to do with nanotechnology, say in the next 60 years? Sixty years is pretty much forever — by then I expect we're talking about fine-grained structural control of matter. So let's talk maybe 10. And when we're talking 10 years, well, the old idea of growing diamond is kind of neat, and that's been moving along exponentially at a nice clip.
There might be a lot of useful applications for diamond in semiconductors, as structural materials, et cetera. We have better ultracapacitors for rapid charge and discharge; that might allow things like really good regenerative braking — simple modifications to old cars that produce way more electric power and efficiency. Nanotechnology applications going a little bit farther out — actually, let me interrupt you very briefly. For the last 10 minutes, we're gonna take questions. You can start assembling. I don't know how you've been doing it — I think George will take care of that. But we'll get that rolling. Sorry — go ahead and finish your answer, Vassar. Anyway, the really important applications are things like carbon nanotubes and self-assembling molecular chemistry solutions that get past the limits of current techniques for microprocessor fabrication. Because for the last 40 years, GDP growth, technological development, the basic logic of the economy have all been deeply dependent on these exponential improvements in microprocessor fabrication, and we're not going to be able to push current techniques more than 20 years out — that's really kind of pushing it. Hopefully we can develop better ways of building 3D structures by that time and keep that economic engine moving. Great, okay, we have about 10 minutes for questions. Take it away, George. There we go — hey, sorry about that. Okay, first question. Hi, so two of you mentioned that you had huge problems with the continuity problem, and I'm wondering why. Assuming the technology exists, why would the situation you described with the plane be different from somebody suddenly fainting, or being hit in the head, or maybe even just going to sleep and then waking up without having a conscious perception of continuity? Yeah, I get that question a lot.
Like, when you go to sleep, you lose continuity — but no, that's not true, because your brain is still functioning; there is continuity. You're just transitioning from one state to another state, but there is still neurological continuity. That's different from making a copy, or being destroyed and recreated, or whatever. So I don't think there's an analogy there. Even when you're in a deep coma, there's still stuff happening in your brain and there's still continuity; it's just a different state of consciousness you're in. The only way I think it could work would be — is this working? Yep. — if you replaced every neuron with a synthetic neuron built by nanobots or something like that, such that you never actually lost consciousness and the continuity continued, and at the end you were no longer an electric protein machine, you were silicon or whatever. That's the only way I think the continuity could work. I don't like that one either, because I'm worried that it's just a slow process of replacing you rather than a sudden one — but it's still replacement in the end. So I like the idea of the symbiote, where over time it just becomes you. I think that's probably the best solution I can think of. But I've heard that one a lot, you know, the slowly replacing it neuron by neuron. Maybe that would work. I'm still a little, you know, not happy with that one. For the record, I don't believe there's a continuity problem. I think that when you watch a movie and you're immersed in it and seeing this other story, or you're dreaming in your sleep, you have fundamentally changed what you're perceiving, and humans are just adapted to changing between one experience and another.
And, you know, some of you right now are daydreaming — maybe many of you — and then you come back and realize, oh, you're here at TAM, and you were just dreaming about your children's birthday party or whatever it was. There's fundamentally no continuity problem. If you made a computer simulation of what's in my brain right now, that computer would think it was Steve Macknik sitting here at TAM, having a great discussion about the future, even though it was running in China somewhere. That's what it would think. That's the continuity you would have, and it would have its memories of my entire life, et cetera. So if you made a clone of me and downloaded me into it, that would be that. I don't think that's necessarily possible, but that's continuity. So there could be two of you in my thought experiment. In my thought experiment, there would be two of you, then. There'd be two of me, one of which would be new. I'm sorry to interrupt, but as Steve said, we could talk about the continuity problem for an entire panel, and I wanna make sure we get a couple more questions. Who's next? So, a quick question: hypothetically, if the technology were there, would any of you be transported — beamed up, broken apart, put back together? I think I would — a technology I did not cover. I'm with Dr. McCoy on that: don't scramble my atoms and scatter them all over the universe. I think, like most people, I'll probably be a conformist and do what's socially acceptable. Yeah. The question is, do you have to go through TSA first? Yeah. I'd take the shuttle. You'd take the shuttle, yes. I would take the shuttle. Hello, I just wanted to ask — Steve, you mentioned one of the approaches would be just kind of putting something synthetic together and then seeing what happened.
I wondered if you guys could discuss maybe some ethical concerns about that, because we do have the worst-case scenario in science fiction, like Skynet. I mean, what if it did evolve to the point where it surpassed our own intelligence and then turned on us, kind of thing? The inevitable robot uprising. Yeah, yeah. Actually, Michael Vassar was on my show and we talked for like an hour about this, right? I mean, it's a huge problem: is it unavoidable, and how can we make it avoidable? I don't think we came to any firm conclusion. So the answer is to be nice to your robots. Actually, that won't work, but we can try it anyway if you find it fun. One solution might be that we will be the robots — we're not gonna make robots so much as become what the robots will be. That's Kurzweil's solution. That's okay, because we'll be evolving along with them, and that'll just be part of what we become, and there won't be this sharp line of distinction between robots and humans. So that may solve the problem, I don't know. It actually looks like a pretty hard problem. Someone's got to solve it, and no one seems to be trying very hard except for a very, very small number of people. Do you think we'll be able to get science-fiction-style artificial intelligence — those algorithms — without being able to understand how the biological algorithms work in the human brain? Yeah, as I said, there are two things that could get us there: one is to just duplicate it virtually without necessarily understanding it, and the other is to grow it in some way — which could be an evolutionary algorithm or a developmental algorithm, where we make the algorithm but the end result is something we don't fully understand. So if we use those methods, we very well may end up with an artificial intelligence we don't understand.
I agree. I don't think there's any reason to think that biological material is special, or that the algorithms humans actually run are necessarily the only algorithms, or even the best algorithms, you could use. I agree with Steve completely. One or two more? I'm wondering your thoughts on what some of the problems or dangers of this technology are and how we might deal with them, particularly relating to this idea of beating death or solving degeneration. Like, do we risk possibly stagnating because we aren't bringing in new minds? Idiocracy, right? Yeah, so what are the ethical problems that would be created by immortality? It'd be nice if we find out, you know. Let's just do it, and then we'll worry about those problems when we get there. We'll talk later. Yeah, we'll talk later. It's fun science fiction to speculate about the world of immortals, but I don't think we're gonna have to worry about that anytime soon. It's interesting, but I don't have any special insight for you, unfortunately. I mean, if the population were to double every 70 years, like it's doing right now, the earth would still have an awful lot of empty space in 300 years. But not 400. No, not 400. We'll do one more, one more. So I had kind of a more computer-side question. You said that we could not evolve current semiconductor techniques more than 20 years into the future, and I'm thinking it looks more like five years before optical techniques run out. So do you think we're going to finally hit the end of Moore's law, flatten out, and have to make a major leap to move forward? Well, I know that people have different standards for what counts as on track, but the standard roadmaps tend to go about 10 to 15 years out. And the track record has always been that they get extended, so it seems unlikely that they won't be extended at all. One could imagine them phasing out gradually.
So at the end of this 10-to-15-year period, they'd only be able to see clearly five to 10 years out, and then after that another two or three years out, and then it's pretty much done. But just looking at these things, they don't stop abruptly, because there are all sorts of small incremental improvements. They don't start abruptly and they don't stop abruptly. And the economic logic of it dictates that even if Moore's law were to slow down somewhat, this would simply lead to more capital being invested in computers, because computers would no longer be as rapidly perishable a stock. So that gives you another five to 10 years of buffer. So I think we can have a pretty good idea looking out about 20 years. Looking at about 30, it's getting a little bit sketchy. Looking at 40, it's really sketchy. Thanks to our panelists.
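[A closing editorial aside: the population remark in the Q&A above — doubling every 70 years leaving "an awful lot of empty space in 300 years, but not 400" — can be checked with back-of-the-envelope exponential growth. The 7-billion starting population (roughly correct for the early 2010s) and the ~149 million km² figure for Earth's land area are assumptions of this sketch, not numbers given on the panel.]

```python
def population_after(years, start=7e9, doubling_time=70):
    """Exponential growth: population doubles every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

LAND_M2 = 1.49e14  # approximate land surface of Earth, square metres

for horizon in (300, 400):
    people = population_after(horizon)
    print(f"{horizon} years: {people:.1e} people, "
          f"~{LAND_M2 / people:.0f} m^2 of land each")
```

Run under these assumptions, the sketch gives on the order of a hundred billion people (roughly a thousand square metres of land apiece) at 300 years, and several hundred billion (a few hundred square metres apiece) at 400 — crowded, but illustrating the panelist's point that the constraint bites somewhere between those horizons.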