If you had an algorithm that could give you near-perfect health, would you be willing to say yes to this protocol? Anything that happens in the world of science and wellness is instantaneously plugged into my algorithm. Blueprint demonstrated the mind is dead. Zeroth-principle thinking is discovering things that don't exist. How many zeroth-principle discoveries is artificial intelligence going to introduce to society? That's why it's going to remap society beyond anything we can imagine. Divorce ourselves from all human norms, all human customs, all human expectations. We have no idea what we want. We have no idea what we will want. We have no idea what will make us happy. It's ridiculous for us to presuppose we know anything about the future. What do we do then?

Dear God, that was intense. Okay, let's lighten it up in here. Okay, you can come out from under your coffee tables. Shake it off. Welcome back to Lifespan News. I'm your host, Emma Short. Recently, I attended the HealthSpan Summit in Los Angeles, where I took that video of Bryan Johnson speaking. In this video, I'm going to show you the highlights from that interview and give you my commentary along the way. If you'd like to watch the entire interview uncut, I believe it's going to be posted to the HealthSpan Summit's YouTube channel, which I will link down below when it's available. For those of you who don't know, Bryan is the founder and CEO of a company called Blueprint, which aims to optimize health and longevity through algorithmic precision. He was also the founder of Braintree, a payments infrastructure company he sold to PayPal for a cool 800 million, of which he took home 300 million. So this guy is someone who knows a thing or two about building high-level systems and disrupting incumbents. And his latest target for disruption is none other than this guy. Oh, no, no, no, don't feel sorry for him. He's putting on his puppy-dog face. He's already dead.
Don't let him manipulate you with those sad eyes. Anyway, Bryan Johnson is spending a lot of money and grabbing a lot of headlines by using himself as the guinea pig-in-chief at Blueprint. He's been called the most measured human, and this slide is straight out of the Blueprint slide deck from their website, with their stated goals to make rejuvenation a professional sport and to make rejuvenation the new standard of care, which I personally think is awesome. And I wish there were more people willing to spend this much money and willing to put their own health on the line, frankly, so that the rest of us can learn from their mistakes and successes. So I want to make it clear: I think what Bryan Johnson is doing is commendable, no matter how indulgent it may be. I think it's going to end up helping humanity. So good for you, Bryan. That being said, I kind of have a bone to pick with his worldview in this interview. You'll see he makes a lot of strong arguments about AI, but in my view, they're not in line with what people really want. His vision for the future is aspirationally askew, in my opinion. While conquering death and optimizing health, and using technology to achieve that, is an admirable goal, his prescriptions for how to achieve it are one minute naive and the next kind of chilling. As the video goes on, you'll see what I'm talking about. First up, here's Bryan Johnson candidly talking about Blueprint.

The most substantial revolution that's happening in the world right now is that algorithms are getting better at doing the things that we care about in every domain of our lives. They've just surpassed us at our own well-being. So in the 25th century, what if they said the thing that humans changed in the 21st century, the thing that changed the course of humanity, was that they figured out how to attach their rate of adaptation to the speed of technology and science? That single thing they changed.
And so that's what Blueprint did: Blueprint attached me to the speed of science and technology in health and wellness. Anything that happens in the world of science and wellness is instantaneously plugged into my algorithm. It's easier to do this protocol than it is to use my mind. I have demonstrated, in myself, that an algorithm is better at taking care of me than I can take care of myself. And so if we just think about this: when the first message was sent with a telegraph, the Pony Express was dead. The first time we had digital navigation, paper maps on the lap were dead. We know this is true. Blueprint demonstrated the mind is dead. Sorry, the mind is dead.

And the goal is to give complete control of your well-being over to an algorithm. You know who says stuff like that? Algorithms. He sounds like he's doing PR for Skynet. And I'm not even saying he's wrong. It just all sounds so extreme. And it is, I guess. And that's the point, right? The cure for aging is probably going to take some extreme measures. And the future of intelligent machines is here; it's upon us. So we're all going to have to figure out how much control we want to give over to our new algorithmic offspring.

So I hold these Blueprint brunches at my house, 12 people, and I pose this thought experiment: if you had an algorithm that could give you near-perfect health, mental and physical, the best health of your life, would you be willing to say yes to this protocol? Now, in exchange, you'd have to adopt what it's going to suggest you eat and what time you go to bed; you'd have to follow the protocol. Would you do it? And then we have a two-and-a-half-hour discussion about this. And it's wild. I mean, I've done, I don't know, a dozen of these or so, but people have multiple existential crises. It's amazing. Like, they dip into existential crisis, come back up for air, dip again, come back up for air.
It's just full-on people reckoning with reality, and it just breaks their brain in every possible way. Because people will say, like, if I can't choose what I'm going to eat, if I can't have that liberty, I don't know why I exist. To me, that's what is really happening right now. And that's the revolution afoot: there's a remapping, a remaking, of being human. And it's more revolutionary than we think.

I've got to be honest, this does not sound like the cure for aging that I always imagined. I figured it would be like a pill, or a series of injections, or just a jacuzzi filled with stem cells, or maybe even a brain transplant into a new cyborg body that lets me continue to do whatever I want, abuse my body, eat unhealthy things, and withstand all the stresses we put ourselves through. So we can keep living our awesome lives, not some sort of Black Mirror algorithm micromanaging my every move. But I guess the best way to think about Blueprint is that it's not the final form of anti-aging. It's not the final system for how to best live your life. It's kind of in this early-adopter phase, where the goal is to first not die, so that we can live long enough for the easy, pop-a-pill aging cure. That's cool. What's interesting, though, is that this doesn't seem to be the message Bryan's trying to get across. Listen to him answer the question: how do we stay human in a post-human world?

How do humans stay human in a post-human world? I would describe this with a concept called zeroth-principle thinking. So it was reasonable, 20 years ago, that somebody would go to school, identify a profession, get trained in that profession, and then expect to be in that profession their entire life. Totally reasonable. Now we can't predict two or three years out what our professions are going to be. Therefore, the future is moving from first-principles thinking, where we can model things out, to zeroth-principle thinking, where we can't.
The way to think about zeroth-principle thinking: talent hits the target no one else can hit; genius hits the target no one else can see. Most of the time in life, we're given targets we can see — become this thing or do that thing — and we play games of being ranked. Zeroth-principle thinking is discovering things that don't exist. And so if you want to look at a zeroth-principle-type event, look at when AlphaGo played Lee Sedol in Go. You take AI and introduce it to the game of Go. Human genius has played Go for thousands of years, so there's a lot of human genius, a lot of talent. AI enters the game, and within a few hours' time it breaks the game of Go. I would say it introduced three or four zeroth-principle moves in that series of matches. So how many zeroth-principle discoveries is artificial intelligence going to introduce to society when it really starts immersing itself everywhere? And that's why it's going to remap society beyond anything we can imagine. And to me, that's why Gen Zero would say: we are willing and open to divorce ourselves from all human norms, all human customs, all human expectations. We have no idea what we want. We have no idea what we will want. We have no idea what will make us happy. It's ridiculous for us to presuppose we know anything going into this future. And so then, to the question of what we do, the answer is that the most prized attribute we have is being adaptable to change. That's the only thing we care about.

This makes sense in a world where we are pets of a super-intelligent AI. So I'm not going to argue with the logic, I mean, if that's how you think it's going to go. But I've got to be honest: I don't think that's the answer most people are looking for. Hey, you know all those things you want in life, those things that you care about? Toss them out the window. Welcome to the future. I just think people want technology to adapt to them, not the other way around.
If we create some runaway technology that reshapes society in some exponential chain reaction, where we end up being second-class citizens at best, then I think we're doing the future wrong. Maybe I'm splitting hairs here, because obviously adaptability is a great quality and we're going to need it, but I don't think we should go down without a fight. This is just such a weird way to think. Like, if your favorite ice cream is chocolate, but the algorithm says vanilla is better for you, so now your favorite ice cream is vanilla? You're not going to be like, oh, I guess my favorite ice cream is vanilla. The goal would be to re-engineer chocolate ice cream to be healthier for you, not just to change your mind. Bryan Johnson's vision of the future is like being in a really abusive relationship. Like, his solution to the AGI alignment problem is to just embrace Stockholm syndrome. But not just that; he has another suggestion.

I think we have it backwards. We are freaking out about AI; I think we should be freaking out about humans. We had an opportunity before, when we developed nukes. We said, all right, we now have these capabilities of annihilating the human race. I think we should, with more vigor than we have ever approached anything, attempt to achieve non-violence as a species as fast as possible.

Sorry, it's just the easiest applause break ever. Oh yeah, you think the world would be better if we stopped killing each other? Someone give this guy another 300 million. I'm sorry, okay, let's just move on to the next clip.

There's a thousand questions to be asked. What about nation-states? What about neighborhood feuds? What about, right, all these things. Yes, agreed. We are the creators of AI, and AI is going to mirror us. When Kevin Roose wrote that piece in The New York Times about AI being scary, it was humanity seeing itself in the mirror and saying, oh my god, this is scary.
This is ugly. But it's not the thing that's ugly; it's the creator, because now we have this god-like capacity, right? It's in our own image. It was lost on us. Like, no one realized, no one discussed, that this is us we're seeing in the mirror. It wasn't AI; it was us.

Okay, AI mimicking humans is a problem, but it's a small problem compared to the AI alignment problem, which is a rogue AI developing its own agenda that doesn't keep humanity safe and in a position of control. This is something that is overlooked, to put it nicely, in the way Bryan discusses the future of AI. I'll leave it at that for now. Here's the rest.

Rule number one: don't die. So if we're going after don't-die, then we want to figure out how to achieve non-violence at every scale: within ourselves, between each other, with AI, and with the planet. We have to achieve this non-violence, this harmony, across everything, every agent of intelligence, trillions and trillions of agents of intelligence. We have to achieve this. We're not going to be able to control AI's development. We're not going to be able to know where it's going to go; we can't model it, we can't predict it. The only thing we can control is the level of compassion, harmony, and goal alignment we have. And so, to me, this is an all-out battle for conscious existence, an all-out battle for our existence, and our highest likelihood of success is trying to achieve this non-violent state.
Non-violence as a goal? Amazing. Give the man a Nobel Prize. But a super-intelligent AI might get it in its head that a non-violent human species is best achieved by putting us all in straitjackets. Contrast that with somebody like Elon Musk, who's focused on trying to build a truth-seeking AI, attempting to design AGI that adapts to our needs and goals and desires, and whose contingency plan is at least connecting our brains in some sort of super-intelligent hive mind through Neuralink, so that humanity's intention can still shape decision-making about our future. Crazy, still, but at least the goal is to keep humanity in the driver's seat. Then you have Bryan Johnson, whose vision is to give all control to the algorithm. It knows how to take better care of you than you do, so remove any preference or conception of wants and desires, because the algorithm is going to figure all that out for you, and the best chance we have of surviving this AI future is just to be docile and acquiesce and bow down and kowtow to the higher form of life. I think, between those two philosophies, you can see which one is, I would say, just more fun to think about. So let's strive for a version of the future that's fun, where the algorithms do what we want them to do, and where our goals are taken into consideration above what an algorithm thinks our goals should be. And if you think I've taken Bryan's words out of context, that maybe I've re-edited this to fit my narrative and he couldn't really mean any of this, I would just say: listen to his closing statements, unedited.

I think it's really possible that the mind is dead. I don't know if it matters what we think. I don't know if it matters what we want. We are accustomed to being alpha as a species; we superimpose ourselves on all things. We now have a form of intelligence that we've grossly underestimated, that is moving at speeds we can't even comprehend, and we're stuck on ourselves, and we have not reconciled with
what is really happening here. And I think it might be the most significant revolution in the history of the species, where, when the mind is dead, we need to remake ourselves in a way that adapts to circumstances so that we can survive.

Okay, I just want to end by saying I'm not up here saying Bryan Johnson is wrong. In fact, things could very well turn out exactly like he's describing. I'm simply saying I think it's the wrong target to be aiming at. I mean, yeah, we should all stop killing each other, of course, but you know, that goes without saying. I want to know what you think. Do you think I'm crazy? Am I being too hard on the guy? Do you agree? Put it all down there in the comments. And for more great content, make sure you subscribe to the channel and check out Lifespan.io for great articles and information on longevity research. Like and share this video. Thanks for watching Lifespan News. I'll see you in the next one.