Bryan Johnson: Don’t Die, Beating Entropy, AI Alignment & The Two-Species Future
Bryan:
[0:00] And that's what Don't Die is about. It's a new moral philosophy that says existence
Bryan:
[0:06] itself is the highest virtue. Not profit, not status, not power. Existence itself is the highest virtue.
David:
[0:18] We're here with Bryan Johnson. Bryan, why don't you want to die?
Bryan:
[0:23] I mean, I want to live for tomorrow. So, today's Wednesday, tomorrow's Thursday. I've got some cool things going on tomorrow and also Friday. And then this weekend, I've got some fun plans. So I've got a pretty stacked couple of days. This is the thing: when you ask somebody if they want to live forever, they're like, nah, I'll be bored, or I'll lose my loved ones, or something like that. But really we're all just living for tomorrow.
Bryan:
[0:49] Those are the same ideas. So just for tomorrow.
David:
[0:52] You believe that you're going to have fun things to do for all of time?
Bryan:
[0:56] Yeah, the thing is like somehow tomorrow always has something interesting going on.
David:
[1:02] So, okay. So Don't Die is sort of your cult. And we're very familiar with cults in the crypto space; we kind of say that word fondly, with love. Maybe you can explain this cult to us, this Don't Die moral philosophy that you have started to meme into existence, into the modern zeitgeist.
Bryan:
[1:24] It's built upon this idea. If you look back through Western thought and you try to categorize the big epochs, you could say Plato and Aristotle, Christianity, medieval, renaissance, enlightenment, modern day scientific era. We've had these big eras that span a few hundred years, and they're usually
Bryan:
[1:46] built upon just a few primary beliefs, you know, like the enlightenment. We are potentially at the end of what has been a couple hundred year run of how we understand ourselves in society.
Bryan:
[2:00] And when you look at the things that precipitated the change, it's usually where the system no longer serves the purpose. Basically, the system falls apart and is no longer able to serve the purposes it was created for. So, for example, capitalism was meant to solve scarcity, which it has, but it's led to compulsion. And democracy was meant to deliver freedom of choice, but addiction is now so prevalent, we're slaves to that which surrounds us. And so the bargain we've had with capitalism and democracy has turned upon itself, which systems do. And when that happens, and then you have a new introduction of a technology like AI, it creates this opening where new moral philosophies can come in and shake things up and say, who are we? Why do we exist? What do we want to do? And that's what Don't Die is about. It's a new moral philosophy that says existence itself is the highest virtue, not profit, not status, not power. Existence itself is the highest virtue. And then there's a bunch of branches to that, but it's trying to get at the single thing that matters as a species in this moment.
Ryan:
[3:16] So, Bryan, when you say existence is the highest virtue, it's like, whose existence? Are we talking about the individual? Like if I embrace the Don't Die philosophy,
Ryan:
[3:25] and am I saying that my existence is the highest virtue? Are we talking about humanity as a collective or is it some combination of both?
Bryan:
[3:34] Yeah, a combination of both. And so this is, it's not an effort to create immortality. It's an effort to say that the most sobering, I guess the most practical question anybody can ask on planet Earth right now in this part of the galaxy is: what does an intelligent species do when you give birth to superintelligence? When you give birth to something that has the breadth and depth and potential of AI. We don't know exactly what it is. We don't know how it's going to manifest, but it's meaningful and it's substantial. And as that manifests, we as a species need to say, what do we do? And right now, what we do is we try to make more profit. We try to achieve higher status. We try to achieve greater power. Those are the end objectives of our species. And we will pursue those objectives
Bryan:
[4:25] at any cost, even our own health. Think about how it manifests as an individual. If you're in Web3, and I know I have a lot of Web3 friends, a lot of them are not on the health train, right? They're waking up every couple hours to check trades. They're sleep deprived. They've just run themselves into the ground. And so that is a situation where the person is trading their life
Bryan:
[4:55] For the objective of trying to make money, right? Or something like that. And the same thing is true with corporations and their pursuits: you're willing to take environmental damage in order for this thing to happen. And so, I'm not saying that capitalism is bad. I'm not saying that democracy is bad. I'm saying that when you point them at these specific objectives, you make trade-offs. And whether we see it or not, we are making the trade-off to say: we value power, status, wealth more than our own existence. And that's the underlying philosophy of our society. That's the moral philosophy that guides us today.
Ryan:
[5:37] I think coming into this conversation, I would say there's maybe a difference. David and I just had a pregame on this conversation.
Ryan:
[5:43] And David, I think your comment to me was like, you're all in. Like you're with the transhumanist sort of movement. You're with the don't die idea, at least conceptually. Is that right?
David:
[5:53] Yeah. And my reasoning there is that, if I understand Bryan's position, it's that inevitably, with enough time, human society finds its way into... I mean, Bryan said he doesn't like the word longevity, but I'll just use longevity. It's like, we'll eventually get there. It's a scientific problem to solve. And especially with AI, that becomes a very solvable problem. So eventually, people not dying becomes the base case. And that is the default path of where humans go. And so I'm not saying it's better or worse. I think it is better, but I'm not saying that. It's just the default mode. And so that's kind of my position.
Bryan:
[6:33] Yeah.
Ryan:
[6:34] And I guess my position is, so I think both David and myself, we're techno-optimists. So we definitely believe in a brighter future with technology. I think for me, this is getting a bit further onto the transhumanist kind of bandwagon. And so with the Don't Die concept coming into this conversation, Bryan, I'm sort of the undecided voter. Like I'm not really sure. Something about it feels a little weird to me and I'm not sure what exactly it is. I'll talk maybe about the part that especially resonates with me. So part of crypto, part of Web3, is this whole movement toward bearer assets, being your own bank, going bankless, independence. It's really a movement toward freedom. Like, why bankless? For individual sovereignty, for freedom. Something that you said in, I think, one of your posts recently is, right now society is trying to kill us for profit. You said our society has made us unwell, metabolically, mentally, spiritually. We're addicted: social media, porn, nicotine, junk food, fast food, smartphones, streaming, energy drinks, and gambling, each perfectly engineered to keep us in their grip. All right. That is something that I think we all feel very deeply.
Ryan:
[7:53] And when you're living like this, you are not living with freedom. You can be as crypto wealthy as like Michael Saylor. But if you are addicted to a smartphone, like are you actually free? If you're checking prices every single minute and you're worried about what's going to happen next and you're doom scrolling on the timeline, like are you actually free? And so the vector that resonates the most with me personally is this freedom vector.
Ryan:
[8:23] But that seems kind of different than don't die because that's just basically quality of life. Me living a free life until kind of the end and it's time to kind of like, I guess, fade into the night and pass it on to the next generation. Can you talk about this? What is the difference between the freedom and the don't die type of movement? And why have you interwoven them?
Bryan:
[8:47] Yeah, I think what you're articulating is that freedom is part of our current moral framework. We hold freedom to be sacred in our society, individual choice. But that was not always a virtue of society. If you go back in time, freedom was not a foundational component of how societies understood themselves. And so if you look at our time and place, we say, what is the moral philosophy of 2025?
Bryan:
[9:16] If you really dig deep and identify it, it's freedom. And so what I'm suggesting is, if you fast forward in time to the year 2500, and you imagine them looking back on us with some kind of detached perspective, then from their vantage point, I would say they would look back at us and say, in the late 2020s, early 2030s, that's when humans figured out that existence was the highest virtue. Because when you give birth to superintelligence, when you have all this potential at your disposal, the immediate question is, what do you do? And the argument I'm making is that you focus on entropy. The moment you become superintelligent, your only enemy is the entropy of the universe.
Bryan:
[10:09] That's your battery life. And then you're simply trying to say, now that I understand the battery life, how can I prevent things from happening that would eliminate my life? Like you're trying to not die. And then once you secure that, you say, now, what can I do with my existence?
Bryan:
[10:26] The primitive that we all share, like every person shares one thing in common: nobody wants to die right now. That is the universal want. Every single human wants that. So it's not that I don't want to die in 20 years or 50 years or 100 years. It's just, I don't want to die right now. And so what I'm trying to do with this effort is put my finger on what is the single most sober insight that humans can make in 2025. And I'm suggesting that we just acknowledge we're at this special moment of giving birth to something magnificent, and we don't want to die right now. That's the ethos and the moral philosophy of the environment you want as you give birth to AI. What you don't want is to be at war. You don't want to be trying to dominate with these technologies. I don't know if that's the right environment in which to say, can we bring in a new species? Nor trying to maximize profit, or saying, how do I get Ryan to be so addicted to my things that he loses all his freedom in existence? So I'm trying to look at the macro of
Bryan:
[11:39] what we are as a species.
Ryan:
[11:41] But Bryan, entropy eventually does win, doesn't it? And so I guess maybe what you're saying is part of the moral virtue of Don't Die is that humanity's purpose is to fight against entropy for as long as possible.
Bryan:
[11:56] That's right.
Bryan:
[11:57] I mean, we do that, you know, naturally. Like when you get a car, you repair your car, you know, when a tire is flat, or you change your oil, or you replace a carburetor. We do that with our bodies. We do that kind of maintenance and repair. It's just, we don't have robust abilities to address it. But every time we acquire new powers to address entropy, we use them. We are really good applicators of technology. And so I'm saying that as AI gets better and better, it will drive a lot of innovation in terms of our ability to address our own entropy, what we call aging today. And so the contemplation is we will just naturally apply these technologies
Bryan:
[12:40] to the things we care about the most. I mean, when death is inevitable, of course, humans have been shipping product for thousands of years to help humans cope with death. And so there's like no...
Ryan:
[12:54] Shipping product to cope with death. That's an interesting way of thinking about it.
Bryan:
[12:58] Like you're going to die anyway. So people are like, you know what? We're going to reincarnate or there's heaven or like you can achieve immortality through professional accomplishment or you can have children. But people are selling products to help people cope with the fact that death is seemingly inevitable.
David:
[13:15] Bryan, have you ever read the book
David:
[13:16] The Denial of Death by Ernest Becker? Yeah. A side quest, a little bit of lore: I tried to email the Ernest Becker Foundation to get him on the podcast, not realizing he had died like 12 years ago.
David:
[13:28] The whole premise of that book is that humans can't wake up and contend with their mortality without the weight of that just crushing their spirit every single day. And so what do they do? They push that to the back of their brain and they start working for symbols. The symbol of Christianity, the symbol of the American flag, the symbol of the Ethereum logo, the Bitcoin logo, sports teams, fraternities, any sort of club. And this has been one of the ways that we've coped with death. And that has like arisen to some of the biggest like moral frameworks that we've ever developed.
David:
[14:03] That sort of energy, right? Like democracy, Western liberal values. We have all of these moral philosophies that some of that energy goes into in establishing. I think one of the reasons why Ryan is uncomfortable here and why this is such a big deal and why this project is a project of like the highest order, the don't die project, is because it starts to get pretty fundamental to physics. We're going and we're talking about reversing entropy. And to some degree, all life is, is the controlling of entropy. Like the atoms are organized in a particular way so this organism can persist for the next day and the next day and the next day. And so it almost seems like this moral framework, the don't die moral framework, is kind of going all the way into the basement of like life's source code and saying, hey, let's flip this bit.
David:
[14:56] And tinker with our moral philosophy. But we're all the way in the basement. There's nothing deeper than this, which is why I think this project can be scary
David:
[15:05] to a lot of people because it's such a big paradigm shift.
Bryan:
[15:09] You nailed it.
Bryan:
[15:10] Yeah, that's why I think it packs such a punch. Ashley Vance made this comment to me. He wrote the Elon Musk biography, and he wrote the first article about this project that went viral on Bloomberg. He said to me he's never seen so much energy behind any idea. It just hits at the core of our understanding of our existence. And it immediately provokes very strong reactions; people are not undecided on this topic. You know, they want to pull out, like, here are my five arguments on existence and why I think this is correct.
Ryan:
[15:46] Actually, I'm still undecided.
Ryan:
[15:48] Like, truthfully, I'm one of the people that's still undecided.
Ryan:
[15:51] Have there been any moral frameworks, in the thousands of years of recorded human history, that place existence as the highest virtue? Or is this all new? I get the point, Bryan, that practically, that's kind of what we've been doing. We've always been trying to extend our existence, delay aging. And I'm just like you, yes, I am planning to be here tomorrow and a week from now, and I have things to do, and I very much want that. But I can't think of a moral framework anywhere that just places existence as the central thing. Is there one, or is
Bryan:
[16:36] This the first time? We've been stepping toward it. If you read about historical societies, there was a lot of senseless violence and killing, and we don't really tolerate that in our society. We've got universal human rights. We have laws in place about what you can and cannot do in terms of harming someone else. The founding ideas of the United States are life, liberty, and the pursuit of happiness. So it has definitely followed a trend line where there's much more respect for the sanctity of life than there has been in previous civilizations. So it's on that trend line.
Ryan:
[17:15] Okay, it's on that trend line, but it's not ever been kind of the central thing.
Bryan:
[17:20] Yeah, I mean, actually, you think about it, Ryan, like really religion does
Bryan:
[17:24] kind of say that, right? They're like, this is not the final frontier. After we die, we go to this new... Like, humans don't want there to be an end. And therefore, we have all of these, again, products and/or stories to explain how it's not the end, how there is a continuation, and how your behavior directly maps to what happens to you in those contexts. So, in many ways, Don't Die is the oldest idea ever articulated. It basically has been packaged and delivered to humans in so many different ways. So this is just a new framing to say we actually have the technology to play Don't Die legit for the first time, without needing to extend ourselves to mythology about what happens after death.
Ryan:
[18:13] Why do you think some people don't feel comfortable with this? Like, I know this is the first time we've met, Bryan, so you don't know me yet. I'm very open-minded on this, but there's something that makes me feel like...
Bryan:
[18:25] Gives you the heebie-jeebies?
Ryan:
[18:26] Something about it. And it's some strange mix of, I guess, the feeling like, well, is this a greedy thing? It's like, I get Don't Die for humanity. I'm very much there. I don't want humanity to die. I want humanity to continue. I want my kids and my grandkids to be successful and prosperous thousands of years into the future. Like, that's fantastic. But for me individually to seek don't die, something about that feels selfish or copey, like I'm not making space for the next generation. It's kind of like the geriatrics in politics. It's kind of like us saying to the baby boomers, all right, you guys have had your time. You don't have to be 85 years old and still grasping at power in Congress. Make some room for those that are new.
Ryan:
[19:16] Maybe that's part of it. There's other parts of it too that I don't feel completely comfortable with. But this idea that don't die, at least at the individual level, is selfish. Where's that coming from, do you think?
Bryan:
[19:28] Yeah, my guess would be in the same way if.
Bryan:
[19:31] If I say, Ryan,
Bryan:
[19:33] I really like your shirt, right? Or Ryan, you did a great job in your performance today. And like I gave you a compliment. The socially appropriate thing would be for you to acknowledge my compliment and maybe offer something back to me as well, right? Like the concept of reciprocity.
Ryan:
[19:50] Yeah, thank you. And your shirt looks great too, Bryan.
Bryan:
[19:52] Thank you, Ryan. I appreciate that.
Ryan:
[19:53] You're doing a great job today.
Bryan:
[19:54] Thank you. I appreciate that. So reciprocity is part of our current social norms. You know, you want to lob things back and forth. You want to create equality in the conversation. And if you didn't say that, you might feel a similar deficit. If you just said, great, thanks, you might feel like that was uncomfortable, because I really wanted to volley something back and repay the kindness. That's right. So I'm imagining that running inside of us is our entire moral and ethical stack, which is, you know, if-then statements: if complimented, return the compliment; if presented with a situation where you could sacrifice for the benefit of others, then do so. And so I'm imagining that we all are basically playing out these scripts that we've been taught our entire lives, and they feel so natural to us that they feel true. But they're not true in the form of physics or math. They're just ephemeral constructs in our society that we are all acclimated to because that's all we've ever known. So I'm imagining that's what you feel intuitively. It's the natural tension of an idea that kind of bends upon itself and inverts everything that you've been trained to think your entire life.
Ryan:
[21:12] Yeah, it's a good human thing, I would say. Part of being a good human is looking out for other humans around me and not just focusing on myself. But maybe you can talk about how the Don't Die movement isn't just for individuals.
Bryan:
[21:27] No.
Ryan:
[21:28] Okay. It's for everybody. Like, I feel much better about it if I'm taking everyone else with me, if everyone is winning from Don't Die. If it's just me, and I'm the guy in Silicon Valley who attaches himself to a blood boy because I'm rich, you know, that kind of thing? That's what I get in my head when I think about this: it's sort of self-serving. It's only going to be for me at the expense of others. But maybe that's not what Don't Die is. Is this for everybody?
Ryan:
[22:00] And if so, how could it be for everybody?
Bryan:
[22:02] Yeah, it's entirely for everybody. And what it's observing is, so the principle is: don't die individually, don't kill each other, don't destroy the planet, and align AI with don't die. So the contemplation is that we are not individual actors. We're part of a collective on this planet, and when I behave a certain way, I'm influencing how you are behaving. Whether you want that to be true or not, we're all influenced by each other. And how I behave affects how my children behave; how we behave in our companies affects how we treat the planet. We're one integrated system. And so it's trying to acknowledge we are a whole and that everything we do affects the other. And it's also the case that, if we're thinking about how we build AI, I mean, the reason why I'm doing Don't Die is entirely because of AI. What we do with AI is literally the only question that matters at this point on planet Earth. There's nothing else that really matters. Of course, we all want to play our games, doing this or that, but in the context of AI, we need to get this thing right. And so it's trying to say, what is the philosophical framework
Bryan:
[23:23] That is relevant for AI.
Bryan:
[23:25] And what I'm suggesting is it might be a good idea if we ourselves are that philosophy. If we actually practice don't die as examples to AI, like we acknowledge existence is the highest virtue. We don't want to be killing each other. We don't want to be killing ourselves. A very practical example of this: we've said children can't smoke cigarettes because it's bad for them, but we feed them dye in the form of school lunch. You know, a piece of pizza, canned vegetables, a BPA-plastic-lined carton of milk. And so we're feeding our kids dye. That's not a good thing. So Don't Die is, like, clean up the food system so that our children are not consuming dye at school lunch. So it's very much a holistic, what-do-we-do-as-a-species kind of contemplation. And it's really the ultimate sacrifice: you're willing to do this for the success of the species, for all of intelligence.
David:
[24:19] There's that scene from Back to the Future where Marty McFly grabs the guitar and plays Johnny B. Goode. He plays some rock and roll. And then at the end of the scene, everyone's watching him and he's like, maybe you guys aren't ready for that yet. I guess you guys aren't ready for that. I think your answer, Bryan, is like, society's totally ready for this. And there has been a resurgence of health and fitness culture in the United States. People are aware of things like microplastics, of like Red 39 or whatever that dye is. There is a growing crescendo of, actually, I'm going to re-elevate health as a top priority in my life. And then I think not only is there some sort of cultural readiness, but I think you're also expressing there is a technological urgency with AI, because the relevancy of AI is like, this is actually not an option. This is actually how we get from wherever humanity is to the higher phase of existence that is somehow with AI. Can you talk about the happy case? Say we do align with AI, we have the Don't Die philosophy, AI absorbs that philosophy because we've been practicing it. What do we get from that? What's the end product? What are the spoils of this success?
Bryan:
[25:37] Yeah. I like this thought experiment of imagining that we're hanging out with Homo erectus, you know, about a million years ago. They've got an axe in their hand, and you pose the question to them through some rudimentary form of communication: tell me about the future of our species. Tell me what's exciting about what's coming. They would probably say, well, you're going to forage more and you're going to go to war, and they would express their mental models of reality.
Bryan:
[26:01] But they won't say, you're going to discover there's a microscopic world, there's atoms and particles. Or that in the air, though you can't see it, there are waves that communicate information. Or that one day you're going to hold a little white thing and it's going to address your infection. And so at that moment, Homo erectus does not have any mental models to say anything intelligent about the future. So the only thing it could say is, I don't know. I don't have the mental models. And so what I like in this case is, I, of course, could fill the air with a bunch of words in response to your question. But I think potentially the more interesting contemplation would be: are we Homo erectus relative to AI? Are our mental models of reality so primitive?
Bryan:
[26:47] Are our ideas just so rudimentary relative to what's coming? We literally have no idea. Which leaves us to say one intelligent thing: I don't want to die in this moment. And so what I'm saying is really the ultimate expression of intellectual humility. We may not know what's on the horizon. We may not know what's coming. And we may not even want to express any preferences about profit, status, power, or some magical whatever we're going to create. Maybe the most prudent thing for us to say as a species is: this is big. We don't know exactly what it is. We may be limited in our ability to understand it. Therefore, the most sober and practical thing would be to say, let's not die. So let's reframe our moral framework to say, what can we do in this moment? What is in our control?
David:
[27:33] I know you said that because we have this technological urgency of AI, we should not die. But does the argument for why we should adopt the Don't Die moral philosophy exist in a vacuum outside of AI? Or is AI actually a critical component of why we need to do this?
Bryan:
[27:50] Yeah, I mean, when death is inevitable, you can YOLO your way with any moral framework. And that's what society has done. Like you pick your political system, you pick your economic system, you do your thing. And we're now at this moment where this is possibly the first time in history where we may not die. If that's the case, it creates this opening to say, what do we do now? And so that's why this moment is so different. And it's unlike anything we've ever experienced before. And it's why, Ryan, I'm sure it causes you some consternation because it really does bump up against everything you've imagined of like what existence is about, what you are about as a person. You have ideals of like who you are, how you want to be seen, you want to be respected, you want to be part of the tribe, you want to feel good about yourself.
Ryan:
[28:38] You know, one thing, as you guys have been talking about this, that makes me more comfortable is a slight addition or addendum to don't die. For whatever reason, I guess with my current moral framework circa 2025, it feels okay to say, I don't want you to die, or I don't want my kids to die, or I don't want those around me to die. That feels less selfish than focusing on I don't die, at maybe all costs. So I guess when I think of don't die, if it's like you don't die, or we don't die together, that feels much better than a focus on I don't die.
Ryan:
[29:23] Is there something to that?
Ryan:
[29:25] Like, I guess if you shift the moral argument to it's you looking after everyone else, it somehow feels okay to me. Whereas if it's just focused on me alone and my needs and my need to continue life forever, regardless of who around me dies, that feels somehow morally not okay. At least with the way my brain works right now.
Bryan:
[29:51] Yeah. I mean, I would just add that it's definitely not a selfish framework. It's really not meant to be selfish. This is not a pursuit of individual immortality. If anything, it's more that we can't do this individually; it has to be a collective endeavor. No one person can pull this off. And so it's probably the biggest team sport. I mean, capitalism is a team sport, right? Everybody on the planet participates in it in some way. And don't die is probably an even more integrated team sport than capitalism is.
Ryan:
[30:32] Talking about don't die, there's been a subject. We've had Eliezer Yudkowsky on the podcast. He's written a new book. His existential concern, and he gives it a probability of about 99.9%, is basically that AI is going to be misaligned and kill all of us. So it's definitely not don't die. I mean, I think his moral framework is don't die, but he's saying AI will kill us all. Does don't die help us fuse with AI in some way that doesn't kill us all? And also related to this, in that fusion, if that's the direction, this is sort of the answer that AI accelerationists always give is just
Ryan:
[31:13] humanity and AI, they fuse together. How do we fuse together without losing our humanity, and the thing that, I don't know, is that spark of light that makes life worth living? Like, I don't want to be a robot. I'm struggling with this whole transhumanist idea in general, and I'm not sure that I want that future. Like, I'm not sure that I shouldn't actually resist that future.
Bryan:
[31:41] So first, on Eliezer: he's a friend of mine. I really appreciate him. You know, he's earnest. He is genuinely trying to be useful to the world. Whether he's right or wrong, TBD. I don't think anyone knows. But I really do appreciate his earnest attempt at trying to keep the flicker of intelligence on this Earth, you know, to keep humans alive. It's just very hard, I mean, it's very hard to get your head around this idea, so I appreciate his earnestness. Number two is, you know, if you go back to the example of Homo erectus. And they're saying, look.
Bryan:
[32:21] If I can't do what
Bryan:
[32:22] I'm doing now, if I can't be Homo erectus, if you're telling me I'm going to evolve to be a Homo sapien, and Homo sapien is this, I don't know if I want to go forward. And so, you know, we're an evolutionary species. We are today something that is unimaginable to previous versions of our ancestors. And so there's a possibility that the move here is to suspend all of our beliefs
Bryan:
[32:51] On who we
Bryan:
[32:52] Think we are, on what we think we want, on what we aspire to be. I'm personally there. Like I'm wide open on absolutely anything. And so like.
Bryan:
[33:04] The way to really,
Bryan:
[33:05] If you want to give yourself a real workout on this question, here's a thought experiment for you. Imagine that your existence is nowhere to go, nothing to do, no one to become. That's your existence.
Ryan:
[33:19] I mean, that's not really, that's not existence, is it? I mean, that doesn't feel like existence.
Bryan:
[33:24] Exactly.
David:
[33:25] Feels like jail.
Bryan:
[33:26] Exactly. This is why the thought experiment is so good: those three things offend, in the most potent way possible, all of our sensibilities about existence. You can't find anything more offensive to our consciousness than those three ideas. And so the question is, are you open-minded enough to say, I'll give it a shot. Give me a month and let me see how I feel about it. We have to understand our knee-jerk reactions: when things don't square with how we understand the world, we want to immediately bat them away, because it's so uncomfortable. These topics are very uncomfortable. When you talk about the future, everybody's got an answer. This is going to happen, this is going to happen. But nobody knows. Everyone's just spinning up words because they're trying to fill the dead air, because they don't know.
Ryan:
[34:19] Mm-hmm.
David:
[34:21] Bryan, when we as a species, as humanity, gain control over aging, that's like a piece of technology we've unlocked on the tech tree. We just control aging, and we achieve what's known as longevity escape velocity, where science adds more time to our lifespan than the time that passes while adding it. And so, theoretically, we have longevity. I would imagine that comes with a bunch of other unlocks too. It doesn't really stop at longevity. At that point, AI will be much more powerful. Gene editing, as I understand, you aren't really into the world of gene editing quite yet, but I think you're curious about it, and it potentially contains possibilities for what we can do here. It seems like there's a little bit of a Pandora's box there. We open up longevity, and out comes a bunch of consequences, for better or for worse,
David:
[35:21] that we then therefore have to contend with. And like the big, the big broad strokes of it all is like humans kind of get the ability to upgrade ourselves. Like once, once we achieve longevity, we can kind of figure out the science to kind of upgrade ourselves in any particular way. Does this, I'm sure this is scary to Ryan. Does this excite you? Like how do you feel about this? What about this part of this story arc excites you? What do you want to do with those powers?
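The "longevity escape velocity" idea David describes is, at bottom, simple arithmetic, and a toy sketch makes the threshold visible. Everything below is illustrative: the function name and all figures are made up for the example, not taken from the conversation or any real study.

```python
# Toy model of "longevity escape velocity" (LEV): each calendar year that
# passes, research adds `yearly_gain` years of remaining life expectancy.
# Below a gain of 1 year per year, the runway shrinks; at or above it,
# remaining lifespan never runs out. All numbers are hypothetical.

def remaining_years(initial_remaining: float, yearly_gain: float, years: int) -> float:
    """Remaining life expectancy after `years` calendar years."""
    remaining = initial_remaining
    for _ in range(years):
        remaining -= 1.0          # one year of life is spent
        remaining += yearly_gain  # science adds this much back
    return remaining

print(remaining_years(40, 0.5, 10))  # below LEV: runway shrinks
print(remaining_years(40, 1.2, 10))  # at or above LEV: runway grows
```

The crossover at exactly one year of gained expectancy per year lived is the whole definition: below it you are still mortal on a schedule, above it the schedule recedes faster than you approach it.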
Bryan:
[35:48] Yeah, if you think in parallels, humans figured out how to arrange atoms in the physical world. And we built homes and skyscrapers and bridges and boats and airplanes. We've built all sorts of objects with these atoms.
Bryan:
[36:06] Then we had a new programmable sandbox of software, where now you have zeros and ones, and what can you build with that? We've had that sandbox for a couple of decades now, and you can look at what we've done in the world of software. Our biological systems are the next sandbox. You've got this genetic code, you've got this amazing ability to alter it, and humans will do what we've always done: once you acquire the tools, you start playing with that sandbox, and we're going to have all kinds of creative flourishing. We just don't even know where we can go with it. And yes, it's happening now. We see it happening in gene editing, in all sorts of applications. So it's the next sandbox. That's what I'm saying: we've done this with the physical world, we've done this with the digital world, and next is the biological world. What's squarely interesting right now is that once that's on the table, we have the ability to alter not just the physical form of the body, but also our conscious existence. You start playing with the things that give rise to us. So yeah,
Bryan:
[37:12] I mean, it's the next frontier. And this is why I'm saying that it's healthy, even though it's uncomfortable, it's healthy to suspend belief and just say like.
Bryan:
[37:22] I can riff and imagine a few things, but it's also really healthy for me to say, this is probably past my ability to imagine, it probably exceeds my capacity to imagine as a human. That's a nice, healthy balance, where not knowing the unknown is really good.
David:
[37:37] I think one of the concerns that people have, that I have, is that previously, with the progression from Homo habilis to Homo erectus to Homo sapiens, it all happened in sequential order: one ended and then the next started in some fuzzy way. Now we're in the era of technological accelerationism, where things are happening really fast. Five years ago was 2020, and now we have AI and robots and self-driving cars, and we have this guy, Bryan Johnson, who's trying to achieve immortality. Shit's getting weird in 2025, and it only seems to be getting faster.
David:
[38:21] And I kind of now see, I know you're trying to be very humble about how we can't predict the future, but I can kind of see this future where we have a collection of people who achieve longevity escape velocity. They get chips in their brains via the brain-computer interfaces that multiple startups are building. These are real startups. And we start to edit our DNA, because we have that power and that access. All of a sudden, we have these hyper-thinking, AI-interwoven, immortal humans walking around next to normal humans. And it seems we split: from Homo sapiens, which is what everyone is today, into what Yuval Noah Harari calls something like Homo deus. And we have the Homo deus humans and the Homo sapiens at the same time. Crypto Twitter might call this the permanent underclass, but we have two tiers of humans, and one starts to look a little bit like deities on top of Mount Olympus, and the rest are us. And that is a little bit concerning to me. Have you thought about this?
Bryan:
[39:31] Yeah.
Bryan:
I mean, what you're articulating is basically that you took the foundational principles of today's society, which say power, wealth, and status are the ultimate individual pursuit, and you applied them to these tools. I did. So it's very natural pattern matching, and I agree you can pattern match and make those observations. Totally legit. What I'm saying is, I don't know if it's wise to take our power-status-wealth principles and carry them forward. Not that they're bad by themselves; it's that when they're the ultimate prize, and we're willing to pay any price, including my life or your life, it may not be the right environment for that. And that's why I'm saying that as we acquire all of these advanced technologies, to create this magical world but also to destroy ourselves and create dystopian outcomes, it might be a really fresh opportunity to ask: what is a conducive moral framework where intelligence thrives together?
Bryan:
[40:42] And so I'm really trying to get at the core of this problem of how to avoid those kinds of situations. But we still want the powers of progress. We want to push things forward, better ourselves, cure disease. So that's the dance I'm trying to create: not to say this with certainty, but to say, if I can say anything intelligent at all, I go back to this basic instinct, which is that no human wants to die. If you look at planet Earth, at all the biology on planet Earth, even at humans, that's the basic instinct of biology: it doesn't want to die. Not right now.
David:
[41:16] So are you saying that I did the hunter-gatherer thing of, like, yo, when we improve and get better, we're just going to be able to forage so much more? And that I'm actually not really able to contend with the future, I can't even do that, so what am I doing? I'm doing the one thing that will buy me some time, which is not dying.
Bryan:
[41:36] Exactly right. Yeah. What you're saying is very sensible and practical: you take the current technologies, mix them up in a stew of current philosophical and ethical frameworks, and you have the output. Totally understandable. And what I'm saying is, I don't know if that's the recipe we want to cook.
Ryan:
[41:52] So, guys, it's like you were talking about nature, and how it's sort of in the biology of all the natural world, including us, to not want to die, okay? But it also seems like a biological, almost natural, principle that the old does die and makes way for the new. This is kind of a question for this Don't Die world. Let's say we achieve Don't Die, okay? Well, this goes back to baby boomers in Congress grasping at power. This goes back to the idea that science proceeds one funeral at a time. This goes back to the old having to make way for the new; otherwise you have a stagnant society. Imagine if we were locked in to the people of the 1600s and their moral framework and their idea set, and they just kept living. And they didn't innovate. They didn't make way for something new.
Ryan:
There were no fresh ideas.
Bryan:
[42:50] There was no creative destruction.
David:
[42:52] Yeah.
Ryan:
[42:52] And so, like, could this be a path towards stagnation for us? Like, we are the don't die generation, and then that was it.
David:
[43:01] Things were frozen in time.
Ryan:
[43:03] We never improved. We just, like, figured it out, and that was it. Is that a worry you have? Or how do you address that? How do you address the idea that, what if Vladimir Putin didn't die, for instance, and gave no way for Russian society to evolve?
Bryan:
[43:21] Yeah. First, this is just a societal engineering problem. This is why we have term limits: you want to cap someone's power to a certain duration of time. If you simply apply that to this technology, the same situation, you can solve it. It's really one of the easier problems to solve as a species. And if you take a given ruler who doesn't want to be voted out of office, societies have a way of revolting and overthrowing rulers. That's always been the case. And if someone's old and stuck in their ways, maybe a rejuvenation technology will help. There's actually technology today where you can take an adult cell and turn it into a pluripotent stem cell. We can do this today. Now, we don't have it as a therapy, because it has off-target effects and it can lead to cancer, but that technology will be improved over time. So maybe the solution is you take an old person, rejuvenate them, make them a young person, and they're fresh again, back at the same open-mindedness. It doesn't mean they're still in office; they could still have term limits. But these kinds of ideas, it's just a societal engineering problem. And on the scale of problems, very easy.
Ryan:
[44:34] Really? On the scale of problems, that seems like the hardest one, actually, because societal engineering problems are the deep-seated coordination problems that we struggle with.
David:
[44:44] And those are the ones that cause wars.
Ryan:
[44:46] Right. And so power, status, wealth, the things if you've been around for a couple hundred years, I'd imagine you'd be able to accrue a lot of those things in your, you know, hundreds of year lifespan versus somebody who's new. And wouldn't you power status wealth your way towards consolidation of those things? And then how is the new entrant supposed to disrupt you when you've accrued all of these things? Imagine Warren Buffett across thousands of years compounding his interest. He's already one of the richest guys in the world. I mean, that seems like that would consolidate into a setup that would be very hard to coordinate against and actually disrupt. That's sort of the stasis argument, I think.
Bryan:
[45:32] Yeah. For example, one thing that's happened in the US is there's a death tax: when you pass, a meaningful portion of your estate is taxed by the government. That's some attempt at trying to level the playing field, at stopping generational wealth from getting out of control. Now, you can argue whether it's been effective or not, or too much or too little, but still, society has tried to acknowledge that as a problem. And in our current system, there's a pretty substantial gap in wealth, and I think everyone acknowledges that that kind of disparity is probably not a good idea. So I'm guessing there's going to be a correction on this. You can imagine where, in this situation, maybe someone's wealth has a tax every some block of time to prevent that kind of accumulation. But I think that society generally, even though it's been rocky and sometimes it's resulted in wars, does correct itself when it goes to extremes. Now, this is also assuming that, in the situation we're contemplating, humans are still the primary power actors, that humans are still the ones running these systems. If AI steps in, maybe society is run much more by autonomous systems, maybe humans have less power to control these things, maybe it's more indirect. So all these things for me are on the table. I'm not totally confident that what exists today in terms of power will be the same things that exist
Bryan:
[47:03] in power in five years from now. And so they may, but I guess what I'm saying is the first problem is how do we not die as a species?
Bryan:
[47:11] If you're not dead, then you have the luxury of solving these other problems: how do you prevent runaway power, runaway wealth accumulation, et cetera. So I'd rather take on that problem than be dead.
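Ryan's compounding worry and Bryan's periodic-levy response reduce to simple compound-interest arithmetic, which a short sketch makes concrete. Every rate, horizon, and tax figure below is hypothetical, chosen only to show the shape of the effect; none of them are real policy numbers or figures from the conversation.

```python
# Illustrative arithmetic behind the "Warren Buffett across centuries"
# worry, plus the kind of periodic wealth levy Bryan floats ("a tax every
# some block of time"). All parameters here are made-up examples.

def compound(principal: float, annual_rate: float, years: int,
             periodic_tax: float = 0.0, tax_every: int = 50) -> float:
    """Grow `principal` at `annual_rate` per year, skimming off a
    `periodic_tax` fraction of wealth every `tax_every` years."""
    wealth = principal
    for year in range(1, years + 1):
        wealth *= 1.0 + annual_rate
        if periodic_tax and year % tax_every == 0:
            wealth *= 1.0 - periodic_tax
    return wealth

# 5% real growth for 500 years multiplies wealth by roughly 4e10;
# a 30% levy every 50 years cuts that by (0.7)**10, i.e. about 97%.
print(compound(1.0, 0.05, 500))
print(compound(1.0, 0.05, 500, periodic_tax=0.3))
```

The point of the toy model is that exponential growth over a multi-century lifespan dwarfs anything a single human lifetime produces today, while even a modest levy applied every few decades compounds against it just as relentlessly.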
Ryan:
[47:23] Bryan, I'm curious if you could, just for me, map out what it would look like for you to live, say, 500 years, right?
Bryan:
[47:34] So...
Ryan:
[47:35] You're on the Bankless podcast; we very much are tech accelerationists, so we think that a singularity is approaching, is near. But practically, and I know some of the tech might be sci-fi, if you were to think, based on what you know right now, about a 500-year lifespan for yourself, what would that trajectory actually look like across the decades and across the centuries?
Bryan:
[48:01] Yeah.
Bryan:
[48:01] I mean, the only thing I can map this to that has familiarity to me is Homo erectus. Like saying, hey, Homo erectus, let me explain to you what your life is going to be like as you go through your various stages. First, you're going to live like four times longer than your maybe expected 20 to 30 years of life. And then you're going to go through the following stages of life. And every stage you articulate would be entirely foreign to them. There may be some things, like mating, you're going to choose a mate, you're going to have offspring, that would be a commonality. But otherwise, I think it'd be pretty novel for them to contemplate. Imagine trying to explain Web3: what is Web3? You'd have to walk down like 25 layers of concepts before they'd understand what Web3 is.
Ryan:
[48:48] Bryan, broad strokes: are you talking about these lifespans being purely biological, or are you talking about taking your consciousness, putting it in silicon somewhere, and fusing with the machines? Like, what's on the table?
Bryan:
[49:02] Yeah, I guess all I'm saying is, I answer this question the way I do because the majority of people who hang out in this space of trying to speak about the future are overflowing with ideas on what existence will become. What I find absent is anybody saying, I have no idea. Literally no clue. And anything I say is probably stupid, to that kind of extreme. I'm trying to provide a relative contrast: that not knowing is equally as intelligent as speculation or pattern matching. It reminds me of that humorous comic where a person is under a street lamp, and someone walks by and asks, what's going on? They say, I'm looking for my keys. Where did you lose them? Over there. Then why are you looking here? Because the light's here. We have this proclivity to look in the light and not in the dark. It's just a natural bias we have. And so I try to play the role of reminding ourselves that most times throughout history, we've had no idea.
Bryan:
[50:19] And a lot of times we've been stunningly surprised with what's come up. And I'm trying to balance out the contemplations with that.
David:
[50:27] There was a reply to my tweet saying, hey, we're interviewing Bryan Johnson on the podcast today. And the reply was, I find his ideas compelling, but I wouldn't really want to do it unless my loved ones also did it with me. And my reply was, yeah, I kind of think that's right. This is kind of like a society saying to itself, I'll do it if you do it. I'll go attempt to live forever if you also go attempt to live forever. And so the way I think this works is, most people say no, most people say no, most people say no, but all of a sudden it's in people's heads. The meme has been incepted enough. And then just a few people say yes, and then all of society says yes, and we're like, okay, we're all doing it. How far along on that arc do you think we are?
Bryan:
[51:16] I think, here's my, I'll only say one thing about the future.
David:
[51:21] Yeah, you are very hard to get to say anything about the future.
Bryan:
[51:27] Okay, so here's what I think could potentially happen. AI progresses and we have some kind of moment with AI. Maybe it's a Chernobyl moment. Maybe it's something more benign, but it's a moment. And it's a moment where the world is like oh my god.
Bryan:
[51:47] This is real. I had imagined AI helping me write emails and code faster and looking at my medical images better, I had all these imaginations, but this is a really significant situation. Maybe there's one of these occurrences, maybe there are multiple, but it will happen, and it will create this sobriety where we say, this is literally the only thing that matters. Nothing else matters, because it's such a big deal. And human society typically can only hold two ideas in its mind: right or wrong. If you think back to COVID, as nuanced and complicated a situation as that was, the world bifurcated into two opinions: masks, no masks; vaccine, no vaccine; shutdown, no shutdown. It just forks, and we think in those two forks, good or bad. And in that moment, it will create this bifurcation with AI of die versus don't die, where there will be people who say, it's not worth doing blank for the pursuit of power, wealth, status. I see the situation here for what it is. I don't want to die. That's my guess. Don't Die right now really hurts to think about, because it challenges everything you understand about existence. But do you remember, in the first month of COVID, the entire world shut down on a dime?
Bryan:
[53:13] Everybody's plans stopped in a week or a month. It was unbelievable. That's how badly humanity doesn't want to die. Now, of course, after we understood it wasn't the plague or something, we started going about human things, fighting about shutdowns and vaccines, all that kind of stuff. But never underestimate, never short, how badly someone doesn't want to die. And that's what I think is going to happen. I think people are going to come around very, very hard to Don't Die, even though right now it's kind of a trickle, where people are starting to get acclimated to it. What is this thing? How should I feel about it? Is it selfish? Is it not? Is it good? Is it bad? It's just the natural dance humans do to try to digest a new idea.
Ryan:
[53:57] I think, for me personally, after understanding more of the philosophy behind it, I feel better about it, right? So there has been a move in that direction for me. Maybe we can talk about the practical implementation of Don't Die. We've been talking a lot about the future and the philosophy, and whether it's good or bad and why.
Ryan:
[54:19] You've got a whole practical program around this called Blueprint. And Bryan, I believe you raised $60 million to bring Blueprint to the masses. So for myself, who's totally with you that society has made us unwell, that we have all of these addictions and all of these problems, and that it's trying to kill us for profit: how could Blueprint help someone like me? Because I don't want to dedicate my life the way you have. I've got other things. You're doing the whole all-in Bryan Johnson Don't Die thing, and that's definitely not me. But I share some of the goals that you have. Would Blueprint be for me? What's the program designed for? What does it do?
Bryan:
Yeah, exactly for you. So the origin of this is, I basically played out, like a script, early-2000s entrepreneurship culture. I started as an entrepreneur. I didn't sleep much, I didn't exercise, I didn't eat well. I was ragged. In those environments, you brag that you only got three or four hours of sleep, because you want to be seen as high status among your group: you work very hard, you don't really need sleep, you're beyond that. Totally get it. So I got depressed. I got hopelessly depressed. Now, fortunately, I sold my company and made a bunch of money, and I realized that I was trapped in that cultural system that says, kill yourself, because the money's worth it.
Bryan:
[55:55] Right? It's a kind of weird trade. And then I was owned by all the addictions of society: fast food, junk food, et cetera. So I had to dig myself out of that. I hired a team of quite a few doctors, started spending a lot of money, and basically tried to build the world's most formidable evidence-based protocol. If you just look at the scientific evidence, it takes you to this process where I try to measure the biological age of every organ of my body. I was 42 at the time, but my brain is a certain age, my left ear is age 64, my heart is 37, my cardiovascular ability, you have different ages across your entire body. Then you look at the scientific evidence and ask: how do you slow down that aging, or reverse it? I did that for several years, trying to achieve exceptional biomarkers. In short, what I was trying to build is my autonomous self, which means instead of me going out and trying to forage food every day and figure out how to do the thing, I wanted to measure my body extensively, give the data to AI, run it through computational processes, compare it with the evidence, and get back protocols. And I'm just going to follow the protocol. So what Blueprint is trying to do is say: if a person says, I want to be healthy,
Bryan:
[57:07] But I don't want to spend my time researching if seed oils are good for me or bad for me.
Bryan:
[57:12] I don't want to spend my time seeing how much protein I have to consume. I don't want to spend my time doing blank. We're just going to say: do this. And it's going to be based on your biomarkers, on your genetics, on your situation. We're going to try to basically automate the entire process. Now, we're not there yet, but the idea is that people don't want to spend the time, and they don't want to be confused and have to chase this thing down, because it's an endless endeavor, as we all know. So yes, you would be our perfect customer. What we're trying to solve is: how do we give you exceptional well-being with the least amount of effort possible?
Ryan:
[57:48] Are the limits of that well-being health, nutrition, exercise, those sorts of things? I mean, there are so many well-being, and I believe even longevity, studies that link lifespan to things like the quality of the relationships in your life. Would you go there as well? Yeah, talk about that.
David:
[58:07] Can I buy a friend? Can I buy a girlfriend?
Bryan:
So yes, we are working on that. We're trying to outperform every health system in the world, every concierge system in the world, because their model is typically to sell you stuff, to sell you therapies. But the highest-value therapies are having a good relationship, having friends, going to bed on time, and not eating fast food. So it's really about a lot of behavioral things, and there are a lot of ways we're working on helping you with those hooks so that you start incorporating good habits. This is why Don't Die is not a selfish endeavor. You are your friends, you are your family, you are your co-workers; you all naturally adopt each other's practices. So it's basically saying this is a team sport. I learned this principle when I was raising my son and teaching him how to swim. I was doing some research as a young father: how do I teach my child how to swim? I saw there were three ways. One, you push him in the pool and say, good luck. Two, you jump in the pool and say, jump, come to me. Or three, you show him a video of his friend swimming. The third one is the one that works best, right? When your friends do something, you want to do it as well. And so, yes, this is entirely about how we actually adopt positive lifestyles, and then therapies where appropriate.
Ryan:
[59:26] Bryan, this was great. I really appreciate it. Because we're normally a crypto podcast, I do have to ask this, though: if you were building Venmo today, sir, would you use cryptocurrency?
Bryan:
[59:36] Yeah, so I built Braintree. I started Braintree in 2007 and sold it in 2013. I think we were the first company to integrate Coinbase. Wow. So yeah, I was bullish on crypto. And had I not sold Braintree, I'm guessing I would have been all in on Web3 over the past 12 years, on all the things that have developed. I've watched how the industry has matured, and I have to say: what an amazing space to build in. There are so many cool things being built, and I really admire people like Brian Armstrong; he was in it at the time, and he's just plugged away. So yeah, I'm very bullish on Web3. In fact, I've been poking at crypto for the past year, trying to figure out how to find the marriage between Don't Die and Web3. I don't want to do a token. I don't want to do something where people are like, this is a money grab. It's: how do you build sophisticated infrastructure for Don't Die?
Bryan:
[1:00:45] So I've been poking at it and I haven't found it yet, but it's definitely on my radar. I really love what the industry is doing and I'm excited about the ways it can work together.
David:
[1:00:55] Well, we appreciate you being patient with finding the right solution in crypto. We know bad things can happen when people are rushed into crypto. Bryan, this has been fantastic. I've learned a lot, and I'm very inspired. Before I got into crypto, I was on my own career pursuit, trying to figure out how to integrate physical therapy, mental health, and nutrition into one private practice. And I kind of think if you add a bunch of science and research, you end up with what you're doing. So it's very refreshing to come back around full circle and touch base with who's really pushing the frontier in that world. Thank you for doing what you're doing, and I wish you the best of luck.
Bryan:
[1:01:33] Thank you, David. I have to ask, where are you both at after this conversation?
David:
[1:01:39] Ryan, you want to go first?
Ryan:
[1:01:40] I'm warmer on it. I'm definitely warmer. I'm still skeptical of the details of transhumanism, like how we might merge and what longevity would actually look like. I also very much think that quality of life is important, which is why I like the idea of you expanding into other areas. But the thing I'm most at peace with after this is that it's not just don't die for me, it's don't die for us. I think that flips the whole "isn't this whole thing selfish" question on its head and fits better with my moral framework. So I'm leaving this conversation warmer, for sure.
Bryan:
[1:02:22] Cool. David, what about you?
David:
[1:02:23] Yeah, so I once upon a time was kind of pursuing longevity in a very lucid sense, way back in the day before I found crypto. I was reading David Sinclair and some of those more old-school books, and also doing a little bit of self-experimentation: carnivore diet, keto, fasting, all that kind of stuff. And I found it was actually very solitary, because you can't really hang out with friends or go on dates if you're a carnivore; it's not totally compatible. That was actually the thing I found to have the most friction. So the notion that there's a system out there which makes it easy, not just for me but for my friends and my local family, and that this could become a social norm, is what excites me about this. I was always warm on it to begin with. Getting into crypto, I was like, oh, well, I've just abandoned all of my healthy habits, and now I sit in front of the computer for 16 hours a day. So now I'm trying to get those back. But I find it highly enjoyable that this is now being pushed to be a social norm, so we can all kind of revel in it. So: much, much more warm. Cool, Bryan, thanks for coming on today.