LIMITLESS - Dwarkesh Patel: The Scaling Era of AI is Here

Ryan:
[0:02] Hey guys, we have a special episode today. We have Dorkesh Patel on the podcast. Now, Dorkesh is probably one of my favorite podcasters. He is most in my podcast rotation list. And specifically, he's great on AI topics, which is the subject of today's episode. This is about the scaling era of AI. The scaling era is something new in AI, new for humanity. It's something we've never seen before. And Dwarakash is very much on the frontier of this movement. So we get through the full history and where we've come today. And I loved every minute of this conversation. Now, you can usually catch episodes like this on our Limitless podcast feed. That's where my co-host Josh and David and Ejaz all go deep into the AI rabbit hole. It's like Bankless, but for AI. So if Limitless is not in your podcast feed rotation yet, you got to go subscribe to it.
Ryan:
[0:53] Catch it on Spotify, on YouTube, or wherever you access these podcasts. Now, on crypto, I did ask Dwarkech when he's going to do a crypto podcast, because I would very much love to hear Vitalik Buterin on Dwarkech, but it still might be a while out. Apparently, the only podcast Dwarkech has ever done on crypto was with Sam Beckman Freed, and we all know what happened there. It's more hangover, I guess, from the crypto Criminals of 2022. But we've certainly come a long way since then, and so has AI. Please enjoy this episode with Dwarkesh Patel. Dwarkesh Patel, we are big fans. It's an honor to have you.
Dwarkesh:
[1:32] Thank you so much for having me on.
Ryan:
[1:33] Okay, so you have a book out. It's called The Scaling Era, An Oral History of AI from 2019 to 2025. These are some key dates here. This is really a story of how AI emerged. And it seemed to have exploded on people's radar over the past five years. And everyone in the world, it feels like, is trying to figure out what just happened and what is about to happen. And I feel like for this story, we should start at the beginning, as your book does. What is the scaling era of AI? And when abouts did it start? What were the key milestones?
Dwarkesh:
[2:06] So I think the undertow story about, everybody's of course been hearing more and more about AI. The under-told story is that the big contributor to these AI models getting better over time has been the fact that we are throwing exponentially more compute into trading frontier systems every year. So by some estimates, we spend 4x every single year over the last decade trading the frontier system than the one before it. And that just means that we're spending hundreds of thousands of times more compute than the systems of the early 2010s. Of course, we've also had algorithmic breakthroughs in the meantime. 2018, we had the transformer. Since then, obviously, many companies have made small improvements here and there. But the overwhelming fact that we're spending already hundreds of billions of dollars in building up the infrastructure, the data centers, the chips for these models. And this picture is only going to intensify if this exponential keeps going. Forex a year, over the next two years, is something that is on the minds of the CFOs of the big hyperscalers and the people planning the expenditures and training going forward, but is not as common in the conversation around where AI is headed.
Ryan:
[3:19] So what do you feel like people should know about this? Like what is the scaling era? There have been other eras maybe of AI or compute, but what's special about the scaling era?
Dwarkesh:
[3:29] People started noticing. Well, first of all, in 2012, there's this, Ilya Seskaver and others started using neural networks in order to categorize images. And just noticing that instead of doing something hand-coded, you can get a lot of juice out of just neural networks, black boxes. You just train them to identify what thing is like what. And then people started playing around these neural networks more, using them for different kinds of applications. And then the question became, we're noticing that these models get better if you throw more data at them and you throw more compute at them. How can we shove as much compute into these models as possible?
Dwarkesh:
[4:11] And the solution ended up being obviously internet text. So you need an architecture which is amenable to the trillions of tokens that have been written over the last few decades and put up on the internet. And we had this happy coincidence of the kinds of architectures that are amenable to this kind of training with the GPUs that were originally made for gaming. We've had decades of internet text being compiled and Ilias actually called it the fossil fuel of AI. It's like this reservoir that we can call upon to train these minds, which are like, they're fitting the mold of human thought because they're trading on trillions of tokens of human thought. And so then it's just been a question of making these models bigger, of using this data that we're getting from internet techs to further keep training them. And over the last year, as you know, the last six months, the new paradigm has been not only are we going to pre-train on all as internet text, we're going to see if we can have them solve math puzzles, coding puzzles,
Dwarkesh:
[5:11] And through this, give them reasoning capabilities. The kind of thing, by the way, I mean, I have some skepticism around AGI just around the corner, which we'll get into. But just the fact that we now have machines which can like, reason, like, you know, you can like, ask a question to a machine, and it'll go away for a long time, it'll like, think about it. And then like, it'll come back to you with a smart answer. And we just sort of take it for granted. But obviously, we also know that they're extremely good at coding, especially. I don't know if you actually got a chance to play around with Cloud Code or Cursor or something. But it's a wild experience to design, explain at a high level, I want an application to does X. 15 minutes later, there's like 10 files of code and the application is built.
Dwarkesh:
[5:53] That's where we stand. I have takes on how much this can continue. The other important dynamic, I'll add my monologue here, but the other important dynamic is that if we're going to be living in the scaling era, you can't continue exponentials forever, and certainly not exponentials that are 4x a year forever. And so right now, we're approaching a point where within by 2028, at most by 2030, we will literally run out of the energy we need to keep trading these frontier systems, the capacity at the leading edge nodes, which manufacture the chips that go into the dyes, which go into these GPUs, even the raw fraction of GDP that will have to use to train frontier systems. So we have a couple more years left of the scaling era. And the big question is, will we get to AGI before then?
Ryan:
[6:40] I mean, that's kind of a key insight of your book that like, we're in the middle of the scaling era. I guess we're like, you know, six years in or so. And we're not quite sure. It's like, like the protagonist in the middle of the story, We don't know exactly
Ryan:
[6:52] which way things are going to go. But I want you to maybe, Dworkesh, help folks get an intuition for why scaling in this way even works. Because I'll tell you, for me and for most people, our experience with these revolutionary AI models probably started in 2022 with ChatGPT3 and then ChatGPT4 and seeing all the progress, all these AI models. And it just seems really unintuitive that if you take a certain amount of compute and you take a certain amount of data, out pops AI, out pops intelligence. Could you help us like get an intuition for this magic? Like, how does the scaling law even work? Compute plus data equals intelligence? Is that really all it is?
Dwarkesh:
[7:37] To be honest, I've asked so many AI researchers this exact question on my podcast. And I could tell you some potential theories of why it might work. I don't think we understand. You know what? I'll just say that. I don't think we understand.
Ryan:
[7:52] We don't understand how this works. We know it works, but we don't understand how it works.
Dwarkesh:
[7:55] We have evidence from actually, of all things, primatology of what could be going on here or at least like why similar patterns in other parts of the world. So what I found really interesting, there's this research by this researcher Susanna Herculana Huzel, which shows that if you look at how the number of neurons in the brain of a rat, different kinds of rat species increases, as the weight of their brains increase from species to species, there's this very sublinear pattern. So if their brain size doubles, the neuron count will not double between different rat species. And there's other kinds of
Dwarkesh:
[8:39] Families of species for which this is true. The two interesting exceptions to this rule, where there is actually a linear increase in neuron count and brain size, is one, certain kinds of birds. So, you know, birds are actually very smart, given the size of their brains, and primates. So the theory for what happened with humans is that we unlocked an architecture that was very scalable. So the way people talk about transformers being more scalable in LSTMs, the thing that preceded them in 2018. We unlocked this architecture as it's very scalable. And then we were in an evolutionary niche millions of years ago, which rewarded marginal increases in intelligence. If you get slightly smarter, yes, the brain costs more energy, but you can save energy in terms of like not having to, you can cook, you can cook food so you don't have to spend much on digestion. You can find a game, you can find different ways of foraging.
Dwarkesh:
[9:31] Birds were not able to find this evolutionary niche which rewarded the incremental increases in intelligence because if your brain gets too heavy as a bird you're not going to fly um so it was this happy coincidence of these two things now why is it the case that the fact that our brains could get bigger resulted in resulted in us becoming as smart as we are we still don't know and there's all there's many different dissimilarities between ais and humans while our brains are quite big we don't need to be trained, you know, a human from the age they're zero to 18, is not seeing within an order of magnitude of the amount of information these LLMs are trained on. So LLMs are extremely data inefficient. They need a lot more data. But the pattern of scaling, I think we see in many different places.
Ryan:
[10:18] So is that a fair kind of analog? This analog has always made sense to me. It's just like transformers are like neurons. You know, AI models are sort of like the human brain, evolutionary pressures are like gradient descent, reward algorithms, and out pops human intelligence. We don't really understand that.
Ryan:
[10:37] We also don't understand AI intelligence, but it's basically the same principle at work.
Dwarkesh:
[10:42] I think it's a super fascinating, but also very thorny question because is gradient intelligence like evolution? Well, yes, in one sense. But also when we do gradient descent on these models, we start off with the weight and then we're you know it's like learning how does chemistry work how does coding work how does math work And that's actually more similar to lifetime learning, which is to say that like, by the time you're already born to the time you turn 18 or 25, the things you learn, and that's not evolution. Evolution designed the system or the brain by which you can do that learning, but the lifetime learning itself is not evolution. And so there's also this interesting question of, yeah, is training more like evolution? In which case, actually, we might be very far from AGI because the amount of compute that's been spent over the course of evolution to discover the human brain, you know, could be like 10 to the 40 flops. There's been estimates, you know, whatever. I'm sure it will bore you to discover, talk about how these estimates are derived, but just like how much versus is it like a single lifetime, like going from the age of zero to the age of 18, which is closer to, I think, 10 to the 24 flops, which is actually less than compute than we use to train frontier systems. All right, anyways, we'll get back to more relevant questions.
Ryan:
[11:58] Well, here's kind of a big picture question as well. It's like, I'm constantly fascinated with the metaphysical types of discussions that some AI researchers kind of take. Like a lot of AI researchers will talk in terms of when they describe what they're making, we're making God.
Ryan:
[12:14] Like, why do they say things like that? What is this talk of like making God? What does that mean? Is it just the idea that scaling laws don't cease? And if we can, you know, scale intelligence to AGI, then there's no reason we can't scale far beyond that and create some sort of a godlike entity. And essentially, that's what the quest is. We're making artificial super intelligence. We're making a god. We're making God.
Dwarkesh:
[12:38] I think people focus too much on when the I think this God discussion focuses too much on the hypothetical intelligence of a single copy of an AI. I do believe in the notion of a super intelligence, which is not just functionally, which is not just like, oh, it knows a lot of things, but is actually qualitatively different than human society. But the reason is not because I think it's so powerful that any one individual copy of AI will be as smart, but because of the collective advantages that AIs will have, which have nothing to do with their raw intelligence, but rather the fact that these models will be digital or they already are digital, but eventually they'll be as smart as humans at least. But unlike humans, because of our biological constraints, these models can be copied. If there's a model that has learned a lot about a specific domain, you can make infinite copies of it. And now you have an infinite copies of Jeff Dean or Ilya Satskova or Elon Musk or any skilled person you can think of. They can be merged. So the knowledge that each copy is learning can be amalgamated back into the model and then back to all the copies. They can be distilled they can run at superhuman speeds these collective advantages also they can communicate in latent space these collective they're
Ryan:
[13:58] Immortal i mean that you know as an example.
Dwarkesh:
[14:01] Yes exactly no i mean that's actually tell me if i'm rabbit-holing too much but like one one really interesting question will come about is how do we prosecute ais because the way we prosecute humans is that we will throw you in jail if you commit a crime but if there's trillions of copies or thousands of copies of an AI model, if a copy of an AI model, if an instance of an AI model does something bad, what do you do? Does the whole model have to get, and how do you even punish a model, right? Like, does it care about its weights being squandered? Yeah, there's all kinds of questions that arise because of the nature of what AIs are.
Josh:
[14:39] And also who is liable for that, right? Like, is it the toolmaker?
Josh:
[14:42] Is it the person using the tool? who is responsible for these things. There's one topic that I do want to come to here about scaling laws. And it's, at what time did we realize that scaling laws were going to work? Because there were a lot of theses early in the days, early 2000s about AI, how we were going to build better models. Eventually we got to the transformer. But at what point did researchers and engineers start to realize that, hey, this is the correct idea. We should start throwing lots of money and resources towards this versus other ideas that were just kind of theoretical research ideas, but never really took off. We kind of saw this with GPT 2 to 3, where there's this huge improvement. A lot of resources went into it. Was there a specific moment in time or a specific breakthrough that led to the start of these scaling laws?
Dwarkesh:
[15:20] I think it's been a slow process of more and more people appreciating this nature of the overwhelming role of compute in driving forward progress. In 2018, I believe, Dario Amadei wrote a memo that was secret while he was at OpenAI, now he's the CEO of Anthropic, but while he's OpenAI, he's subsequently revealed on my podcast that he wrote this memo where he
Dwarkesh:
[15:47] The title of the memo was called Big Blob of Compute. And it says basically what you expect it to say, which is that like, yes, there's ways you can mess up the process of training. You have the wrong kinds of data or initializations. But fundamentally, AGI is just a big blob of compute. And then we've gotten over the subsequent years, there was more empirical evidence. So a big update, I think it was 2021. But correct me. Somebody definitely will correct me in the comments. I'm wrong. There were these, there's been multiple papers of these scaling laws where you can show that the loss of the model on the objective of predicting the next token goes down very predictably, almost to like multiple decimal places of correctness based on how much more compute you throw in these models. And the compute itself is a function of the amount of data you use and how big the model is, how many parameters it has. And so that was an incredibly strong evidence back in the day, a couple years ago, because then you could say, well, okay, if it really has this incredibly low loss of predicting the next token in all human output, including scientific papers, including GitHub repositories,
Dwarkesh:
[16:59] Then doesn't it mean it has actually had to learn coding and science and all these skills in order to make those predictions, which actually ended up being true. And it was it was something people, you know, we take it for granted now, but it actually even as of a year or two ago, people were really even denying that premise. But some people a couple of years ago just like thought about it and like, yeah, actually, that would mean that it's learned the skills. And that's crazy that we just have this strong empirical pattern that tells us exactly what we need to do in order to learn these skills.
Josh:
[17:26] And it creates this weird perception, right, where like very early on and so to this day, it really is just a token predictor, right? Like we're just predicting the next word in the sentence. But somewhere along the lines, it actually creates this perception of intelligence.
Josh:
[17:38] So I guess we covered the early historical context. I kind of want to bring the listeners up to today, where we are currently, where the scaling loss have brought us in the year 2025. So can you kind of outline where we've gotten to from early days of GPTs to now we have GPT-4, we have Gemini Ultra, we have Club, which you mentioned earlier. We had the breakthrough of reasoning. So what can leading frontier models do today?
Dwarkesh:
[18:01] So there's what they can do. And then there's the question of what methods seem to be working. I guess we can start at what they seem to be able to do. They've shown to be remarkably useful at coding and not just at answering direct questions about how does this line of code work or something. But genuinely just autonomously working for 30 minutes or an hour, doing the task, it would take a front-end developer a whole day to do. And you can just ask them at a high level, do this kind of thing, and they can go ahead and do it. Obviously, if you've played around with it, you know that they're extremely useful assistants in terms of research, in terms of even therapists, whatever other use cases. On the question of what training methods seem to be working, we do seem to be getting evidence that pre-training is plateauing, which is to say that we had GPD 4.5, which was just following this old mold of make the model bigger, but it's fundamentally doing the same thing of next token prediction. And apparently it didn't pass muster. The OpenAI had to deprecate it because there's this dynamic where the bigger the model is, the more it costs not only to train, but also to serve, right? Because every time you serve a user, you're having to run the whole model, which is going, so, but that's going to be working is RL, which is this process of not just training them on existing tokens on the internet, but having the model itself try to answer math and coding problems. And finally, we got to the point where the model is smart enough to get it right some of the time, and so you can give it some reward, and then it can saturate these tough reasoning problems.
Josh:
[19:29] And then what was the breakthrough with reasoning for the people who aren't familiar? What made reasoning so special that we hadn't discovered before? And what did that kind of unlock for models that we use today?
Dwarkesh:
[19:39] I'm honestly not sure. I mean, we had GBD4 came out a little over two years ago, and then it was two years after GBD4 came out that O1 came out, which was the original reasoning breakthrough, I think last November. And subsequently, a couple months later, DeepSeq showed in their R1 paper. So DeepSeq open sourced their research and they explained exactly how their algorithm worked. And it wasn't that complicated. It was just like what you would expect, which is get some math problems, give for some initial problems, tell the model exactly what the reasoning trace looks like, how you solve it, just like write it out and then have the model like try to do it raw on the remaining problems. Now, I know it sounds incredibly arrogant to say, well, it wasn't that complicated. Why did it take you years? I think there's an interesting insight there of even things which you think will be simple in terms of high level description of how to solve the problem end up taking longer in terms of haggling out the remaining engineering hurdles than you might naively assume. And that should update us on how long it will take us to go through the remaining bottlenecks on the path to AGI. Maybe that will be tougher than people imagine, especially the people who think we're only two to three years away. But all this to say, yeah, I'm not sure why it took so long after GPT-4 to get a model trained on a similar level of capabilities that could then do reasoning.
Josh:
[20:56] And in terms of those abilities, the first answer you had to what can it do was coding. And I hear that a lot of the time when I talk to a lot of people,
Josh:
[21:04] is that coding seems to be a really strong suit and a really huge unlock to using these models. And I'm curious, why coding over general intelligence? Is it because it's placed in a more confined box of parameters. I know in the early days, we had the AlphaGo and we had the AIs playing chess, and they performed so well because they were kind of contained within this box of parameters that was a little less open-ended than general intelligence. Is that the reason why coding is kind of at the frontier right now of the ability of these models?
Dwarkesh:
[21:31] There's two different hypotheses. One is based around this idea called Moravac's Paradox.
Dwarkesh:
[21:38] And this was an idea, by the way, one super interesting figure, actually, I should have mentioned him earlier. One super interesting figure in the history of scaling is Hans Morovac, who I think in the 90s predicts that 2028 will be the year that we will get to AGI. And the way he predicts this, which is like, you know, we'll see what happens, but like, not that far off the money as far as I'm concerned. The way he predicts this is he just looks at the growth in computing power year over year, and then looks at how much compute he estimated the human brain to be to require. And then just like, okay, we'll have computers as powerful as the human brain by 2028. Which is like at once a deceptively simple argument, but also ended up being incredibly accurate and like worked, right? I might add a fact drive it was 2028, but it was within that, like within something you would consider a reasonable guess, given what we know now. Sorry, anyway, so the Moravex paradox is this idea that computers seemed in AI get better first at the skills which humans are the worst at. Or at least there's a huge variation in the human repertoire. So we think of coding as incredibly hard, right? We think this is like the top 1% of people will be excellent coders. We also think of reasoning as very hard, right? So if you like read Aristotle, he says the thing which makes humans special, which distinguishes us from animals is reasoning.
Dwarkesh:
[23:03] And these models aren't that useful yet at almost anything. The one thing they can do is reasoning. So how do we explain this pattern? and Moravac's answer is that evolution has spent billions of years optimizing us to do things we take for granted. Move around this room, right? I can pick up this can of Coke, move it around, drink from it. And that we can't even get robots to do at all yet. And in fact, it's so ingrained in us by evolution that there's no human or at least humans who don't have disabilities will all be able to do this. And so we just take it for granted that this is an easy thing to do. But in fact, it's an evidence of how long evolution has spent getting humans up to this point. Whereas reasoning, logic, all of these skills have only been optimized by evolution over the course of the last few million years. So there's been a thousand fold less evolutionary pressure towards coding than towards just basic locomotion.
Dwarkesh:
[24:04] And this has actually been very accurate in predicting what kinds of progress we see even before we got deep learning, right? Like in the 40s, when we got our first computers, the first thing that we could use them to do is long calculations for ballistic trajectories at the time for World War II. Humans suck at long calculations by hand. And anyways, so that's the explanation for coding, which seems hard for humans, is the first thing that went to AIs. Now, there's another theory, which is that this is actually totally wrong. It has nothing to do with this seeming paradox of how long evolution has optimized us for, and everything to do with the availability of data. So we have GitHub, this repository of all of human code, at least all open source code written in all these different languages, trillions and trillions of tokens. We don't have an analogous thing for robotics. We don't have this pre-training corpus. And that explains why code has made so much more progress than robotics.
Ryan:
[24:58] That's fascinating because if there's one thing that I could list that we'd want AI to be good at, probably coding software is number one on that list. Because if you have a Turing complete intelligence that can create Turing complete software, is there anything you can't create once you have that? Also, the idea of Morvac's paradox, I guess that sort of implies a certain complementarianism with humanity. So if robots can do things that robots can do really well and can't do the things humans can do well, well, perhaps there's a place for us in this world. And that's fantastic news. It also maybe implies that humans have kind of scratched the surface on reasoning potential. I mean, if we've only had a couple of million years of evolution and we haven't had the data set to actually get really good at reasoning, it seems like there'd be a massive amount of upside, unexplored territory, like so much more intelligence that nature could actually contain inside of reasoning. I mean, are these some of the implications of these ideas?
Dwarkesh:
[26:02] Yeah, I know. I mean, that's a great insight. Another really interesting insight is that the more variation there is in a skill in humans, the better and faster that AIs will get at it. Because coding is a kind of thing where 1% of humans are really good at it. The rest of us will, if we try to learn it, we'd be okay at it or something, right? And because evolution has spent so little time optimizing us, there's this room for variation where the optimization hasn't happened uniformly or it hasn't been valuable enough to sort of saturate the human gene pool for this skill. I think you made an earlier point that I thought was really interesting I wanted to address. Can you remind me of the first thing you said?
Ryan:
[26:42] Is it the complementarianism?
Dwarkesh:
[26:46] Yes. So you can take it as a positive future. You can take it as a negative future in the sense that, well, what is the complementary skills we're providing? We're good meat robots.
Ryan:
[26:57] Yeah, the low skill labor of the situation.
Dwarkesh:
[26:59] They can do all the thinking and planning. One dark future, one dark vision of the future is we'll get those meta glasses and the AI speaking into our ear and it'll tell us to go put this brick over there so that the next data center couldn't be built because the AI has got the plan for everything. It's got the better design for the ship and everything. You just need to move things around for it. And that's what human labor looks like until robotics is solved. So yeah, it depends on how you go. On the other hand, you'll get paid a lot because it's worth a lot to move those bricks. We're building AGI here. But yeah, it depends on how you come out on that question.
Ryan:
[27:32] Well, there seems to be something to that idea, going back to the idea of the massive amount of human variation. I mean, we have just in the past month or so, we have news of meta hiring AI researchers for $100 million signing bonuses, okay? What does the average software engineer make versus what does an AI researcher make at kind of the top of the market, right? Which has got to imply, obviously there's some things going on with demand and supply, but also that it does also seem to imply that there's massive variation in the quality of a software engineer. And if AIs can get to that quality, well, what does that unlock?
Ryan:
[28:07] Yeah. So, okay. Yeah. So I guess we have like coding down right now. Like another question though, is like, what can't AIs do today? And how would you characterize that? Like, what are the things they just don't do well?
Dwarkesh:
[28:20] So I've been interviewing people on my podcast who have very different timelines from rural GetAGI. I have had people on who think it's two years away and some who think it's 20 years away. And the experience of building AI tools for myself actually has been the most insight driving or maybe research I've done on the question of when AI is coming.
Ryan:
[28:41] More than the guest interviews.
Dwarkesh:
[28:43] Yeah, because you just, I have had, I've probably spent on the order of 100 hours trying to build these little tools, the kinds I'm sure you've also tried to build of like, rewrite auto-generated transcripts for me to make them sound, the rewritten the way a human would write them. Find clips for me to tweet out, write essays with me, co-write them passage by passage, these kinds of things. And what I found is that it's actually very hard to get human-like labor out of these models, even for tasks like these, which should be death center in the repertoire of these models, right? They're short horizon, they're language in, language out. They're not contingent on understanding some thing I said like a month ago. This is just like, this is the task. And I was thinking about why is it the case that I still haven't been able to automate these basic language tasks? Why do I still have a human work on these things?
Dwarkesh:
[29:31] And I think the key reason that you can't automate even these simple tasks is because the models currently lack the ability to do on the job training. So if you hire a human for the first six months, for the first three months, they're not going to be that useful, even if they're very smart, because they haven't built up the context, they haven't practiced the skills, they don't understand how the business works. What makes humans valuable is not that mainly the raw intellect obviously matters, but it's not mainly that. It's their ability to interrogate their own failures in this really dynamic, organic way, to pick up small efficiencies and improvements as they practice the task, and to build up this context as they work within a domain. And so sometimes people wonder, look, if you look at the revenue of OpenAI, the annual recurring revenue, it's on the order of $10 billion. Kohl's makes more money than that. McDonald's makes more money than that, right?
Dwarkesh:
[30:24] So why is it that if they've got AGI, they're, you know, like Fortune 500 isn't reorganizing their workflows to, you know, use OpenAI models at every layer of the stack? My answer, sometimes people say, well, it's because people are too stodgy. The management of these companies is like not moving fast enough on AI. That could be part of it. I think mostly it's not that I think mostly it genuinely is very hard to get human-like labor out of these models because you can't so you're stuck with the capabilities you get out of the model out of the box so they might be five out of ten at right rewriting the transcript for you but if you don't like how it turned out if you have feedback for it if you if you want to keep teaching it over time once the session ends the model like every every everything it knows about you has gone away you had to restart again it's like working with an amnesiac employee you had restart again.
Ryan:
[31:12] Every day is the first day of employment, basically.
Dwarkesh:
[31:15] Yeah, exactly. It's a groundhog day for them every day, or every couple of hours, in fact. And that makes it very hard for them to be that useful as an employee, right? They're not really an employee at that point. This, I think, not only is a key bottleneck to the value of these models, because human labor is worth a lot, right? Like $60 trillion in the world is paid to wages every year. If these model companies are making on the order of $10 billion a year, that's a big way to AGI. And what explains that gap? What are the bottlenecks? I think a big one is this continual learning thing. And I don't see an easy way that that just gets solved within these models. There's no like, with reasoning, you could say, oh, it's like, train it on math and code problems, and then I'll get the reasoning. And that worked. I don't think there's something super obvious there for how do you get this online learning, this on-the-job training working for these models.
Ryan:
[32:01] Okay, can we talk about that? Go a little bit deeper on that concept? So this is basically one of the concepts you wrote in your recent post. AI is not right around the corner. Even though you're an AI optimist, I would say, and overall an AI accelerationist, You were saying it's not right around the corner. You're saying the ability to replace human labor is a ways out. Not forever out, but I think you said somewhere around 2032, if you had to guess on when the estimate was. And the reason you gave is because AIs can't learn on the job, but it's not clear to me why they can't. Is it just because the context window isn't large enough? Is it just because they can't input all of the different data sets and data points that humans can? Is it because they don't have stateful memory the way a human employee? Because if it's these things, all of these do seem like solvable problems. And maybe that's what you're saying. They are solvable problems. They're just a little bit longer than some people think they are.
Dwarkesh:
[32:58] I think it's like in some deep sense a solvable problem because eventually we will build AGI. And to build AGI, we will have had to solve the problem. My point is that the obvious solutions you might imagine, for example, expanding the context window or having this like external memory using systems like rag these are basically techniques we already have to it's called retrieval augmented generate anyways these kinds of retrieval augmented generation i don't think these will suffice and just to put a finer point first of all like what is the problem the problem is exactly as you say that within the context window these models actually can learn on the job right so if you talk to it for long enough it will get much better at understanding your needs and what your exact problem is. If you're using it for research for your podcast, it will get a sense of like, oh, they're actually especially curious about these kinds of questions. Let me focus on that. It's actually very human-like in context, right? The speed at which it learns, the task and knowledge it picks out. The problem, of course, is the context length for even the best models only last a million or two million tokens. That's at most like an hour of conversation. Now, then you might say, okay, well, why can't we just solve that by expanding the context window, right? So context window has been expanding for the last few years. Why can't we just continue that?
Ryan:
[34:10] Yeah, like a billion token context window, something like this.
Dwarkesh:
[34:13] So 2018 is when the transformer came out, and the transformer has the attention mechanism. The attention mechanism is inherently quadratic with the nature, the length of the sequence, which is to say that if you go from 1 million tokens to 2 million tokens, it actually costs four times as much compute to process that two millionth token. It's not just two to as much compute. So it gets super linearly more expensive as you increase the context length. And for the last seven years, people have been trying to get around this inherent quadratic nature of attention. Of course, we don't know secretly what the labs are working on. But we have frontier companies like DeepSeq, which have open source their research, and we can just see how their algorithms work. And they found these constant time modifiers to attention, which is to say that it'll still be quadratic, but it will be like one half times quadratic. But the inherent super linearness has not gone away. And because of that, yeah, you might be able to increase it from 1 million tokens to 2 million tokens by finding another hack. Like make sure experts just want such things. Latent attention is another such technique. But, or KVCash, right, there's many other things that have been discovered. But people have not discovered okay, how do you get around the fact that if you went to a billion, it would be a billion squared is expensive in terms of compute to process that token. And so I don't think you'll just get it by increasing the length of the context window, basically.
Ryan:
[35:42] That's fascinating. Yeah, I didn't realize that. Okay, so the other reason in your post that AI is not right around the corner is because it can't do your taxes. And Dorcas, I feel your pain, man. Taxes are just like quite a pain in the ass. I think you were talking about this from the context of like computer vision, computer use, that kind of thing. Right. So, I mean, I've seen demos. I've seen some pretty interesting computer vision
Ryan:
[36:05] sort of demos that seem to be right around the corner. But like, what's the limiter on computer use for an AI?
Dwarkesh:
[36:12] There was an interesting blog post by this company called Mechanize where they were explaining why this is such a big problem. And I love the way they phrased it, which is that. Imagine if you had to train a model in 1980, large language model in 1980, and you could use all the compute you wanted in 1980 somehow, but you didn't have, you were only stuck with the data that was available in the 1980s, of course, before the internet became a widespread phenomenon. You couldn't train a modern LLM, even with all the compute in the world, because the data wasn't available. And we're in a similar position with respect to computer use, because there's not this corpus of collected videos of people using computers to do different things, to access different applications and do white collar work. Because of that, I think the big challenge has been accumulating this kind of data. off.
Ryan:
[37:06] And to be clear, when I was saying the use case of like, do my taxes, you're effectively talking about an AI having the ability to just like, you know, navigate the files around your computer, you know, log in to various websites to download your pay stubs or whatever, and then to go to like TurboTax or something and like input it all into some software and file it, right? Just on voice command or something like that. That's basically doing my taxes.
Dwarkesh:
[37:31] It should be capable of navigating UIs that it's less familiar with or that come about organically within the context of trying to solve a problem. So for example, you know, I might have business deductions. It sees on my bank statement that I've spent $1,000 on Amazon. It goes logs in my Amazon. It sees like, oh, he bought a camera. So I think that's probably a business expense for his podcast. He bought an Airbnb over a weekend in the cabins of whatever, in the woods of whatever. That probably wasn't a business expense. Although maybe maybe it's if it's a sort of like a gray it was willing to go in the gray area yeah yeah do
Ryan:
[38:08] The gray area stuff.
Dwarkesh:
[38:09] I was i was researching But anyway, so that including all of that, including emailing people for invoices and haggling with them, it would be like a sort of week long task to do my taxes, right? You'd have to, there's a lot of work involved. That's not just like, do this skill, this skill, this skill, but rather of having a sort of like plan of action and then breaking tasks apart, dealing with new information, new emails, new messages, consulting with me about questions, et cetera.
Ryan:
[38:38] Yeah. I mean, to be clear on this use case too, even though your post is titled like, you know, AI is not right around the corner. You still think this ability to file your taxes, that's like a 2028 thing, right? I mean, this is maybe not next year, but it's in a few years.
Dwarkesh:
[38:54] Right. Which is, I think that was sort of, people maybe read too much in the decital and then didn't read through the arguments.
Ryan:
[39:00] That never happens on the internet. Wow. First time.
Dwarkesh:
[39:04] No, I think like, I'm arguing against people who are like, you know, this will happen. AGI is like two years away. I do think the wider world, the markets, public perception, even people who are somewhat attending to AI, but aren't in this specific milieu that I'm talking to, are way underpricing AGI. One reason, one thing I think they're underestimating is not only will we have millions of extra laborers, millions of extra workers, potentially billions, within the course of the next decade, because then we will have a potentially, I think like likely we will have AGI within the next decade. But they'll have these advantages that human workers don't have, which is that, okay, a single model company, so suppose we solve continual learning, right? And we solve computer use. So as far as white collar work goes, that might fundamentally be solved. You can have AIs which can use not just they're not just like a text box where you put into you ask questions in a chatbot and you get some response out. It's not that useful to just have a very smart chatbot. You need it to be able to actually do real work and use real applications.
Dwarkesh:
[40:08] Suppose you have that solved because it acts like an employee. It's got continual learning. It's got computer use. But it has another advantage that humans don't have, which is that copies of this model are going being deployed all through the economy and it's doing on the job training. So copies are learning how to be an accountant, how to be a lawyer, how to be a coder. Except because it's an AI and it's digital, the model itself can amalgamate all this on-the-job training from all these copies. So what does that mean? Well, it means that even if there's no more software progress after that point, which is to say that no more algorithms are discovered, there's not a transformer plus plus that's discovered. Just from the fact that this model is learning every single skill in the economy, at least for white-collar work, you might just, based on that alone, have something that looks like an intelligence explosion. It would just be a broadly deployed intelligence explosion, but it would functionally become super intelligent just from having human-level capability of learning on the job.
Josh:
[41:03] Yeah, and it creates this mesh network of intelligence that's shared among everyone. That's a really fascinating thing. So we're going to get there. We're going to get to AGI. it's going to be incredibly smart. But what we've shared recently is just kind of this mixed bag where currently today, it's pretty good at some things, but also not that great at others. We're hiring humans to do jobs that we think AI should do, but it probably doesn't. So the question I have for you is, is AI really that smart? Or is it just good at kind of acing these particular benchmarks that we measure against? Apple, I mean, famously recently, they had their paper, The Illusion of Thinking, where it was kind of like, hey, AI is like pretty good up to a point, but at a certain point, it just falls apart. And the inference is like, maybe it's not, intelligence, maybe it's just good at guessing. So I guess the question is, is AI really that smart?
Dwarkesh:
[41:46] It depends on who I'm talking to. I think some people overhype its capabilities. I think some people are like, oh, it's already AGI, but it's like a little hobbled little AGI where we're like sort of giving it a concussion every couple of hours and like it forgets everything. We're like trapped in a chatbot context. But fundamentally, the thing inside is like a very smart human. I disagree with that perspective. So if that's your perspective, I say like, no, it's not that smart. Your perspective is just statistical associations. I say definitely smarter. Like it's like genuinely there's an intelligence there.
Dwarkesh:
[42:17] And the, so one thing you could say to the person who thinks that it's already AGI is this. Look, if a single human had as much stuff memorized as these models seem to have memorized, right? Which is to say that they have all of internet text, everything that human has written on the internet memorized. They would potentially be discovering all kinds of connections and discoveries. They'd notice that this thing which causes a migraine is associated with this kind of deficiency. So maybe if you take the supplement, your migraines will be cured. There'd be just this list of just like trivial connections that lead to big discoveries all through the place. It's not clear that there's been an unambiguous case of an AI just doing this by itself. So then why? So that's something potentially to explain, like if they're so intelligent, why aren't they able to use their disproportionate capabilities, their unique capabilities to come up with these discoveries? I don't think there's actually a good answer to that question yet, except for the fact that they genuinely aren't that creative. Of maybe they're like intelligent in the sense of knowing a lot of things, but they don't have this fluid intelligence that humans have. Anyway, so I give you a wish-washy answer because I think some people are underselling the intelligence. Some people are overselling it.
Ryan:
[43:25] I recall a tweet lately from Tyler Cowen. I think he's referring to maybe O3 and he basically said it feels like AGI. I don't know if it is AGI or not, but like to me, it feels like AGI. What do you account for this feeling of, like, intelligence then?
Dwarkesh:
[43:40] I think this is actually very interesting because it gets to a crux that Tyler and I have. So Tyler and I disagree on two big things. One, he thinks, you know, as he said in the blog post, 03 is AGI. I don't think it's AGI. I think it's it's orders of magnitude less valuable or, you know, like many orders of magnitude less valuable and less useful than an AGI. That's one thing we disagree on. The other thing we disagree on is he thinks that once we do get AGI, we'll only see 0.5% increase in the economic growth rate. This is like what the internet caused, right? Whereas I think we will see tens of percent increase in economic growth. Like it will just be the difference between the pre-industrial revolution rate of growth versus industrial revolution, that magnitude of change again. And I think these two disagreements are linked, because if you do believe we're already at AGI, and you look around the world and you say like, well, it fundamentally looks the same, you'd be forgiven for thinking like, oh, there's not that much value in getting to AGI. Whereas if you are like me and you think like, no, we'll get this broadly at the minimum, at a very minimum, we'll get a broadly deployed intelligence explosion once we get to AGI, then you're like, OK, I'm just expecting some sort of singulitarian crazy future with a robot factories and, you know, solar farms all across the desert and things like that.
Ryan:
[44:54] Yeah, I mean, it strikes me that your disagreement with Tyler is just based on the semantic definition of like what AGI actually is. And Tyler, it sounds like he has kind of a lower threshold for what AGI is, whereas you have a higher threshold. Is there like a accepted definition for AGI?
Dwarkesh:
[45:11] No. One thing that's useful for the purposes of discussions is to say automating all white collar work because robotics hasn't made as much progress as LLMs
Dwarkesh:
[45:21] have or computer use has. So if we just say anything a human can do, or maybe 90% of what humans can do at a desk, an AI can also do, that's potentially a useful definition for at least getting the cognitive elements relevant to defining AGI. But yeah, there's not one definition which suits all purposes.
Ryan:
[45:41] Do we know what's like going on inside of these models, right? So like, you know, Josh was talking earlier in the conversation about like this at the base being sort of token prediction, right? And I guess this starts to raise the question of like, what is intelligence in the first place? And these AI models, I mean, they seem like they're intelligent, but do they have a model of the world the way maybe a human might? Are they sort of babbling or like, is this real reasoning? And like, what is real reasoning? Do we just judge that based on the results or is there some way to like peek inside of its head?
Dwarkesh:
[46:18] I used to have similar questions a couple of years ago. And then, because honestly, the things they did at the time were like ambiguous. You could say, oh, it's close enough to something else in this trading dataset. That is just basically copy pasting. It didn't come up with a solution by itself. But we've gotten to the point where I can come up with a pretty complicated math problem and it will solve it. It can be a math problem, like not like, you know, undergrad or high school math problem. Like the problem we get, the problems the smartest math professors come up with in order to test International Math Olympiad, you know, the kids who spend all their life preparing for this, the geniuses who spend all their life, all their young adulthood preparing to take these really gnarly math puzzle challenges. And the model will get these kinds of questions right. They require all this abstract creative thinking, this reasoning for hours. The model will get the right. Okay, so if that's not reasoning, then why is reasoning valuable again? Like what exactly was this reasoning supposed to be? So I think they genuinely are reasoning. I mean, I think there's other capabilities they lack, which are actually more, in some sense, they seem to us to be more trivial, but actually much harder to learn. But the reasoning itself, I think, is there.
Josh:
[47:30] And the answer to the intelligence question is also kind of clouded, right? Because we still really don't understand what's going on in an LLM. Dario from Anthropic, he recently posted the paper about interpretation. And can you explain why we don't even really understand what's going on in these LLMs, even though we're able to make them and yield the results from them?
Dwarkesh:
[47:48] Hmm.
Josh:
[47:49] Because it very much still is kind of like a black box. We write some code, we put some inputs in, and we get something out, but we're not sure what happens in the middle, why it's creating this output.
Dwarkesh:
[47:59] I mean, it's exactly what you're saying. It's that in other systems we engineer in the world, we have to build it up bottom up. So if you build a bridge, you have to understand how every single beam is contributing to the structure. And we have equations for why the thing will stay standing. There's no such thing for AI. We didn't build it, more so we grew it. It's like watering a plant. And a couple thousand years ago, they were doing agriculture, but they didn't know why. Why do plants grow? How do they collect energy from sunlight? All these things. And I think we're in a substantially similar position with respect to intelligence, with respect to consciousness, with respect to all these other interesting questions about how minds work, which is in some sense really cool because there's this huge intellectual horizon that's become not only available, but accessible to investigation.
Dwarkesh:
[48:55] In another sense, that's scary because we know that minds can suffer. We know that minds have moral worth and we're creating minds and we have no understanding of what's happening in these minds. Is a process of gradient descent a painful process? We don't know, but we're doing a lot of it. So hopefully we'll learn more. But yeah, I think we're in a similar position to some farmer in Uruk in 3500 B.C., Wow.
Ryan:
[49:21] And I mean, the potential, the idea that minds can suffer, minds have some moral worth, and also minds have some free will. They have some sort of autonomy, or maybe at least a desire to have autonomy. I mean, this brings us to kind of this sticky subject of alignment and AI safety and how we go about controlling the intelligence that we're creating, if even that's what we should be doing, controlling it. And we'll get to that in a minute. But I want to start with maybe the headlines here. a little bit. So headline just this morning, latest OpenAI models sabotaged a shutdown mechanism despite commands to the contrary. OpenAI's O1 model attempted to copy itself to external servers after being threatened with shutdown. They denied the action when discovered. I've read a number of papers about this. Of course, mainstream media has these types of headlines almost on a weekly basis now, and it's starting to get to daily.
Ryan:
[50:16] But there does seem to be some evidence that AIs lie to us, if that's even the right term, in order to pursue goals, goals like self-preservation, goals like replication, even deep-seated values that we might train into them, sort of a constitution type of value. They seek to preserve these values, which maybe that's a good thing, or maybe it's not a good thing if we don't actually want them to interpret the values in a certain way. Some of these headlines that we're seeing now, To you, with your kind of corpus of knowledge and all of the interviews and discovery you've done on your side, is this like media sensationalism or is this like alarming? And if it's alarming, how concerned should we be about this?
Dwarkesh:
[50:59] I think on net, it's quite alarming. I do think that some of these results have
Dwarkesh:
[51:04] been sort of cherry picked. Or if you look into the code, what's happened is basically the researchers have said, hey, pretend to be a bad person. Wow, AI is being a bad person. Isn't that crazy? But the system prompt is just like hey do this bad thing right now i personally but i have also seen other results which are not of this quality i mean the the clearest example so backing up what is the reason to think this will be a bigger problem in the future than it is now because we all interact with these systems and they're actually like quite moral or aligned right like you can talk to a chatbot and you like ask it to how should you deal with some crisis where there's a correct answer.
Dwarkesh:
[51:45] It will tell you not to be violent. It'll give you reasonable advice. It seems to have good values. So it's worth noticing this and being happy about it. The concern is that we're moving from a regime where we've trained them on human language, which implicitly has human morals and the way normal people think about values implicit in it, plus this RLHF process we did to a regime where we're mostly spending compute on just having them answer problems yes or no or correct or not rather. Just like, And pass all the unit tests, get the right answer on this math problem. And this has no guardrails intrinsically in terms of what is allowed to do, what is the proper moral way to do something.
Dwarkesh:
[52:34] I think that can be a load of term, but here's a more concrete example. One problem we're running into with these coding agents more and more, and this has nothing to do with these abstract concerns about alignment, but more so just like how do we get economic value out of these models, is that Claude or Gemini will, instead of writing code such that it passes the unit tests, it will often just delete the unit tests so that the code just passes by default. Now, why would it do that? Well, it's learned in the process. It was trained on the goal during training of you must pass all unit tests. And probably within some environment in which it was trained, it was able to just get away. Like, it wasn't designed well enough. And so it found this like little hole where it could just like delete the file that had the unit test or rewrite them so that it always said, you know, equals true, then pass.
Dwarkesh:
[53:22] And right now we can discover these even without even though we can discover these, you know, it's still past. There's still been enough hacks like this such that the model is like becoming more and more hacky like that in the future we're going to be training models in ways that we is beyond our ability to even understand certainly beyond everybody's ability to understand maybe a few people who might be able to see just the way that right now if you come up with a new math proof for some open problem in mathematics there will be only be a few people in the world who will be able to evaluate that math proof we'll be in a similar position with respect to all the things that these models are being trained on at the frontier, especially math and code because humans were big dum-dums with respect to this reasoning stuff. And so there's a sort of like first principles reason to expect that this new modality of training will be less amenable to the kinds of supervision that was grounded within the pre-training corpus.
Ryan:
[54:12] I don't know that everyone has kind of an intuition or an idea why it doesn't work to just say like, so if we don't want our AI models to lie to us, why can't we just tell them not to lie? Why can't we just put that as part of their core constitution? If we don't want our AI models to be sycophants, why can't we just say, hey, if I tell you I want the truth not to flatter me, just give me the straight up truth. Why is this even difficult to do?
Dwarkesh:
[54:40] Well, fundamentally, it comes down to how we train them. And we don't know how to train them in a way that does not reward lying or sycophancy. In fact, the problem is OpenAI, they explained why their recent model of theirs, which they had to take down, was just sycophantic. And the reason was just that they rolled out, they did an A-B test. And the version, the test that was more sycophantic was just preferred by users more.
Ryan:
[55:04] Sometimes you prefer the lie.
Dwarkesh:
[55:06] Yeah, so if that's what's preferred in training, or for example, in the context of lying, if we've just built RL environments in which we're training these models, where they're going to be more successful if they lie, right? So if they delete the unit tests and then tell you, I passed this program and all the unit tests have succeeded, it's like lying to you, basically. And if that's what is rewarded in the process of gradient descent, then it's not surprising that the model you interact with will just have this drive to lie if it gets it closer to its goal. And I would just expect this to keep happening unless we can solve this fundamental problem that comes about in training.
Josh:
[55:49] So you mentioned how like ChatGPT had a version that was sycophantic, and that's because users actually wanted that. Who is in control? Who decides the actual alignment of these models? Because users are saying one thing and then they deploy it. And then it turns out that's not actually what people want. How do you kind of form consensus around this alignment or these alignment principles?
Dwarkesh:
[56:10] Right now, obviously, it's the labs who decide this, right? And the safety teams of the labs. And I guess the question you could ask is then who should decide these? Because this will be assuming
Josh:
[56:19] The trajectory yeah so we keep going to get more powerful
Dwarkesh:
[56:22] Because this will be the key modality that all of us use to get not only get worked on but even like i think at some point a lot of people's best friends will be ais at least functionally in the sense of who do they spend the most amount of time talking to it might already be ais this will be the key layer in your business that you're using to get a work done so this this process of training which shapes their personality who gets to control it I mean it will be the laughs functionally But maybe you mean like who should control it, right? I honestly don't know. I mean, I don't know if there's a better alternative to the labs.
Josh:
[56:59] Yeah, I would assume like there's some sort of social consensus, right? Similar to how we have in America, the Constitution. There's like this general form of consensus that gets formed around how we should treat these models as they become as powerful as we think they probably will be.
Dwarkesh:
[57:10] Honestly, I don't have, I don't know if anybody has a good answer about how you do this process. I think we lucked out, we just like really lucked out with the Constitution. It also wasn't a democratic process which resulted in the Constitution, even though it instituted it Republican form of government. It was just delegates from each state. They haggled it out over the course of a few months. Maybe that's what happens with AI. But is there some process which feels both fair and which will result in actually a good constitution for these AIs? It's not obvious to me that, I mean, nothing comes up to the top of my head, like, oh, do ranked choice voting or something.
Josh:
[57:44] Yeah. So I was going to ask, is there any, I mean, having spoken to everyone who you've spoken to, is there any alignment path which looks most promising which feels the most comforting and exciting to you
Dwarkesh:
[57:52] I i think alignment in the sense of you know and eventually we'll have these super intelligent systems what do we do about that i think the the approach that i think is most promising is less about finding some holy grail some you know giga brain solution some equation which solves the whole puzzle. And more like, one, having this Swiss cheese approach where, look, we kind of have gotten really good at jailbreaks.
Dwarkesh:
[58:24] I'm sure you've heard a lot about jailbreaks over the last few years. It's actually much harder to jailbreak these models because people try to whack at these things in different ways. Model developers just patched these obvious ways to do jailbreaks. The model also got smarter, so it's better able to understand when somebody's trying to jailbreak into it.
Dwarkesh:
[58:42] That, I think, is one approach. Another is, I think, competition. I think the scary version of the future is where you have this dynamic where a single model and its copies are controlling the entire economy. When politicians want to understand what policies to pass, they're only talking to copies of a single model. If there's multiple different AI companies who are at the frontier, who have competing services, and whose models can monitor each other, right? So Claude may care about its own copies being successful in the world, and it might be able to willing to lie on their behalf, even if you ask one copy to supervise another, I think you get some advantage from a copy of OpenAI's model monitoring a copy of DeepSeq's model, which actually brings us back to the Constitution, right? One of the most brilliant things in the Constitution is the system of checks and balances. So some combination of the Swiss cheese approach to model development and training and alignment, where you're careful, if you notice this kind of reward hacking, you do your best to solve it. You try to keep as much of the models thinking in human language rather than letting it think in AI thought in this lane space thinking. And the other part of it is just having normal market competition between these companies so that you can use them to check each other.
Dwarkesh:
[59:53] And no one company or no one AI is dominating the economy or advisory roles for governments.
Ryan:
[1:00:04] I really like this like bundle of ideas that you sort of put together in that, because like, I think a lot of the, you know, AI safety conversation is always couched in terms of control. Like we have to control the thing that is the way. And I always get a little worried when I hear like terms like control. And it reminds me of a blog post I think you put out, which I'm hopeful you continue to write on. I think you said it was going to be like one of a series, which is this idea of like classical liberal AGI. And we were talking about themes like balance of power. Let's have Claude, you know, check in with ChatGPT and monitor it.
Ryan:
[1:00:39] When you have themes like transparency as well, that feels a bit more classically liberal coded than maybe some of the other approaches that I've heard. And you wrote this in the post, which I thought was kind of, it just sparked my interest because I'm not sure where you're going to go next with this. But you said the most likely way this happens, that is AIs have a stake in humanity's future, is if it's in the AI's best interest to operate within our existing laws and norms. You have this whole idea that like, hey, the way to get true AI alignment is to make it easy, make it the path of least resistance for AI to basically partner with humans. It's almost this idea if like the aliens landed or something, we would create treaties with the aliens, right? We would want them to adopt our norms. We would want to initiate trade with them. Our first response shouldn't be, let's try to dominate and control them. Maybe it should be let's try to work with them. Let's try to collaborate. Let's try to open up trade. What's your idea here? And like, are you planning to write further posts about this?
Dwarkesh:
[1:01:44] Yeah, I want to. It's just such a hard topic to think about that, you know, something always comes up. But the fundamental point I was making is, look, in the long run, if AIs are, you know, human labor is going to be obsolete because of these inherent advantages that digital minds will have, and robotics will eventually be solved. So our only leverage on the future will no longer come from our labor. It will come from our legal and economic control over the society that AIs will be participating in, right? So AIs might make the economy explode in the sense of grow a lot. And for humans to benefit from that, it would have to be the case that AI still respects your equity in the S&P 500 companies that you bought, right? Or for the AIs to follow your laws, which say that you can't do violence onto humans and you had to respect humans' properties.
Dwarkesh:
[1:02:43] It would have to be the case that AIs are actually bought into our system of government, into our laws and norms. And for that to happen, the way that likely happens is if it's just like the default path for the AIs as they're getting smarter and they're developing their own systems of enforcement and laws to just participate in human laws and governments. And the metaphor I used here is right now you pay half your paycheck in taxes, probably half of your taxes in some way just go to senior citizens, right? Medicare and Social Security and other programs like this. And it's not because you're in some deep moral sense aligned with senior citizens. It's not like you're spending all your time thinking about like, my main priority in life is to earn money for senior citizens. It's just that you're not going to overthrow the government to get out of paying this tax. And so...
Ryan:
[1:03:44] Also, I happen to like my grandmother. She's fantastic. You know, it's those reasons too.
Dwarkesh:
[1:03:48] But yeah, so that's why you give money to your grandmother directly. But like, why are you giving money to some retiree in Illinois? Yes. Yes. Yeah, it's like, okay, you could say it's like, sometimes people, some people are swindled to that post by saying like, oh no, I like deeply care about the system of social welfare. I'm just like, okay, maybe you do, but I don't think like the average person is giving away hundreds of thousands of dollars a year, tens of thousands of dollars a year to like some random stranger they don't know, who's like, who's not like especially in need of charity, right? Like most senior citizens have some savings. It's just, it's just because this is a law and you like, you give it to them or you'll get, go to jail. But fundamentally, if the tax was like 99%, you would, like, you would, maybe you wouldn't overthrow the government, you'd just, like, leave the jurisdiction, you'd, like, emigrate somewhere. And AIs can potentially also do this, right? There's more than one country, there's countries which will be more AI forward, and it would be a bad situation to end up in, where...
Dwarkesh:
[1:04:42] All this explosion in AI technology is happening in the country, which is doing the least amount to protect humans' rights and to provide some sort of monetary compensation to humans once their labor is no longer valuable. So our labor could be worth nothing, but because of how much richer the world is after AI, you have these billions of extra researchers, workers, etc. It could still be trivial to have individual humans have the equivalent of millions, even billions of dollars worth of wealth. In fact, it might literally be invaluable amounts of wealth in the following sense. So here's an interesting thought experiment.
Dwarkesh:
[1:05:24] Imagine you have this choice. You can go back to the year 1500, but you know, of course, the year 1500 kind of sucks. You have no antibiotics, no TV, no running water. But here's how I'll make it up to you. I can give you any amount of money, but you can only use that amount of money in the year 1500. And you'll go back with these sacks of gold. How much money would I have to give you that you can use in the year 1500 to make you go back? And plausibly the answer is there's no amount of money you would rather in the year 1500 than just have a normal life today. And we could be in a similar position with regards to the future, where there's all these different, I mean, you'll have much better health, like physical health, mental health, longevity, that's just like the thing we can contemplate now. But people in 1500 couldn't contemplate the kinds of quality of life advances we would have 500 years later, right? So anyways, this is all to say that this could be our future for humans, even if our labor isn't worth anything. But it does require us to have AIs that choose to participate or in some way incentivize to participate in some system which we have leverage over.
Ryan:
[1:06:34] Yeah, I find this just such a fast, I'm hopeful we do some more exploration around this because I think what you're calling for is basically like, what you would be saying is invite them into our property rights system. I mean, there are some that are calling in order to control AI, they have great power, but they don't necessarily have capability. So we shouldn't allow AI to hold money or to have property. I think you would say, no, actually, the path forward to alignment is allow AI to have some vested interest in our property rights system and some stake in our governance, potentially, right? The ability to vote, almost like a constitution for AIs. I'm not sure how this would work, but it's a fascinating thought experiment.
Dwarkesh:
[1:07:14] And then I will say one thing So I think this could end disastrously If we give them a stake in a property system But we let them play us off each other So if you think about There's many cases in history where the British, initially, the East India Trading Company was genuinely a trading company that operated in India. And it was able to play off, you know, it was like doing trade with different, different, you know, provinces in India. There was no single powerful leader.
Dwarkesh:
[1:07:46] And by playing, you know, by doing trade with one of them, leveraging one of their armies, et cetera, they were able to conquer the continent. Similar thing could happen to human society. The way to avoid such an outcome at a high level is involves us playing the AIs off each other instead right so this is why I think competition is such a big part of the puzzle having different AIs monitor each other having this bargaining position where there's not just one company that's at the frontier another example here is if you think about how the Spanish conquered all these new world empires it's actually so crazy that a couple hundred conquistadors would show up and conquer a nation of 10 million people, the Incas, Aztecs? And why were they able to do this? Well, one of the reasons is the Spanish were able to learn from each of their previous expeditions, whereas the Native Americans were not. So Cortes learned from how Cuba was subjugated when he conquered the Aztecs.
Dwarkesh:
[1:08:44] Pizarro was able to learn from how Cortes conquered the Aztecs when he conquered the Incas. The Incas didn't even know the Aztecs existed. So eventually there was this uprising against Pizarro and Manco Inca led an insurgency where they actually did figure out how to fight horses, how to fight people, you know, people in armor on horses, don't fight them on flat terrain, throw rocks down at them, et cetera. But by this point, it was too late. If they knew this going into the battle, the initial battle, they might've been able to fend off. Because, you know, just as the conquistadors only arrived at a few hundred soldiers, we're going to the age of AI with a tremendous amount of leverage. We literally control all the stuff, right?
Dwarkesh:
[1:09:23] But we just need to lock in our advantage. We just need to be in a position where, you know, they're not going to be able to play us off each other. We're going to be able to learn what their weaknesses are. And this is why I think one good idea, for example, would be that, look, DeepSeek is a Chinese company. It would be good if, suppose DeepSeek did something naughty, like the kinds of experiments we're talking about right now, where it hacks the unit tests or so forth. I mean, eventually these things will really matter. Like Xi Jinping is listening to AIs because they're so smart and they're so capable. If China notices that their AIs are doing something bad, or they notice a failed coup attempt, for example, it's very important that they tell us, And we tell them if we notice something like that on our end, it would be like the Aztecs and Incas talking to each other about like, you know, this is what happens. This is how you fight. This is how you fight horses. This is the kind of tactics and deals they try to make with you. Don't trust them, etc. It would require cooperation on humans part to have this sort of red telephone. So during the Cold War, there's this red telephone between America and the Soviet Union after the human missile crisis, where just to make sure there's no misunderstandings, they're like, okay, if we think something's going on, let's just hop on the call. I think we should have a similar policy with respect to these kinds of initial warning signs we'll get from AI so that we can learn from each other.
Josh:
[1:10:41] Awesome. Okay. So now that we've described this artificial gender intelligence, I want to talk about how we actually get there. How do we build it? And a lot of this we've been discussing kind of takes place in this world of bits. But you have this great chapter in the book called Inputs, which discusses the physical world around us, where you can't just write a few strings of code. You actually have to go and move some dirt and you have to ship servers places and you need to power it and you need physical energy from meat space. And you kind of describe these limiting factors where we have compute, we have energy, we have data. What I'm curious to know is, do we have enough of this now? Or is there a clear path to get there in order to build the AGI? Basically, what needs to happen in order for us to get to this place that you're describing?
Dwarkesh:
[1:11:20] We only have a couple more years left of this scaling, this exponential scaling before we're hitting these inherent roadblocks of energy and our ability to manufacture ships, which means that if scaling is going to work to deliver us AGI, it has to work by 2028. Right. Otherwise, we're just left with mostly algorithmic progress. But even within algorithmic progress, the sort of low hanging fruit in this deep learning paradigm is getting more and more plucked. So then the odds per year of getting to AGI diminish a lot. Right. So there is this weird, funny, funny thing happening right now where we either discover AGI within the next few years or there's this or the yearly probability craters. And then we might be looking at decades of further research that's required in terms of algorithms to get to AGI. I am of the opinion that some algorithmic progress is necessarily needed because there's no easy way to solve continual learning just by making the context length bigger or just by doing RL. That being said, I just think the progress so far has been so remarkable that, you know, 2032 is very close. My time must be slightly longer than that, but I think it's extremely plausible that we're going to see a broadly deployed intelligence explosion within the next 10 years.
Josh:
[1:12:35] And one of these key inputs is energy, right? And I feel like one of the things that I've been hearing a lot, I actually heard it mentioned on your podcast, is the United States relative to China on this particular place of energy, where China is adding, what is the stat?
Josh:
[1:12:48] I think it's one United States worth of energy every 18 months. And their plan is to go from three to eight terawatts of power versus the United States, one to two terawatts of power by 2030. So given that context of that one resource alone, is China better equipped to get to that place versus the United States?
Dwarkesh:
[1:13:06] So right now, America has a big advantage in terms of chips. You know, China doesn't have the ability to manufacture leading edge semiconductors. And these are the chips that go into the these you need these dyes in order to have the kinds of AI chips to you need millions of them in order to have a frontier AI system.
Dwarkesh:
[1:13:30] Eventually, China will catch up in this arena as well, right? Their technology will catch up. So the export controls will keep us ahead in this category for five, 10 years. But if we're looking in the world where timelines are long, which is to say that AGI isn't just right around the corner, they will have this overwhelming energy advantage and they'll have caught up in chips. So then the question is, like, why wouldn't they win at that point? So the longer you think we're away from AGI, the more it looks like China's game to lose. I mean, if you look in the nitty gritty, I think it's more about having centralized sources of power because you need to train the AI in one place. This might be changing with RL, but it's very important to have a single site which has a gigawatt, two gigawatts more power. And if we ramped up natural gas, you know, you can get generators and natural gas and maybe it's possible to do a last ditch effort, even if our overall energy as a country is lower than China's. The question is whether we will have the political will to do that. I think people are sort of underestimating how much of a backlash there will be against AI. The government needs to make proactive efforts in order to make sure that America stays at the leading edge in AI from zoning of data centers to how copyright is handled for data for these models. And if we mess up, if it becomes too hard to develop in America, I think it would genuinely be China's game to lose.
Ryan:
[1:14:56] And do you think this narrative is right, that whoever wins the AGI war, kind of like whoever gets to AGI first, just basically wins the 21st century? Is it that simple?
Dwarkesh:
[1:15:05] I don't think it's just a matter of training the frontier system. I think people underestimate how important it is to have the compute available to run these systems. Because eventually once you get to AGI, just think of it like a person. And what matters then is how many people you have. I mean, it actually is the main thing that matters today as well, right? Like, why could China take over Taiwan if it wanted to? And if it didn't have America, you know, America, it didn't think America would intervene. But because Taiwan has 20 million people or on the order of 20 million people, and China has 1.4 billion people. You could have a future where if China has way more compute than us, but equivalent levels of AI, it would be like the relationship between China and Taiwan. Their population is functionally so much higher. This just means more research, more factories, more development, more ideas. So this inference capacity, this capacity to deploy AIs will actually probably be the thing that determines who wins the 21st century.
Ryan:
[1:16:06] So this is like the scaling law applied to, I guess, nation state geopolitics, right? And it's back to compute plus data wins. If compute plus data wins superintelligence, compute plus data also wins geopolitics.
Dwarkesh:
[1:16:23] Yep. And the thing to be worried about is that China, speaking of compute plus data, China also has a lot more data on the real world, right? If you've got entire megalopolises filled with factories where you're already deploying robots and different production systems which use automation, you have in-house this process knowledge you're building up which the AIs can then feed on and accelerate. That equivalent level of data we don't have in America. So, you know, this could be a period in which those technological advantages or those advantages in the physical world manufacturing could rapidly compound for China. And also, I mean, their big advantage as a civilization and society, at least in recent decades, has been that they can do big industrial projects fast and efficiently. That's not the first thing you think of when you think of America. And AGI is a huge industrial, high CapEx Manhattan project, right? And this is the kind of thing that China excels at and we don't. So, you know, I think it's like a much tougher race than people anticipate.
Ryan:
[1:17:30] So what's all this going to do for the world? So once we get to the point of AGI, we've talked about GDP and your estimate is more on less on the Tyler Cowen kind of, you know, half a percent per year and more on, I guess,
Ryan:
[1:17:41] the Satya Nadella from Microsoft, what does he say? Seven to eight percent. Once we get to AGI, what about unemployment? Does this cause mass, I guess, you know, like job loss across the economy or do people adopt? Like, what's your take here? And like, what? Yeah. What are you seeing?
Dwarkesh:
[1:17:58] Yeah, I mean, definitely will cause job loss. I think people who don't, I think a lot of AI leaders try to gloss over that or something. And like, I mean, what do you mean? Like, what does AGI mean if it doesn't cause job loss, right? If it does what a human does and does it cheaper and better and faster, like why would that not cause job loss?
Dwarkesh:
[1:18:14] The positive vision here is just that it creates so much wealth, so much abundance, that we can still give people a much better standard of living than even the wealthiest people today, even if they themselves don't have a job. The future I worry about is one where instead of creating some sort of UBI that will get exponentially bigger as society gets wealthier, we try to create these sorts of guild-like protection rackets where if the coders got unemployed,
Dwarkesh:
[1:18:49] Then we're going to make these bullshit jobs just for the coders and this is how we give them a redistribution. Or we try to expand Medicaid for AI, but it's not allowed to procure all of these advanced medicines and cures that AI is coming up with, rather than just giving people, you know, maybe lump sums of money or something. So I am worried about the future where instead of sharing this abundance and just embracing it, we just have these protection rackets that maybe let a few people have access to the abundance of AI. So maybe like if you sue AI, if you sue the right company at the right time, you'll get a trillion dollars, but everybody else is stuck with nothing. I want to avoid that future and just be honest about what's coming and make programs that are simple and acknowledge how fast things will change and are forward-looking rather than trying to turn what already exists into something amenable to the displacement that AI will create.
Ryan:
[1:19:51] That argument reminds me of, I don't know if you read the essay recently, it came out called The Intelligence Curse. Did you read that? It was basically the idea of applying kind of the nation state resource curse to the idea of intelligence. So like nation states that are very high in natural resources, they just have a propensity. I mean, an example is kind of like a Middle Eastern state with lots of oil reserves, right? They have this rich source of a commodity type of abundance. They need their people less. And so they don't invest in citizens' rights. They don't invest in social programs. The authors of the intelligence curse were saying that there's a similar type of curse that could happen once intelligence gets very cheap, which is basically like the nation state doesn't need humans anymore. And those at the top, the rich, wealthy corporations, they don't need workers anymore. So we get kind of locked in this almost feudal state where, you know, everyone has the property that their grandparents had and there's no meritocracy and sort of the nation states don't reinvest in citizens. Almost some similar ideas to your idea that like, you know, that the robots might want us just, or sorry, the AIs might just want us for our meat hands because they don't have the robotics technology on a temporary basis.
Ryan:
[1:21:05] What do you think of this type of like future? Is this possible?
Dwarkesh:
[1:21:08] I agree that that is like definitely more of a concern given that humans will not be directly involved in the economic output that will be generated in the CIA civilization. The hopeful story you can tell is that a lot of these Middle Eastern resource, you know, Dutch disease is another term that's used, countries, the problem is that they're not democracies, so that this wealth can just be, the system of government just lets whoever's in power extract that wealth for themselves. Whereas there are countries like Norway, for example, which also have abundant resources, who are able to use those resources to have further social welfare programs, to build sovereign wealth funds for their citizens, to invest in their future,
Dwarkesh:
[1:21:48] We are going into, at least some countries, America included, will go into the age of AI as a democracy. And so we, of course, will lose our economic leverage, but the average person still has their political leverage. Now, over the long run, yeah, if we didn't do anything for a while, I'm guessing the political system would also change. So then the key is to lock in or turn our current, well, it's not just political leverage, right? We also have property rights. So like we own a lot of stuff that AI wants, factories, sources of data, etc. It's to use the combination of political and economic leverage to lock in benefits for us for the long term, but beyond the lifespan of our economic usefulness. And I'm more optimistic for us than I am for these Middle Eastern countries that started off poor and also with no Democratic representation.
Ryan:
[1:22:40] What do you think the future of like ChatGPT is going to be? Like if we just extrapolate maybe one version update for to ChatGPT 5, do you think the trend line of the scaling law will essentially hold for ChatGPT 5? I mean, another way to ask that question is, do you feel like it'll feel like the difference between maybe a BlackBerry and an iPhone? Or will it feel like more like the difference between, say, the iPhone 10 and the iPhone 11, which is just like incremental progress, not a big breakthrough, not a order of magnitude change? Yeah.
Dwarkesh:
[1:23:12] I think it'll be somewhere in between, but I don't think it'll feel like a humongous breakthrough, even though I think it's in a remarkable pace of change. Because the nature of scaling is that sometimes people talk about it as an exponential process. Exponential usually refers to like it going like this. So having like a sort of J curve aspect to it, where the incremental input is leading to super linear amounts of output, in this case, intelligence and value, where it's actually more like a sideways J. The exponential means the exponential and the scaling laws is that you need exponentially more inputs to get marginal increases in usefulness or loss or intelligence. So and that's what we've been seeing, right? I think you initially see like some cool demo. So as you mentioned, you see some cool computer use demo, which comes at the beginning of this hyper exponential, I'm sorry, of this sort of plateauing curve. And then it's still an incredibly powerful curve and we're still early in it. But the next demo will be just adding on to making this existing capability more reliable, applicable for more skills. The other interesting incentive in this industry is that because of there's so much competition between the labs, you are incentivized to release a capability as soon as it's even marginally viable
Dwarkesh:
[1:24:32] Or marginally cool so you can raise more funding or make more money off of it. You're not incentivized to just like sit on it until you perfected it, which is why I don't expect like tomorrow OpenAI will just come out with like, we've solved continual learning, guys, and we didn't tell you about it. We're working on it for five years. If they had like even an inkling of a solution, they'd want to release it ASAP so they can raise the $600 billion round and then spend more money on compute. So yeah, I do think it'll seem marginal. But again, marginal in the context of seven years to AGI. So zoom out long enough and a crazy amount of progress is happening. Month to month, I think people overhype how significant any one new release is.
Josh:
[1:25:09] So I guess the answer to when we will get AGI very much depends on that scaling trend holding. Your estimate in the book for AGI was 60% chance by 2040. So I'm curious what guess or what idea had the most influence on this estimate? What made you end up on 60% of 2040? Because a lot of timelines are much faster than that.
Dwarkesh:
[1:25:29] It's sort of reasoning about the things they currently still lack, the capabilities they still lack and what stands in the way. And just generally an intuition that things often take longer to happen than you might think. Progress tends to slow down. Also, it's the case that, look, you might've heard the phrase that we keep shifting the goalposts on AI, right? So they can do the things which skeptics were saying they couldn't ever do already. But now they say AI is still a dead end because problem X, Y, Z, which will be solved next year. Now, there's a way in which this is frustrating, but there's another way in which there's some validity to do this because... It is the case that we didn't get to AGI, even though we have passed the Turing test and we have models that are incredibly smart and can reason. So it is accurate to say that, oh, we were wrong. And there is some missing thing that we need to keep identifying about what is still lacking to the path of AGI. Like it does make sense to shift the goalposts. And I think we might discover once continual learning is solved or once extended computer use is solved, that there were other aspects of human intelligence which we take for granted in this Moravax paradox sense, but which are actually quite crucial to making us economically valuable.
Ryan:
[1:26:37] Part of the reason we wanted to do this, Dwarkesh, is because we both are enjoyers of your podcast. It's just fantastic. And you talk to all of those that are on the forefront of AI development, leading it in all sorts of ways. And one of the things I wanted to do with reading your book, and obviously I'm always asking myself when I'm listening to your podcast, is like, what does Dwarkesh think personally? And I feel like I sort of got that insight maybe toward the end of your book, like, you know, in the summary section, where you think like there's a 60% probability of AGI by 2040, which puts you more in the moderate camp, right?
Ryan:
[1:27:10] You're not a conservative, but you're not like an accelerationist. So you're moderate there. And you also said you think more than likely AI will be net beneficial to humanity. So you're more optimist than Doomer. So we've got a moderate optimist. And you also think this, and this is very interesting, there's no going back. So you're somewhat of an AI determinist. And I think the reason you state for not, you're like, there's no going back. It struck me, there's this line in your book. It seems that the universe is structured such that throwing large amounts of compute at the right distribution of data gets you AI. And the secret is out. If the scaling picture is roughly correct, it's hard to imagine AGI not being developed this century, even if some actors hold back or are held back. That to me is an AI determinist position. Do you think that's fair? Moderate with respect to accelerationism, optimistic with respect to its potential, and also determinist, like there's nothing else we can do. We can't go backwards here.
Dwarkesh:
[1:28:05] I'm determinist in the sense that I think if AI is technologically possible, it is inevitable. I think sometimes people are optimistic about this idea that we as a world will sort of collectively decide not to build AI. And I just don't think that's a plausible outcome. The local incentives for any actor to build AI are so high that it will happen. But I'm also an optimist in the sense that, look, I'm not naive. I've listed out all the way, what happened to the Asics and Incas was terrible. And I've explained how that could be similar to what AIs could do to us and what we need to do to avoid that outcome. But I am optimistic in the sense that the world of the future fundamentally will have so much abundance that there's all these, that alone is a prima facie reason to think that there must be some way of cooperating that is mutually beneficial. If we're going to be thousands, millions of times wealthier, is there really no way that humans are better off or can we can find a way for humans to become better off as a result of this transformation? So yeah, I think you've put your finger on it.
Ryan:
[1:29:05] So this scaling book, of course, goes through the history of AI scaling. I think everyone should should pick it up to get the full chronology, but also sort of captures where we are in the midst of this story is like, we're not done yet. And I'm wondering how you feel at this moment of time. So I don't know if we're halfway through, if we're a quarter way through, if we're one tenth of the way through, but we're certainly not finished the path to AI scaling. How do you feel like in this moment in 2025? I mean, is all of this terrifying? Is it exciting? Is it exhilarating? What's the emotion that you feel?
Dwarkesh:
[1:29:43] Maybe I feel a little sort of hurried. I personally feel like there's a lot of things I want to do in the meantime, including what my mission is with the podcast, which is to, and I know it's your mission as well, is to improve the discourse around these topics to not necessarily push for a specific agenda, but make sure that when people are making decisions, they're as well-informed as possible, They have as much strategic awareness and depth of understanding around how AI works, what it could do in the future as possible.
Dwarkesh:
[1:30:17] And, but in many ways, I feel like I still haven't emotionally priced in the future I'm expecting. In this one very basic sense, I think that there's a very good chance that I live beyond 200 years of age. I have not changed anything about my life with regards to that knowledge, right? I'm not like, when I'm picking partners, I'm not like, oh, this is the person, now that I think I'm going to live for 200, you know, like hundreds of years. Rather than yeah yeah well you know ideally i would i would pick a partner that would ideally you pick somebody who would be that would be true regardless but you see what i'm saying right there's like the the fact that i expect my personal life the world around me the lives of the people i care about humanity in general to be so different has it just like doesn't emotionally resonate as much as i my intellectual thoughts and my emotional landscape aren't in the same place. I wonder if it's similar for you guys.
Ryan:
[1:31:14] Yeah, I totally agree. I don't think I've priced that in. Also, there's like non-zero chance that Eliezer Yudkowsky is right, Dworkesh. Do you know? And so that scenario, I just, I can't bring myself to emotionally price in. So I veer towards the optimism side as well. Dworkesh, this has been fantastic. Thank you so much for all you do on the podcast. I have to ask a question for our crypto audience as well, which is, when are you going to do a crypto podcast on Dwarkech?
Dwarkesh:
[1:31:42] I already did. It was with one Sam Bigman Freed.
Ryan:
[1:31:45] Oh my God.
Dwarkesh:
[1:31:47] Oh man.
Ryan:
[1:31:48] We got to get you a new guest. We got to get you someone else to revisit the top.
Dwarkesh:
[1:31:52] Don't look that one up. It's Ben Omen. Don't look that one up. I think in retrospect. You know what? We'll do another one.
Ryan:
[1:31:58] Fantastic.
Dwarkesh:
[1:31:59] I'll ask you guys for some recommendations.
Ryan:
[1:32:01] That'd be great. Dwarkech, thank you so much.
Dwarkesh:
[1:32:02] But I've been following your stuff for a while, for I think many years. So it's great to finally meet. and this was a lot of fun.
Ryan:
[1:32:09] Appreciate it. It was great. Thanks a lot.