Ethereum's Last Big Upgrade: The zkEVM | Ansgar Dietrichs
Ansgar:
[0:00] And the zkEVM is this fundamental insight that what you can do is you can basically
Ansgar:
[0:04] allow nodes to verify that a block followed all the rules without having to re-execute the block. It's a very non-intuitive thing, right? A blockchain by its nature is a very symmetrical thing. Every node basically does the same thing. Of course, you have block producers, but then every node has to download and re-execute. You're duplicating the effort across the network. And now, through this very fancy cryptography, you're jumping into this world where you still have the same effort to build a block, but verification is, in a way, effortless. It has this magical compression element to it.
David:
[0:42] Bankless Nation, I'm here with Ansgar Dietrichs. He's a researcher at the Ethereum Foundation. We're going to talk about the zkEVM today on the show. Ansgar, welcome to Bankless.
Ansgar:
[0:50] Hey, great to be here again.
David:
[0:53] Pretty ambitious subject, Ansgar. Ethereum has had this history of very big forks, hard forks that have upgraded Ethereum from the early primitive proof of concept it started as in 2015 to what it is today: fundamental infrastructure, the backbone of Internet money and Internet finance. We had the Merge, which took us from proof of work to proof of stake. We had EIP-1559, which upgraded Ether economics and transaction user experience. There's also EIP-4844, which enabled Ethereum's roll-up environment to become its best self. Each of these forks represented a rallying cry for the Ethereum community. They were a grand unifying force of attention within the Ethereum community, and they allowed Ethereum itself to command attention from the rest of the world. The rest of the world paid attention to Ethereum when these forks were incoming. Ethereum was just loud. And I think these represent some of Ethereum's best moments, when Ethereum has these cultural Schelling points for technological upgrades to what we in the Ethereum community consider critical social infrastructure. Now, I think, Ansgar, and I want to suss this topic out with you, that there is another fork on the horizon. It's not soon. It's not this year. It's likely not next year either. But nonetheless, it is there on the horizon, and I think it deserves attention.
David:
[2:21] I think it deserves the treatment that the Ethereum community has given previous forks. And in addition to all of the valuable things we got from the three forks I just mentioned, this one is actually the biggest upgrade that Ethereum will ever experience, because it relates to users more than any of those three forks. And that is the fork that introduces the zkEVM to Ethereum. Now, Ansgar, these are the sentiments that I want to start this podcast off with. Before we get into what the zkEVM is and all the technical details about it, I just wanted to give those sentiments to you and have you reflect upon them
David:
[2:58] before we kind of dive into the technicals.
Ansgar:
[3:00] I personally share your excitement on this topic. I really think it's one of those changes that are Ethereum at its best. It's one of those really ambitious technical projects that I think Ethereum is in a unique position to deliver. It will have a huge impact, primarily on scaling, but in many other ways too; I'm sure we'll talk about all of this. And I really think it's something we can look forward to, something we can be proud of. And yeah, I'm excited to talk about the details. I will say, by the way, you said hard fork. And the interesting thing here is, it's similar to the merge, right? We first had the launch of the beacon chain, which was one moment in time, and then we later had the merge itself, so two separate moments in time. I think similarly, maybe even to a larger degree with the zkEVM, as we'll discuss, it has this nature of an ongoing transition: it's basically about to start, then we will have the main hard fork, and then it will continue after. So it's much more like an ongoing transition,
Ansgar:
[4:00] but yeah, let's dive in.
David:
[4:02] So it is the introduction of an era of Ethereum rather than an acute hard fork. And I think the zkEVM era has the potential to be Ethereum's best era because of what the zkEVM does for Ethereum. So let's stop hyping it up and start to get into the technical details. What do we need to know about what a zkEVM is? What is it? And then we can talk about why it's so significant to Ethereum.
Ansgar:
[4:28] Yeah. So I think, to understand this, you really have to start from the problem statement, right? The zkEVM really arose in the context of scaling. And basically, the fundamental point is that if you run a blockchain, you have these three primary constraints. You have the data, right? Any new block you create first has to get to the user. Then you have the IO: you have to go to disk and get all the data you need to actually verify the block. And then you have the actual verification, the execution, the compute. So those are the three main constraints: the bandwidth, the IO, and the compute. For any blockchain, no matter the design, those are the main constraints. And so...
Ansgar:
[5:10] If you want to scale this, you can just do the thing where you take that and scale it up. And we'll talk about this in a bit. That's actually, to some degree, what we're doing in the short term. And that's what many other chains have been doing. That's a very natural thing. But you do run into limits, tight limits. And so the zkEVM comes from the cryptography side,
Ansgar:
[5:31] these SNARKs, zero-knowledge proofs. And it is this fundamental insight that what you can do is basically allow nodes to verify that a block followed all the rules without having to re-execute the block. And again, that's a very non-intuitive thing, right? Normally, a blockchain by its nature is a very symmetrical thing. Every node basically does the same thing. Of course, you have block producers, but then every node has to download and re-execute. You're duplicating the effort across the network. And now, through this very fancy cryptography, you're jumping into a world where you still have the same effort to build a block, but verification is, in a way, effortless. It has this magical compression element to it.
Ansgar:
[6:18] And then specifically, what's so important in the L1 context is the real-time element. So a zkEVM just allows for this compression. And for example, many listeners, I think, will already be familiar with the concept of ZK rollups. Those have been around for a while. And that actually was a huge first jump in this technology, which allowed for this compressed ZK verification in the first place. But so far, this is done in an asynchronous way. Meaning you have your L2 blockchain, which is its own chain, basically, and it keeps progressing. And then afterwards, with up to several hours of delay, you compute these proofs over a long time and bring them to the chain. The second huge jump here is to go from this very asynchronous, delayed process to a closed loop of block creation, proving, and verification that all happens at the speed of the blockchain, synchronously. So within a single Ethereum slot, right now that's 12 seconds, and we'll bring that even further down, you have this entire closed loop within that short amount of time.
Ansgar:
[7:24] And so basically that's many orders of magnitude of performance improvement. And that really is what unlocks all of these huge gains for the L1.
David:
[7:31] Maybe going back to just what makes a blockchain a blockchain. Bitcoin had this fundamental insight that the way we get rid of a leader in a blockchain is that everyone checks the legitimacy, the authenticity, the correctness of everyone else. And so when some Bitcoin miner mines a block, when it finds the correct hash and proposes that block, everyone else in the network doesn't trust that leader. They re-execute all of the same work to verify it for themselves. And that's the way Bitcoin discovered how to have a decentralized network: everyone's checking everyone else. And that re-execute word has just been the status quo for all blockchains. Everyone redoes all of the work. And the way that impacts all blockchains to this day is that a chain is kind of hamstrung by the slowest node in the network. Or at least there is some...
David:
[8:31] Requirement for computation that every blockchain has. If you aren't at least this fast, you can't keep up with the network, because you can't keep up with re-executing everyone else's work. Now, some blockchains have different opinions as to how high that requirement should be. Bitcoin's is very low. Ethereum's has also been very low, because we want to be decentralized. And as you said, some chains like Solana, or other very fast chains, have had a higher opinion as to the computational requirements it takes to do the re-execution. But nonetheless, all blockchains to this day are re-executing all of the same work, and it's redundant. It seems unnecessary. It seems like, is there a way we can not do all of that extra work and still have a blockchain? And parallel to that, as you said, with the Ethereum layer twos, what we understand is that there is a way to not do this. And that is with ZK proofs. So in addition to the technological progress of blockchains as a whole, we can make them more efficient. We can juice some of the throughput.
Ansgar:
[9:36] On a parallel path.
David:
[9:38] There are cryptographic algorithms where, instead of allowing or forcing everyone to do the re-execution, you can simply verify a cryptographic proof. And that part is trivial. It's easy to verify. It's hard to produce, in the same way a block in a blockchain is hard to produce, but it's trivial to verify the correctness of a cryptographic proof. And that's kind of the trick. That's where we remove the re-execution. A great Elon Musk quote here is, the best part is no part at all. And what a cryptographic proof does is remove the whole part of re-execution.
David:
[10:20] So blocks in a blockchain get executed once, and then no one has to actually re-execute them. They can just trivially verify them, which allows a lot of redundant work to be removed from the system. Work gets constrained down to one block producer, and then everyone else just gives a thumbs up that it's correct, and we really take the brakes off of a blockchain system. Now, the reason why Bitcoin wasn't built like this in the first place, the reason why Ethereum or any other blockchain wasn't built like this in the first place, was, you know,
David:
[10:53] Technological progress along cryptographic proofs also needed to mature. Maybe you could take everything that I just said and run with it, but also talk about the technological parallel path of cryptographic proofs as they've been progressing alongside blockchains.
Ansgar:
[11:10] Yeah, absolutely. So actually, just to start with the Bitcoin example, because some listeners might have heard this and might have been like, hey, actually, isn't there this asymmetry as well, where a miner does all this very expensive work, but then every other node doesn't have to redo the same mining? And indeed, in the mining process, there's the same efficiency asymmetry. It's actually a very common trick in cryptography: with mining, you try all these different hashes, you find one hash that has enough leading zeros, that's how the difficulty in Bitcoin works, and then you can just show it to people, and it's very cheap to verify. So on the consensus mechanism side, Bitcoin already uses a similar trick. But on the actual content of the block? What is in a Bitcoin block is all the transactions. Each transaction comes with a signature. So you have to actually verify the signatures. You have to say, okay, balance was moved from this account to that account. All of the actual operations of the blockchain, that's the re-execution part, right? So Bitcoin has this, again, because this is a very typical trick
Ansgar:
[12:10] in cryptography, this asymmetry of generation versus verification. It uses that for mining, because that's easy to do with proof of work.
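The generate-expensive, verify-cheap asymmetry Ansgar describes can be sketched in a few lines. This is a toy, not real Bitcoin mining (real headers are 80-byte structures hashed with double SHA-256), but the shape is the same: finding the nonce takes thousands of hashes, checking it takes one.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """A single hash: checking the proof is vastly cheaper than producing it."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine(b"block header", 12)       # expensive: ~2^12 attempts on average
assert verify(b"block header", nonce, 12)  # cheap: one hash
```

The zkEVM's promise, as described in the conversation, is to extend this same asymmetry from the consensus puzzle to the full contents of a block.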
Ansgar:
[12:18] It's very, very hard to do this for the actual operations within a block. And so this is the main unlock here: we're bringing the same efficiency that people are used to from this one-miner-mines, everyone-verifies-easily setup to the entire contents of a block. And of course, on Bitcoin, the actual block is very small, with very simple operations. On Ethereum, because you can run smart contracts, and because we are massively scaling the throughput, it's much more complex. The vast majority of the overhead in processing and following the chain is not the consensus part, not the proof of stake part, but the actual contents of a block.
Ansgar:
[12:56] So what has changed in cryptography? Actually, my friends from the 0xPARC team, one of those cryptography research labs, always talk about, I think they call it, maybe I'm getting this slightly wrong, the first generation of cryptography and the second generation of cryptography. What was the first generation of cryptography? It was basically handcrafted algorithms for very specific use cases. So a signature algorithm, or a hash function, or anything that fulfills a very specific purpose and can be used in a very specific context. And those are amazing, right? That's been the story of cryptography for the last 50 years: more and more sophisticated special-purpose mechanisms.
Ansgar:
[13:38] And those were already very mature when, say, Bitcoin started. This is why it was able to just take the concept of hash functions off the shelf, and signature mechanisms, all that kind of stuff, and do amazing things. What is very new, it basically started a decade ago or so, probably academically a little bit earlier, and I'm not actually a cryptography expert myself, so I don't know the exact early story there, is basically cryptography 2.0, in a sense. It's general-purpose cryptography. It is the ability to make cryptographic statements about arbitrary computation. Instead of having to handcraft it for a specific use case, you're going to this general-purpose world. And this is a huge leap, because it means that instead of just, say, signing a message, you can prove whatever you want. Anything Turing complete, any execution whatsoever, you can now compress, you can make a cryptographic statement over. And that...
Ansgar:
[14:32] That was a giant leap. It was pulled from academic theory to feasibility, I think, through a lot of funding that came from the blockchain space, of course.
Ansgar:
[14:44] And it's really incredible progress. I would think of that progress as several stages. One was what we saw with ZK rollups, and of course, already prior to that, special-purpose chains like Zcash: just the ability at all to take a protocol and make a proof of it. You can basically prove that a block of a blockchain is valid. What we've seen since is this progression of the tech stack. So for example, all of these earlier stages, again, Zcash, early ZK rollups, what they all did is they basically handcrafted the rules
Ansgar:
[15:24] of the chain that they were trying to verify into something very low level. They're called circuits. You basically express the rules as very low-level constraints that you then make these zero-knowledge proofs about. And where we've gone from there really parallels the early progression of computers as a whole, right? We went from, you have to manually specify every individual thing you
David:
[15:50] Want to prove. Instruction, yeah.
Ansgar:
[15:51] Yes, as this set of constraints, of circuits. It basically went from there to introducing, and it's such an elegant idea, but it's crazy that it works, just introducing this intermediate instruction set. It's called an ISA, an instruction set architecture. And you can think of it like how a processor in a computer has an instruction set. x86, for example, for Intel, or ARM, or whatnot: basically, what instructions does your processor understand? And the way these modern ZK systems are now built is, you pick one of those instruction sets. The one that is actually becoming the standard in Ethereum right now is RISC-V. RISC-V is similar in principle; it's just a list of operations that your processor could do. It's often not actually run on real RISC-V hardware; it's mostly run in a virtualized kind of way, but basically it's just a list of instructions. You then write zero-knowledge provers that can prove arbitrary RISC-V code. So you're just saying, look, give me any RISC-V code, and I have this machinery that can make cryptographic statements about it. And what that unlocks is, instead of having to handcraft, like the early ZK EVMs, which were literally handcrafted EVMs inside of ZK systems, now you can just compile. You can take an Ethereum client and, instead of compiling it to whatever your local machine has as an instruction set, instead of compiling it to x86 or something,
Ansgar:
[17:20] You're now just compiling it to RISC-V, and then you just get the ZK proving for free.
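The instruction-set idea Ansgar describes can be illustrated with a toy interpreter. Real zkVMs target RISC-V; the made-up three-instruction ISA below just stands in for it. The point is that the machine records an execution trace, and the prover's job is to make a succinct cryptographic claim about that trace, so a verifier never needs to re-run the program.

```python
# Toy ISA interpreter: a zkVM runs a program instruction by instruction and
# records an execution trace; the prover then commits to that trace so a
# verifier can check "running P on input x yields y" without re-executing.
# (Made-up 3-instruction ISA for illustration; real zkVMs prove RISC-V.)

def run(program, regs):
    trace = []  # one row per step: (pc-after, opcode, registers-after)
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LI":            # rd = immediate
            rd, imm = args
            regs[rd] = imm
        elif op == "ADD":         # rd = rs1 + rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "MUL":         # rd = rs1 * rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] * regs[rs2]
        pc += 1
        trace.append((pc, op, dict(regs)))
    return regs, trace

# compute 3 * 4 + 5
program = [("LI", "a", 3), ("LI", "b", 4), ("MUL", "c", "a", "b"),
           ("LI", "d", 5), ("ADD", "c", "c", "d")]
regs, trace = run(program, {})
assert regs["c"] == 17              # the claimed output
assert len(trace) == len(program)   # trace length = steps executed
```

Because a compiler can target the ISA, any program, including a whole Ethereum client, becomes provable without hand-building circuits for it.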
Ansgar:
[17:25] And RISC-V is just a typical endpoint for compilers, right? So basically, you're modularizing the toolchain. And of course, that's only possible now with all the efficiency gains, because you're losing some of the benefits of handcrafting all the optimizations. But this really is a phase change in how feasible it is to do this for big, complex projects. And so really, the way Ethereum does this zkEVM is, again, of course, the real world is a bit more complex, but in principle, you can really think of it as: we take the existing Ethereum clients and compile them to RISC-V, and then we have provers that specialize in making proofs over RISC-V. It's really amazing how far the industry has gone to make that feasible. And then the last big conceptual jump from there to this becoming feasible for us is the real-time element. So you arrived at that world, and you could do that within an hour. And sometimes, if the block is actually convenient to prove, maybe you could get it down to a few minutes. That's the world we used to be in. And then we had this massive industry collaboration effort that started about a year and a half ago, with Justin Drake really pushing super hard on this. And these teams, this is really mostly driven by teams outside the Ethereum Foundation, have done an absolutely amazing job. And I would say the last year was really the year of performance, of real-time performance.
Ansgar:
[18:50] Throughout the last year, teams just kept pushing this down, orders of magnitude, and now we're starting to hit the target zone. We are actually able to consistently, reliably prove a full Ethereum block within five seconds, something like that. And that's basically the promised land, because now we have all the technological building blocks. We can talk about the rollout and all these things, but from the cryptography side, we now finally, for the very first time ever, have all the elements we need to run a general-purpose blockchain at real-time proving speeds. And that's something that has
Ansgar:
[19:26] never been possible before.
David:
[19:27] I really like the idea that there have been these three parallel paths of computing. First, computers: they started narrow, then we were able to make them generalized, and then we were able to make them generalized and fast, which is where modern computers are to this day. Then we created blockchains, these virtualized, ledger-based computers in the sky, decentralized systems. They started narrow with Bitcoin, then we learned to generalize them with Ethereum, and then we learned to generalize them and make them fast with many other smart contract chains. And now we are doing the same thing with cryptography: it started narrow, we learned to make it generalized, and now we are making it generalized and fast. And that generalized-and-fast unlock on the computing tech tree of cryptography can now be taken and bestowed upon Ethereum, which is what we're going to talk about for the rest of this episode. So now that we have the zkEVM, and it's in the Ethereum blockchain and it's up and running, what does that actually change about Ethereum? When we get to this point, how does Ethereum actually change?
Ansgar:
[20:37] Right. So of course, we're not there yet, but that's where we're going. And so why is this useful? Coming back to scaling: I said that there are basically these three main elements of scaling. There's the bandwidth, the IO, and then the actual compute. Now.
Ansgar:
[20:51] The amazing thing about the real-time zkEVM is that it's the core of a broader transition. The way I would put it is, it helps us scale all three of these, but not just on its own: it's the unlocking piece that enables a broader transition addressing all of these elements of scaling. And so that's why, when we talk about the zkEVM, to me it's the most exciting element of this broader change. And that's why, when you said at the top of the podcast that this might be the biggest change ever, I would agree. It's not just the zkEVM itself; we'll talk in a second about statelessness, about data availability sampling. All these things come together to unlock this. So let's take it step by step. Of those three constraints, the one immediate impact you get is on the compute side, because that's the nature of ZK proofs: with very little compute effort on the verification side, you're able to verify arbitrary-length execution, no matter how much you fill the block. Now, of course, we can talk about constraints. There's still block building; some node somewhere needs to do that, so it doesn't give you literally infinite throughput. But basically, whatever length of computation you have, you can compress it down into a constant-size proof and then verify that with very little compute. So compute scaling is, in a way, the easiest one. That's the one you get very easily.
Ansgar:
[22:11] Now, you look at the other two, and you're saying, okay, how does it impact IO? So historically, traditionally, when you execute an Ethereum block, you start executing, you do some compute, and at some point you want to load some state. Actually, already at the beginning of a transaction, you need to load your account, you need to load the account that you're calling into, that you're sending ETH to. So you immediately need to go to disk, right? You have this intermixing: sometimes you go to disk and load a value, sometimes you do some compute, then you go to disk again. One actual change to Ethereum that we're already doing before the zkEVM is called block-level access lists. It basically adds annotations to a block saying, this is the data you'll need. So what happens now is that you go to disk at the very beginning, you bring in all the data, and then you can do the execution. But you still have this element of having to go to disk both before the block and then again after the block, to update all the values and then also compute the new state root.
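The reordering Ansgar describes, one batched disk read up front instead of reads interleaved with execution, can be sketched roughly like this. All names here are illustrative; real block-level access lists are specified in an EIP with their own encoding and real clients do far more than balance transfers.

```python
# Sketch: with a block-level access list, a node prefetches every state key
# the block declares it will touch, then executes purely in memory, with no
# disk I/O on the critical path. (Illustrative names, not a real client.)

def execute_block(block, disk):
    # 1. Single up-front batch read of everything the block says it needs.
    state = {key: disk.get(key, 0) for key in block["access_list"]}
    # 2. Pure in-memory execution: here, simple balance transfers.
    for sender, recipient, amount in block["txs"]:
        state[sender] -= amount
        state[recipient] += amount
    return state  # writes can be flushed to disk after verification

disk = {"alice": 100, "bob": 10}
block = {"access_list": ["alice", "bob"],
         "txs": [("alice", "bob", 30)]}
print(execute_block(block, disk))  # {'alice': 70, 'bob': 40}
```

The win is that the slow step (disk) happens once, in a batch that can be parallelized, rather than stalling execution at every state access.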
Ansgar:
[23:12] So how does it look with the zkEVM? Well, there are a few things that are fundamentally improved. The important part is that the zkEVM already takes this in as part of the claim: hey, assuming the blockchain was in this state, and I apply these transactions, then the next state is this. So you no longer need to go and load the values from disk. You're naturally saving this IO on the load side. And then the thing you normally still have to do is go and write the updates, right? If you still keep the state of Ethereum, then after you've verified the block, you still have to go and say, okay, these values changed, and apply that change. But one, that's no longer on the critical path. You can do that after you've already finished verification. So if you're a validator, you can already vote, you can say, ah, this block was valid, and then afterwards go and actually apply the updates. So in terms of, what is the price of this Uniswap pool? Or what's the balance of this account?
Ansgar:
[24:11] I might only go update this on disk after I already know that the block is valid. So that's a natural benefit you get. But if you want to push it further.
Ansgar:
[24:20] We have to, and this is what I was saying, this is one of those changes that is enabled by the zkEVM, but it's its own change: stateless Ethereum, or partially stateless Ethereum. So what does that mean? Well, today, any node in the Ethereum network basically has to have the full state. And with re-execution, that is unavoidable, because if you want to verify a block, you have to go and load all the data. You have to have it all locally.
Ansgar:
[24:45] Once you have the zkEVM, that becomes optional, because you don't actually need the data locally to double-check the validity of the block. So in principle, what you could do is throw away the entire state. You could keep only this root commitment, always just update the root commitment, and that's it. In practice, because Ethereum nodes have multiple functions, they also operate the Ethereum mempool, they have to understand the validity of transactions in flight, all these kinds of things, you don't want to run fully stateless. You want to run in what we're calling partial statelessness. For example, there's this proposal called VOPS, validity-only partial statelessness. It means you keep a specific subset of the state, and that subset can be defined by several different rules. It can be, say, the balances of all the accounts, or, if you are specifically interested in some state that belongs to you as the user, you can define what state you're interested in. But basically, now you can keep a subset of the Ethereum state, and that's totally safe because of the zkEVM. And you only have to apply the diff, you only have to go to disk, you only have the IO overhead of updating that subset.
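A partially stateless node, in the shape Ansgar sketches, would verify the proof against its tracked state root and then apply only the slice of the block's state diff it cares about. Everything below is a hypothetical interface: `verify_proof` is a stand-in for real zkEVM verification, and the diff format is invented for illustration.

```python
# Sketch of a partially stateless node: it trusts the ZK proof for validity,
# keeps only a chosen subset of state (e.g. accounts it watches), and applies
# just the part of the block's state diff touching that subset.

def verify_proof(proof, pre_root, post_root):
    # Placeholder: a real verifier checks a succinct proof that executing
    # the block transforms pre_root into post_root.
    return proof == (pre_root, post_root)

class PartialNode:
    def __init__(self, state_root, watched_keys, local_state):
        self.state_root = state_root
        self.watched = set(watched_keys)   # the subset this node keeps
        self.local = dict(local_state)

    def apply_block(self, proof, post_root, state_diff):
        if not verify_proof(proof, self.state_root, post_root):
            raise ValueError("invalid block proof")
        self.state_root = post_root        # always track the root commitment
        for key, value in state_diff.items():
            if key in self.watched:        # IO only for the watched subset
                self.local[key] = value

node = PartialNode("root0", ["alice"], {"alice": 100})
node.apply_block(("root0", "root1"), "root1",
                 {"alice": 70, "bob": 40, "carol": 5})
assert node.local == {"alice": 70}   # only the watched key was updated
```

The validity check and the state update are decoupled: the node can attest that the block is valid before (or without ever) writing most of the diff.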
Ansgar:
[25:52] That's the second piece, basically. You have the zkEVM for compute. Now you have partial statelessness for more optimized IO, and also, by the way, for keeping your disk size contained. We'll talk about state growth maybe towards the end, but basically, you know,
Ansgar:
[26:05] so you don't have to have a huge disk. And that leaves the third one, which is bandwidth. How do you actually keep scaling the chain now, with the ZK system, while keeping bandwidth requirements the same or even reducing them? Well, that's yet another separate trick that's also enabled by the zkEVM, but it's separate. And that is:
Ansgar:
[26:28] You no longer actually need to download the full block. And that makes sense, right? Because you get the ZK proof, you download the proof, and the proof tells you: hey, assuming there is a block with this hash, once I apply the block, this is the result. And that's proven. So the only thing you need to know about the block is that it exists. And that's a bit of a nuanced thing. Why do you even need that? I mean, someone clearly must have created the block, otherwise they could not have created the ZK proof. So why do you have to verify that it exists? Well, for the nuanced reason that the data could otherwise be withheld. That's also the same reason we even have blobs in the first place; for L2s, it's the same story. You have to basically prove that the block was published, so anyone can access it, so anyone can get access to the transactions that were applied. But what you can do, and this is again where the synergy with the L2s is just a beautiful story: we've already built out specialized functionality for verifying the existence of data very efficiently without downloading it all. It's called data availability. It's called blobs, right? So what we will do is take the Ethereum blocks and basically become our own rollup, in a sense. We're putting the data into the blobs. It's a proposal called blocks-in-blobs.
Ansgar:
[27:39] And with that, all an Ethereum node has to do is sample. Sample the data. And we'll be in the process of making that more and more efficient, because we want to provide more and more data for our L2 partners. And that now naturally benefits ourselves as well, because you can have bigger and bigger blocks while keeping the bandwidth footprint very constrained. So now, coming back: we have the zkEVM, we have partial statelessness, and we have blocks-in-blobs with data availability sampling. Together, they scale bandwidth, they scale IO, and they scale compute. And that is how you use all of these elements to scale the blockchain. And then there are some nuances. You don't get everything for free. You have state growth, which we have to address separately; we can talk about that. And you have things like being able to efficiently sync an Ethereum client. There are things like being able to efficiently run an RPC node,
Ansgar:
[28:29] you know, like what Infura is doing, these kinds of things. So there's more to scaling than this. But the core story is that you have these three constraints, and the zkEVM directly and indirectly addresses all three.
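The "sample instead of download" idea can be shown with a toy. Real data availability sampling relies on erasure coding and polynomial (KZG) commitments so that a few samples give strong availability guarantees; the sketch below uses plain per-chunk hash commitments and only illustrates the shape of the protocol: fetch a few random pieces, not the whole blob.

```python
import hashlib
import random

CHUNK = 32

def commit(blob: bytes):
    """Split the blob into chunks and commit to each chunk's hash."""
    chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
    return chunks, [hashlib.sha256(c).hexdigest() for c in chunks]

def sample(commitments, fetch_chunk, k=4):
    """Fetch k random chunks from the network and check each commitment."""
    for i in random.sample(range(len(commitments)), k):
        chunk = fetch_chunk(i)  # ask the network for chunk i
        if chunk is None or hashlib.sha256(chunk).hexdigest() != commitments[i]:
            return False        # withheld or corrupted data
    return True

blob = bytes(range(256)) * 8                 # 2048 bytes -> 64 chunks
chunks, commitments = commit(blob)
assert sample(commitments, lambda i: chunks[i])   # honest server: data available
assert not sample(commitments, lambda i: None)    # data withheld: sampling fails
```

The bandwidth saving is the point: the sampler touches a handful of 32-byte chunks instead of the whole blob, and in the real protocol, erasure coding ensures a withholder cannot hide even a small part of the data from enough samplers.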
David:
[28:39] You zoomed in on each one of those three. And as you just said, you put those three together, that's how a blockchain becomes a blockchain, and we improve all three of those things. I want to zoom out and really focus at that level of advantage. When we reconstruct how a blockchain becomes a blockchain on all three comprehensively, you really kind of said it when you said Ethereum uses its own data availability to be a ZK rollup. As I understand it, when the ZKEVM is up and running and operational and fully fleshed out and forked into Ethereum, the Ethereum layer one has the performance of a blockchain that would be a ZK rollup. In fact, maybe it even is a ZK rollup. It just also is the layer one itself. And so we get all the performance benefits of rollups, we get to ZK everything, which takes off the brakes on the Ethereum layer one, and we already have the infrastructure needed, with data availability sampling, for this to get done. And so from a performance perspective, the Ethereum layer one, which is known to be a slow, antiquated, you know, expensive blockchain to do computation on, upgrades itself to have the performance properties of a ZK roll-up. Is that a true statement that I just said?
Ansgar:
[30:05] Yeah, I think that's right. And I think it's important to understand why Ethereum is so slow, right? If we ask that provocative question, the one really important element is that core to Ethereum's design philosophy is this guarantee that Ethereum never wants to compromise on, which is easy verifiability and auditability. So the world that Ethereum always wants to be in is that anyone, any user of Ethereum, can easily, if they want to, verify or audit that the protocol is following the rules. And why is this so important? People are always like, well, but in practice, many users don't do it. And for other chains, yes. For example, if you're trying to join one of those high-performance chains that scale just by rapidly increasing hardware requirements, it's really, really hard to run a full node. Because not only do you need a heavy machine, but often you're not even allowed to join the peer-to-peer network, because it's so performance sensitive that they have to keep white lists for which nodes are even allowed into the network, because otherwise they are just too brittle, right? They just immediately collapse. So why does it matter? Because I think people always think about proof of stake as, well, there are validators and they vote on what's the current state of the chain.
Ansgar:
[31:26] In Ethereum, validators basically get handed the current rules of the chain by the community, right? Any hard fork is basically a social decision, a social governance act. The Ethereum community decides that now there are new rules to the chain. And the validators only vote on, okay, given those rules, which blocks did I see, which blocks follow the rules. There's no individual decision that an Ethereum validator makes, right? They just watch the chain and they just attest to what they see. In other proof of stake chains, while in principle that should be the same thing, what in practice happens is that any non-validating user of the chain is just a light client, because you can't just participate in the chain.
Ansgar:
[32:10] Basically, any user in those chains just trusts the majority of validators. So in practice, those validators determine what the rules of the chain are, right? In a chain that does not center verifiability, validators de facto control what the rules of the chain are. If the majority of validators want to run a different set of rules, they can do that. In Ethereum, that's not the case. Validators can't accept or reject a fork; they can just make a fork of their own. They get handed the rules by the community, and the ultimate power always lies with the community, right? So that's why verifiability and auditability are so core to Ethereum. And that's why we have been historically slow to embrace scaling, because that would endanger that property. And now with ZKEVM, we have this magical way of
Ansgar:
[32:53] Of getting the best of both worlds, getting the full verifiability and the full performance. Although I will say, all of this is a bit too black and white. Actually, what's been happening, so for example, while I'm personally involved with our ZKEVM work, we have experts. We have Justin, who's been on the podcast many times before. We have Kev, who's doing absolutely amazing work there. We have many people who work on this full-time. And I'm actually focused much more on short-term scaling. And so while it is true that with traditional scaling there's a limit that you can reach, and otherwise you basically have this fundamental tradeoff you can't escape, Ethereum historically has been very much in this mode of, well, we're working towards this eventual end state, you know, and we know we want to eventually do ZK, so we'll focus on that. And as of, say, a year, year and a half, two years ago, I think the mindset on Ethereum has shifted a lot towards saying, look, we're now in this moment in time. Real world adoption is here, right? It's no longer this future thing that we're building towards. So we have to act now. And it's actually a very non-trivial thing. We have to find the right balance between still working on these Manhattan Project type jumps, like real-time ZKEVM. I really think, like you said, it's the biggest thing Ethereum probably will ever have done.
Ansgar:
[34:14] But we can't just wait for another three years for this to arrive. We have to do things now. And so this is why scaling now is this perfect example. We have this really good hybrid approach. We started last summer. We're saying ZKEVM is three years out. And in a second, I think, we'll talk more about the exact sequencing of the rollout. But we don't want to wait three more years. That's what the old Ethereum would have done. What we're actually doing is we came up with this scaling plan. And it's a very continuous, smooth function. So our goal is basically, we have this rule of thumb, we're saying our goal is 3x scaling every single year. So we are increasing the throughput of the Ethereum blockchain by roughly 3x every year. This is more of a goal, an ambitious statement. It's not clear that every single year we'll be able to hit that, but we think we see a path at least. It's a possible outcome.
Ansgar:
[35:04] And in practice, the first three years of that scaling happen with traditional means, and then from that point on, basically, we have the smooth handover into the ZKEVM paradigm. So it's not all just black and white, with Ethereum only doing ZKEVM. Actually, I think we have the best of both worlds now. For the next two, three years, we are doing the ZKEVM in parallel.
Ansgar:
[35:27] But we're still doing the traditional scaling. And then we jump into the ZKEVM paradigm. And so that means if you're a builder and you're considering building on the Ethereum L1, instead of having to think, okay, when is this hard fork and what exactly does it change, you can just say 3x every year. You look at the throughput today, and you can very simply calculate: what throughput needs do I have? Is the L1 a good fit or not? It's a very simple story, but under the hood, it has these two synergistic elements to it. Sorry, that was a long answer there.
David:
[35:55] Yeah. The idea is that we're pressing the gas on scaling on multiple fronts, not waiting for the Manhattan Project of the ZKEVM. The ZKEVM has been in the Ethereum roadmap since genesis, I think. We've understood theoretically the possibility of turning the EVM into a ZK system, and we understood that back in 2015. Now we're in 2026, and it's now just an engineering challenge, and we're in the last mile of this. It's basically almost here. And in the meantime, we are scaling on the more traditional front as well. I want to get into the qualitative nature of the scale of the ZKEVM. So, block times and block sizes: those are the two components of throughput. You have how big your block is and how frequently those blocks come; throughput is basically size times frequency. So can we talk about what the nature of scaling with a ZKEVM does? Does it help lower block times? Does it just increase block size? I want, Ansgar, both fast and big blocks. I like my blocks big and fast. It would be great if we could increase the size of blocks, but there's also a very important element where block times are critically important for trading and finance. So how does the ZKEVM impact both of these variables?
Ansgar:
[37:19] Right. So to answer that question directly: the ZKEVM, indeed, is not a panacea. It specifically addresses the throughput dimension. So it gives us much, much bigger blocks within the same kind of time constraints. To be fully transparent, it even puts a small extra strain on the timing, just because you have one extra step, right?
Ansgar:
[37:37] You have this proving step that sits in between block creation and block verification. You have to have proving, but that's a minor constraint. But it in itself does not give us lower latency. And this is why, when you said at the top that it's the biggest ever change, I was actually tempted to say, well, to me, that's true on the execution side of the blockchain, right? Same as with Bitcoin, where we said there's the consensus mechanism, proof of work in that case, in our case proof of stake, and then there's the actual processing of the blocks, Bitcoin transactions, Ethereum transactions, that kind of thing. For the actual execution, for the transaction bits, the ZKEVM and the related changes really are the major story for the next five years.
Ansgar:
[38:16] We, in parallel, are also now putting together this really, really exciting roadmap on the consensus layer side. And the latency, that's all a consensus layer story, right? Because that's where the heartbeat of the blockchain is determined. And so we have this separate process. And, you know, this is maybe setting us up for a separate podcast episode. You should bring on someone who's specifically focusing on that type of work at the EF or in the broader ecosystem. Because I think we have this really exciting roadmap there that's getting us to much faster finality. So right now, finality in Ethereum takes two epochs, that's 64 slots.
Ansgar:
[38:55] On average it's actually even two and a half epochs, so it's a long, long amount of time. We're bringing this down all the way to basically single slot finality, two slot finality. It's going to come down by orders of magnitude, so that's super exciting. And then even within the single slot, instead of 12 seconds, we have a story there that's going to gradually get us down from 12 seconds to, I don't know, eight, six, four, much, much faster. And then there are separate work streams around: can you get even faster inclusion guarantees? So that's the heartbeat at which the chain actually progresses, and where you get guarantees about the result of your transaction. But can you maybe get, in principle, speed-of-light, round-trip-time confirmation that your transaction will be included? Ideally, I want to click a button, and within the hundred milliseconds it even takes me to realize something happened, boom, I have the confirmation that my trade will be included. And then within, say, four seconds, I know at which price. I think that's the world we ideally want to be in. And we have a really, really exciting roadmap there as well, but it is a separate roadmap from the ZKEVM.
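The numbers Ansgar quotes follow from the chain's timing constants. Assuming the current parameters of 32 slots per epoch and 12 seconds per slot, a quick back-of-the-envelope calculation shows why single-slot finality is such a large jump:

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32   # so two epochs = 64 slots, as mentioned above

def finality_seconds(epochs: float) -> float:
    """Time to finality for a given number of epochs."""
    return epochs * SLOTS_PER_EPOCH * SECONDS_PER_SLOT

floor_today = finality_seconds(2)      # 768 s, about 12.8 minutes
average_today = finality_seconds(2.5)  # 960 s, a full 16 minutes
single_slot = 1 * SECONDS_PER_SLOT     # 12 s: roughly an 80x improvement
```

Going from a 960-second average to a 12-second slot is an 80x reduction, which is the "orders of magnitude" Ansgar refers to, before the slot time itself comes down further.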
David:
[39:59] Okay, understood, understood. So the ZKEVM massively increases block sizes. I don't know if you can put numbers around that. And then it adds a marginal increase in block times. Can block times come down in the future? What does it take for block times to get faster? And is that something we're aspiring to in the roadmap?
Ansgar:
[40:18] Yeah, that's what I was just talking about. We are aspiring to that. And it's not just aspiring, that sounds like indefinite optimism. We actually have a plan for it to come down. It will come down as early as towards the end of this year. That's not quite certain yet, but basically we're starting to make this a priority as well, and it will rapidly become a major priority.
David:
[40:37] So maybe the part that I wasn't sure of is that maybe block speeds don't necessarily come down, but transaction assurances come down very, very fast, and you're kind of saying, well, that's what people want anyways. Is that correct?
Ansgar:
[40:49] Well, basically, you have three things. You have the time to inclusion confirmation, you have the actual time to the next block, and you have the time to finality. All three of these will come down. The heartbeat of the chain, the time to next block, will actually be the one that's only going to come down maybe by a factor of three, something like that, from 12 seconds to maybe four seconds eventually. Maybe we can go lower, but I wouldn't necessarily want to promise this. I think the other two are actually the more exciting ones. Finality will come down massively, and time to inclusion, that's a bit more of an exploratory process still, but that also will come down massively. So basically, yeah, block times as well will come down. But none of this will be through the ZKEVM, although, of course, it will all be part of an integrated system.
David:
[41:28] Okay. Understood. So you're saying there's a variety of ways in which Ethereum speeds up broadly. And then zooming into what speeding up means has nuances, which you just went into. And at least from a user experience perspective, we have ways of providing essentially instant speeds from the perspective of a user. Let's talk about the rollout plan for the ZKEVM. We are in a phase of Ethereum where there is no ZKEVM. In the future, we will be in a phase of Ethereum where it is all ZKEVM, but it is not an acute moment, as I understand it.
David:
[42:05] How do we go from A to B? What does that roadmap look like?
Ansgar:
[42:08] Of course. Because this is a multi-year process, as is typical, there are very concrete steps for, say, the next 12 months, and then as you go further into the future, I can more point out, this is the current plan, these are maybe the open questions, these are the directions, right? That's how these things always work. The interesting thing, as I said at the top of the podcast, is that it's not just a one-time hard fork. There will be a one-time hard fork, and that is about the eventual switch from what will come first, which is optional ZKEVMs for those nodes in the Ethereum network that want to consume proofs instead of re-executing. Then at some point, there will be this moment in time where we say, okay, now Ethereum just runs on proofs. Of course, you can still run a node optionally in re-execution mode if you want to. But by default, the network now guarantees that there will always be proofs, basically. And it's from this point, the switch to mandatory proofs, that you really get the scaling gains. Because before then, you're basically not yet mandating anything, right? You're still allowed to run a full re-execution node, and you're allowed to be slow, and
David:
[43:14] The network will hear you.
Ansgar:
[43:15] Exactly. And after that, if you want to be a re-execution node, that's a special purpose role now that requires special purpose hardware. Of course, internally it is a big project: how do we make sure that if we run at much faster speeds, you can still run an RPC node in a performant way, right? So this is a separate work stream that we're working on. But in terms of the typical validator, and even the typical full node out there that's not a validator, those people will basically all by default at that point switch over to ZK.
Ansgar:
[43:44] Again, as I was saying, before then is this phase of optional proofs. So that has not started yet. Right now we're in the proof of concept phase. I think Justin presented in Buenos Aires this proof of concept of, hey, see, my validator can in principle already run on ZK. But if you're a validator, you can't use this yet today, right? The idea is that very soon, meaning within, say, the next 12 months or so, we are starting to put this out there in an early production-ready state. And of course we will give very clear guidance: this is the specific, nuanced level of confidence we have so far in the security of the system, all these kinds of things, right? For example, at that point we could not yet have the majority of the network run on this, because if there is some bug with it, you very much still want the backbone of all the major validators to run on re-execution. But if you are just a full node, just for hobby purposes, or maybe you're a validator on a very weak machine, you might be tempted to transition over at that point. So that will be the first step.
Ansgar:
[44:49] And then one thing we haven't really touched on yet, well, I guess a little bit, is that there are actually quite a few technical requirements that we need to hit before we can move the bulk of validators over. And I can briefly go over those. So one we already touched on, for example, is block-in-blobs, which will come at some point, where we basically say, look, we now put the block into the data layer. So there's also the sampling aspect to it. If you are a re-execution node, you still download all of it. But if you're now a ZK node, you can start only sampling it, right? But this will come after the initial optional proofs rollout. So before then, a validator basically has to download the proof, but also still has to download the full block. That means they don't yet gain any bandwidth benefits; they only get the IO and the compute benefits. So basically, we have block-in-blobs that will have to come, we have general networking improvements that are in the works, and we have repricings, meaning we have to make the parts of the Ethereum chain that are especially hard to ZK-verify a bit more expensive. We basically rebalance the costs.
Ansgar:
[45:54] And then the most important technical dependency for the mandatory proofs, the full transition basically, is actually related to the statelessness element. And that's specifically that we need to transition the Ethereum state tree over to a new format. Long-term listeners might be familiar with this elusive Verkle tree idea, right? And Verkle trees were this early Ethereum idea of, hey, we currently have a Merkle tree, so any account in Ethereum is part of this huge tree structure, and every block, the tree is updated. And at the leaves, you have your balance and all these individual elements about your account.
Ansgar:
[46:35] The original idea was to transition this over to a more efficient form, called Verkle trees. And the unfortunate fate that Verkle trees had is that they were just never really necessary. They were always one of those nice-to-have features. Back then, we were not quite sure how aggressively we wanted to scale the chain, how quickly state growth would become a problem. There were some worlds in which it would have been a more urgent topic, but because we never went down those routes, it was always right beyond the edge of urgent enough to ever do. So we never ended up shipping Verkle trees.
Ansgar:
[47:04] But the nice thing is we now already have a lot of prior work. And now we can actually go directly to the next generation of cryptographic structures here. So instead of a Verkle tree, we're going to something that's basically called a unified binary tree. It's somewhat similar. The main difference in shape is that a Verkle tree is a very wide tree, while a binary tree is a very narrow tree.
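To make the narrow-tree shape concrete, here is a toy binary Merkle root computation in Python. This is purely illustrative: it uses `hashlib.sha256` as a stand-in for the actual hash function (which, as discussed next, is chosen to be post-quantum secure and cheap to prove), and it ignores all the real state-tree details like key paths and account encoding.

```python
import hashlib

def h(b: bytes) -> bytes:
    # sha256 as a stand-in; the real design picks a hash that is both
    # post-quantum secure and efficient to prove inside a zk circuit.
    return hashlib.sha256(b).digest()

def binary_root(leaves: list) -> bytes:
    """Fold leaf values into a single 32-byte root, two children at a time.
    Arity 2 makes the tree tall and narrow: a membership proof is one
    sibling hash per level, which keeps stateless verification small."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Any change to any leaf changes the root, which is what lets a stateless node check individual accounts against a single 32-byte commitment.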
Ansgar:
[47:28] And the main difference, put simply, is that the binary tree uses a post-quantum secure hash function that is also very efficient to prove. So it already fits into this future world that Ethereum is going to, whereas the Verkle trees were basically a standalone piece that doesn't quite fit. But the nice thing is we have a lot of prior expertise. We have Guillaume, who has been the champion of Verkle trees, and he's been frustrated to no end that we never ended up shipping them. And now his time has come. He's been very excited. He's now working towards this binary tree upgrade behind the scenes already, and he's doing an amazing job there with his team. And so actually, over the next two years, I would say the biggest individual story that we'll have in Ethereum will be this upgrade to binary trees. So that will probably, over the coming months, start to become a bigger and bigger topic. People will start hearing about it, and it will then enable very efficient stateless operations, or partially stateless operations, for nodes. So to recap: starting a year or so from now, we will roll out optional proofs. Those optional proofs will initially only be immediately effective for compressing computation and helping somewhat with IO load, but you still have to run in stateful mode. And then we will, bit by bit, start bringing these pieces into the protocol that unlock the full potential of the ZKEVM, and in parallel keep hardening the ZKEVM security properties, so that
Ansgar:
[48:55] By the time we are running out of conventional scaling means. And that's why all of this is so beautiful. We basically have exactly three years, or two and a half more years, of traditional scaling ahead of us. And at that point, we will be ready to just seamlessly move over to ZKEVMs. So: one year from now, optional proofs; two and a half years from now-ish, plus or minus, the full transition to mandatory proofs. And then we'll have all the pieces ready to immediately keep scaling based on ZKEVMs after that. So that's the rollout.
David:
[49:28] Right. So as I understand it, the way that it happens is that in a year, we will introduce optional proofs. The Ethereum enthusiasts of the world, who just love Ethereum, tinker with Ethereum, run nodes for Ethereum out of pure passion, will start to run these optional ZK proofs. They will be the pioneers of the transition of Ethereum from a classical blockchain into a ZK blockchain.
David:
[49:52] And that will give Ethereum researchers like you, the EF, a lot of data on what it looks like to be in production, because of these enthusiasts that are running this optionally, because they just love Ethereum so much. That will give you the information you need to do the prerequisite upgrades that are needed to actually get a full mandatory ZKEVM fork. And as you alluded to, it will also give us insight into in-production use of the ZKEVM. Maybe there are bugs. If there are bugs, we need to find them before we make proofs mandatory. And so all the different clients will have their own version of the ZKEVM, and we'll be stress testing all of those by using them in production. Basically, there's a whole era of demo Ethereum ZKEVM. And that will take, I think you said, somewhere around two to three years. As we run out of classical scaling, we will have the hardened data and the information, and we will do the prerequisite work to unlock the mandatory ZKEVM. Around two and a half to three years from now, the mandatory ZKEVM hard fork will happen, and then Ethereum will make the transition: this is now a ZKEVM blockchain. The story doesn't end there, though. What happens after the mandatory ZKEVM fork? How does the story continue beyond that point?
Ansgar:
[51:09] Just, by the way, to clarify a little bit for people who maybe think, oh, we are now gung-ho starting to release optional proofs for anyone who wants to be an experimental, you know, guinea pig here. I think when we are ready to start releasing this, there will be very explicit guidance around what this is for, what kind of production-grade readiness it has for which use cases. You can imagine it more like, it's about how many nines after the comma, right? Ethereum mainnet must never go down. We have 100% uptime and we're not willing to risk this, so we're basically willing to take extra precaution there. But importantly, if you're, for example, at some point running a ZK validator and you actually hit a bug or something, the worst that happens is, no one will get slashed, right? What happens is just that you're briefly kicked off the chain and then you automatically flip back over to normal re-execution mode. And then worst case, if we're already in this partial statelessness world, you might have to first re-sync some of the state, right?
Ansgar:
[52:08] So worst case, you're offline for a couple of hours and then you're back online, back on the chain. So none of this is reckless; we do it very responsibly. Just to clarify this.
Ansgar:
[52:17] But yeah, basically, the way that these, again, absolutely amazing ecosystem ZK teams are talking about this: last year was the year of performance, getting to real-time ZKVM. This year is the year of security, getting to absolutely hardened systems, getting to a level where we are very confident in the security. Then next year, I think, will be the year of productionizing the ZKVMs. And the year after will be the year of the transition to mandatory. So that's basically performance, security, production, and then full transition. That's how I think about it, one year at a time. In terms of what comes after the transition, well, as I was saying earlier, the further you go out, the more unknown unknowns there are. It's about saying that at that point, we will have all of the ingredients. We have the partial statelessness, we have block-in-blobs, and we have the ZKVM to take advantage of for scaling. But we don't expect that once we get there, it's a one-time switch and now we can run a thousand times faster.
Ansgar:
[53:21] Instead, right now we are conservatively, quote unquote, projecting this three times per year, because we expect that there will be individual remaining challenges we have to address, right? Maybe we have to restructure the way nodes sync, or maybe we have to restructure the way RPC nodes operate, so you're confident that the chain is still usable at higher rates. So this is just expressing that while we have the main architectural ingredients,
Ansgar:
[53:45] there will still be a lot of detailed work. And so we expect that instead of making use of it all at once, it's going to be this continuous process. And again, the nice thing about this rough 3x number is you can just say, look, every two years you get a rough 10x, 9x, 10x. So basically, we think we have a path for maybe five or six years of this. Six years at 10x every two years means 1,000x. So the first three years of that we get traditionally, then the next three years through the ZKEVM. So in six years, roughly a thousand X of where we started last year. Is this guaranteed yet? No. We just think we see a path. That's our goal. And then, of course, beyond that, if you want to be more in sci-fi world, now you can think about native rollups. So maybe the way we then keep scaling beyond that is not through just a single chain. Maybe then we're back to this kind of sharding-type setup of multiple chains synchronously composed. We'll have to see. But that's the plan.
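The arithmetic behind "3x a year, a rough 10x every two years, roughly 1,000x in six years" is simple compounding. Note that the exact figures are 9x per two years and 729x over six; the quoted 1,000x comes from rounding 9x up to 10x before compounding:

```python
def scale_after(years: int, per_year: float = 3.0) -> float:
    """Compound the yearly throughput multiplier."""
    return per_year ** years

two_year = scale_after(2)    # 9.0, the "rough 10x every two years"
trad_phase = scale_after(3)  # 27.0 from the traditional-scaling phase
full_path = scale_after(6)   # 729.0, i.e. the "roughly 1,000x" round number
```

Either way, the order of magnitude is the same: roughly three orders of magnitude of throughput over the six-year path, split evenly between traditional scaling and the ZKEVM phase.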
David:
[54:47] Ansgar, as I understand it, client diversity is a big topic here. Why is client diversity relevant to the ZKEVM and how does the ZKEVM impact it?
Ansgar:
[54:56] So, I mean, of course, I think people will be familiar with why client diversity is so core to Ethereum and to Ethereum's 100% uptime, right? There's the redundancy factor you get from client diversity. And the reason why this is relevant is just that the nature of client diversity changes in this world. And that is because, again, if we think back to how I explained that there's basically this most likely RISC-V kind of intermediate target for ZK: you basically run a, of course heavily modified, but basically traditional execution layer client that gets compiled to RISC-V, and then you take one of those new ZK proving systems that takes the RISC-V code and proves execution over it, right? So what that means is that the Ethereum execution layer nodes now basically live inside of the ZK proofs, which is, of course, conceptually very different from what that used to be before. And so what it means is that the actual node architecture is now actually quite interesting.
Ansgar:
[55:53] You basically run, and that is a little bit still TBD, it might be that you're still running this explicit split of two clients, the consensus layer client and the execution client, but the execution client's role is very different now. The one that you run locally basically just verifies the proofs, and does maybe some mempool networking, that kind of stuff, state management. But inside of the proof lives the ZK program that was also derived from an execution layer client. So if you think about the roles of clients now, the main question is: what about the diversity within those proofs, right?
Ansgar:
[56:31] Because the outer system we are familiar with, but what about the diversity within those proofs? And the nice thing is that, in principle, you get a very
Ansgar:
[56:41] comparable, very parallel type of mapping, where you don't just take a single execution client and compile it to RISC-V; you take multiple. You basically take the existing ones, and there are also a few that will be specially written for that use case, and you compile all of those. And then, to make sure that the redundancy is full stack, not just the first half of the stack, you also have multiple of these proving systems that take the RISC-V and prove over it, because of course there could also be bugs in that part of the stack, right? So say you have, as an example, five of each. You have five execution layer clients that can be compiled to RISC-V, and then you have five different proving systems. And what you can do is you can basically build pairs of those. Justin has this really nice idea where you can, in principle, even performance-match them. So maybe the fastest execution client is paired with the slowest proving system, so the pairs kind of balance each other out. But that's just an idea. Basically, the point is you then have these combinations of, okay, this execution client with this proving system. And in the end, in this example of five, you'd be in a world where you have five different types of proofs that are all kind of redundant. They're all full-stack different from each other.
Ansgar:
[57:57] The genuinely novel thing here is that today you run one execution client, right? There are multiple, of course, and multiple consensus layer clients, but you choose one of each.
Ansgar:
[58:07] In this new world, what you can do is verify multiple proofs. So for example, there's this idea, and again, just to use example numbers, though they seem roughly ballpark: you could have a system where you say, I only accept a block if I saw at least three different valid proofs for it. I know that there are these five different ones, and I have to have seen at least three of them; otherwise, I don't accept the block. That actually gives you better redundancy, because it's almost as if every Ethereum node today ran three different client setups and only accepted blocks if they all agreed, which of course gives you much better properties than right now, where we only have redundancy across nodes, not within a node. So it's actually a better story, but it's also one where you have to be intentional so that you don't accidentally collapse any layer of the stack. And, as a side note, there is this experimental idea. Of course, in the age of AI all the timelines collapse, so who knows, maybe it's even short-term viable: a fully formally verified client. You could imagine an EVM implementation in RISC-V that is fully formally verified to be correct. In that world, you would no longer need redundancy at that layer of the stack. But again, as I said, the further-out items have some uncertainty; this is one of those theoretical, out-there approaches. It would of course be really nice to have, and I think formal verification in the age of AI will become a much bigger deal anyway, so this might be a really nice synergy.
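The k-of-n acceptance rule described here can be sketched as follows. The proof-system identifiers are placeholders, and proof verification itself is stubbed out; a real node would run the actual ZK verifiers for each (client, prover) combination.

```python
# Sketch of the "accept a block only with >= k valid proofs" rule.
# The node knows n distinct proof pipelines and requires a threshold of
# them (3 of 5 in Ansgar's example numbers) to agree before accepting.

KNOWN_PROOF_SYSTEMS = {"pair-1", "pair-2", "pair-3", "pair-4", "pair-5"}

def accept_block(valid_proofs, threshold=3):
    """Return True iff valid proofs from at least `threshold` distinct
    known proof systems attested to the block. `valid_proofs` is the set
    of proof-system IDs whose proofs verified for this block."""
    distinct = set(valid_proofs) & KNOWN_PROOF_SYSTEMS
    return len(distinct) >= threshold

# Two proofs are not enough (two systems might share the same bug);
# three distinct ones clear the threshold.
accept_block({"pair-1", "pair-4"})            # False
accept_block({"pair-1", "pair-3", "pair-5"})  # True
```

The threshold is what prevents a single shared bug in one pipeline from finalizing an invalid block.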
David:
[59:31] As I understand it, the clients are where all of the risk is with the ZKEVM, and where we have to have an extreme level of caution in the transition from a classical blockchain to a ZK blockchain. If something is going to go wrong, it's going to go wrong at the client level; I suppose that's always where it would go wrong. But Ethereum has over a decade of uptime because of client diversity, because of how hardened these clients are, and we are kind of resetting that, going back to zero Lindy with the ZKEVM. Some properties will carry over, but nonetheless it's risky in the sense that we have all this great hardened infrastructure and we're rebuilding it to be ZK. So we have to have these extra levels of redundancy, as you said: three correct proofs, not just two, because two proofs might have the same bug, and then we'd prove the same bug twice. So what's your level of fear about this part of the transition for Ethereum, from the classical blockchain, which is so hardened, with essentially 100% uptime, to where we're going here? How scary is this?
Ansgar:
[1:00:49] Oh, it's a really good question, because the promise here is so huge that we're all very, very excited about this, but it is also a very, very big challenge. And this is why I think it's not at all natural that we are even doing this two-step rollout, with the optional proofs and then the mandatory proofs. In principle, we could switch over at the end of this year, right? We already plan with this extra 18-month period specifically because of the level of certainty that we want, which we project will just take some more time. It also gives us the extra time to roll out these other dependencies to really make use of ZK proofs, so it's actually quite synergistic. But still, this extra 18-month delay is specifically for that reason.
Ansgar:
[1:01:29] To be clear, we would always be responsible with this. If it turns out 18 months are not enough, of course we would delay the full transition to mandatory proofs. Maybe we even find some more gains on the classical scaling side until then, so maybe it wouldn't even matter. But basically, we would always wait until we're really, really confident. It's not in principle harder, but as you said, it's a bit of a reset. A lot of, say, our internal expertise around security work and testing work, both inside the EF and across the client teams, is currently being actively restructured for this very new domain, for this very new type of operations with ZK, understanding what the weak points even are here. Also on the cryptography side: we have absolutely world-class cryptographers inside the Ethereum Foundation and in the ecosystem, and they are very thoroughly turning over every single stone in this overall stack, making us understand what the critical points are, and how far we are from being willing to actually trust this. To take a related example, I'm not sure if you've already had an episode on post-quantum, but that's also a big topic for Ethereum. (We will soon, yeah.) Yes, it's mostly unrelated, but of course there are synergies here. And it has a similar nature: I talked about the binary trees, and part of the binary trees is this choice of hash function that you need in the tree.
Ansgar:
[1:02:52] And there, for example, the longest piece of the timeline, not a blocker exactly, is us talking with our cryptographers. We have a family of candidate hash functions.
Ansgar:
[1:03:05] But getting to the point where we say, look, they are actually robust enough, they have been around long enough that we actually trust they are secure. Especially with something as fiddly as a hash function, you can't really prove security; there's basically a Lindy-ness to it. How long has it been around? How many people have tried to find vulnerabilities? Has anything been found? That kind of thing. And some of these things you just can't accelerate. How many years of academics looking into this have there been? That's just a hard constraint. So both in the post-quantum work and in the binary trees, which we're also using to make use of ZKEVMs (not the ZKEVMs directly, but making use of them), there are elements of the timeline that are dictated by the security needs that we have, and we just can't cut corners. So it's a big concern, but I think we are being very responsible about it.
David:
[1:03:51] Yeah, which is why it's taking no short amount of time. So just to maybe conclude this podcast: the timeline. It is now the start of 2026, and by the time we hit 2030 is a good guess for when we think we will have the full properties of the ZKEVM. You're nodding your head. Does that sound right?
Ansgar:
[1:04:10] That sounds right. And I think we will still probably be in the process of making full use of it for scaling. Hopefully 2030 will be another 3x year, maybe more than 3x because we have AI and the hard fork timelines are compressing, and 2031 will look like yet another 3x year. So we will be on this continuous scaling path, but already squarely on the ZKEVM-backed side of that scaling path.
David:
[1:04:36] Right, right. I guess one point you made earlier, and it's worth reemphasizing here, is that the aspiration of Ethereum is to do a 3x scaling increase every single year, not just for the next three years: the next three years for classical scaling, and then the three years after that for ZKEVM scaling. So while I am excited about the ZKEVM, and I think it's incredible, and I want to rally the Ethereum community around it, there won't be an acute ZKEVM moment as felt by the transactors, the users of Ethereum, because we are doing 3x scaling per year anyway.
David:
[1:05:14] For the next six years, first with classical, then with ZK. And so while the merge acutely transitioned us from proof of work to proof of stake, and EIP-1559 acutely gave us the burn and better transaction UX, and 4844 was likewise an acute transition, this won't be that, because we are scaling anyway. But nonetheless, I think it is important to know that only Ethereum will be able to access years three through six of scaling in that capacity, because this is Ethereum's Manhattan Project. Like we said, only Ethereum has been working on this, and it's been working on it since genesis. As Ethereum makes this transition from a classical blockchain to a ZK blockchain, it will be leaving every other blockchain behind in the previous classical era. Maybe that's why I'm so excited about it: Ethereum is making the generational leap to the next-gen blockchain, and no other blockchain will have these properties we've been discussing on this podcast.
Ansgar:
[1:06:20] Well, and this is what I said earlier: it's not an accident that you won't notice this transition, it's actually by design. In this moment in time, we're really trying to balance two things. Continue the strength of Ethereum, being able to make these leaps, these paradigm jumps that I think other projects really struggle to follow. Again, that's why we'll also just naturally get the post-quantum properties; I think many chains will struggle quite a bit with actually getting there. And at the same time, realize that we're no longer in sandbox mode. We can't just say, wait three more years, don't be so impatient. No, no, no. People are coming on-chain. Agents, AIs are coming on-chain today. So I think it's important that we basically say:
Ansgar:
[1:07:10] We are a continuously scaling blockchain, and it's our responsibility to make that happen under the hood, using whatever means necessary, both traditional and magical future ZK means. I will say, because you said no one else will be able to do this: I actually think it's one of those areas where there's a natural synergy between Ethereum and the EVM L2 ecosystem. One thing we didn't talk about at all, but that I'm very excited about, is that, similar to how the initial jump to non-real-time ZKVMs came mostly driven by the L2s, now that we are driving this move to real-time ZKVMs from the L1 side, the L2s will also be huge beneficiaries, because they will gain the ability of real-time settlement. That means all the bridging pain across the L2 ecosystem, where in principle I either use a mint-and-burn bridge or it takes seven days for my asset to move across chains, all of this will disappear. It's going to be a few seconds for any asset to move from any real-time-ZKEVM-proven L2 to any other, through the Ethereum L1, or of course into or out of the Ethereum L1. So I think it's yet another one of these cases.
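The settlement-latency difference described here can be made concrete with back-of-the-envelope numbers. The figures below are the commonly cited ballparks (a seven-day fraud-proof window versus roughly one L1 slot once a real-time validity proof lands), not measurements of any particular rollup.

```python
# Illustrative comparison of L2 -> L1 withdrawal latency under the two
# settlement models: optimistic (wait out the challenge window) versus
# real-time ZK validity proofs (settle as soon as the proof verifies).

SECONDS_PER_DAY = 86_400

def withdrawal_latency_seconds(settlement_model):
    if settlement_model == "optimistic":
        # Typical fraud-proof challenge window.
        return 7 * SECONDS_PER_DAY
    if settlement_model == "realtime-zk":
        # Roughly one L1 slot, assuming the proof is already available.
        return 12
    raise ValueError(f"unknown model: {settlement_model}")

speedup = (withdrawal_latency_seconds("optimistic")
           / withdrawal_latency_seconds("realtime-zk"))
# The real-time ZK path is tens of thousands of times faster.
```

That gap is what turns cross-L2 asset movement from a week-long wait (or a trusted bridge) into a few seconds through the L1.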
Ansgar:
[1:08:26] If you're part of the Ethereum family, this is the ecosystem that really has this principled approach to things, and you get all of these benefits for free. You are on the principled architectural path, and I think that has always been our competitive advantage. And while doubling down on that competitive advantage, I think we are already trying very hard, and have to keep trying even harder, to close where we've maybe had a competitive disadvantage: Ethereum in the past has sometimes been a bit too much in this pure research mode, maybe discounting the type of activity that already existed and saying, ah, that's just sandbox, whatever, the real-world adoption will come later and then we'll start focusing on it. Real-world adoption is clearly here.
Ansgar:
[1:09:08] And so finding the right balance, I think, is the ongoing challenge. It's what, for example, Tomasz and Hsiao-Wei, in their time at the Ethereum Foundation, have really put a lot of focus on. And I think that's how I would narrate the future of Ethereum: both the Manhattan Project and the short-term focus on ownership of the protocol as a useful thing today.
David:
[1:09:28] One theme that I've picked up on in a handful of your answers throughout this conversation, Ansgar, is that there seem to be a significant number of second-order positive effects of the ZKEVM that are not related directly to its main questline, which is straight layer-one scaling, but that solve a bunch of second-order problems, layer-two scalability and composability being the one you just mentioned. How big is that second-order effect? Am I correctly identifying that the positive second-order effect is actually somewhat large?
Ansgar:
[1:10:00] Yeah, I mean, I think there's the immediate second order, the benefits to the broader EVM ecosystem, especially the EVM L2 ecosystem, as you said. I think it's much easier for L2s, for EVM L2s, to adopt or benefit from this technology, whereas for other EVM L1s, while I think it's also very exciting for them, you'd basically have to re-architect your entire chain. Similar to how I was saying that for the Ethereum L1 the ZKVM is the core piece, but there are many elements to it. The L2s, by contrast, already have this architecture where they naturally settle on the L1; they just have to compress the settling time. For them, it's almost a trivial upgrade to follow us into this world. So I really think there's a unique synergy between the Ethereum L1 and the Ethereum EVM L2s. Longer term, talking beyond blockchains for a second, I think we've already seen how, in the world outside of crypto, this second generation of cryptography is really starting to become very impactful.
Ansgar:
[1:11:11] It took a while, a couple of years, for people to start taking it seriously. But you can start to see it with all kinds of things: Microsoft is doing things, a lot of governments are doing zk-id types of systems. You're starting to really see use cases that go beyond just blockchains. Blockchains are the most valuable use case, so that's why we always see the technology there first. But you can imagine a world, especially once you have this real-time element unlocked (just to be futuristic here), where AI agents might use real-time ZK proofs to make provable, verifiable statements for trustless interactions with each other. Some of that might be on-chain, say for direct asset transactions, but other things might literally just be: I'm proving that I have access to this data, and that this data has this structure. All these kinds of statements you can now trivially prove in real time that you just couldn't before. I think that's five, maybe five to ten years down the road, but it will come, and it will be really exciting. And then, for example, I don't know if you've seen this, but more and more countries are starting to introduce social media bans for minors, that kind of stuff. Usually that's implemented in a super dumb way: they use a service, and you have to upload your ID to that service. If we can replace that with a zk-id system, where you really don't leak anything other than the facts that I own an ID and my birth date is above this threshold, obviously that's a much preferable world.
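The zk-id example reduces to proving a single predicate without revealing its inputs. The toy below shows only the predicate itself, in plain Python; in a real zk-id system this check would be evaluated inside a ZK circuit, so a verifier would see only the proof and the boolean result, never the birth date. The function name and cutoff are illustrative.

```python
# The statement a zk-id age proof attests to: "the holder's birth date is
# at least `cutoff_years` before today." Here it runs in the clear; a ZK
# system would prove this over a committed, hidden birth date.
from datetime import date

def is_of_age(birth_date, cutoff_years, today):
    """True iff the holder is at least `cutoff_years` old on `today`."""
    cutoff = date(today.year - cutoff_years, today.month, today.day)
    return birth_date <= cutoff

# A verifier learns only the result, not the birth date itself.
is_of_age(date(2000, 1, 15), 18, today=date(2026, 1, 1))  # True
is_of_age(date(2012, 6, 1), 18, today=date(2026, 1, 1))   # False
```

The ZK machinery replaces the "upload your ID" step: the service checks a proof of this one bit instead of storing the document.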
Ansgar:
[1:12:38] Blockchains, and especially the Ethereum ecosystem, are currently funding this massive leap in the cryptography toolkit that we have. And with some delay, five to ten years, it will also hit the non-blockchain space, and I think it will be super impactful.
David:
[1:12:54] Ethereum, and all the research we have invested in over the years, will hopefully be one big contributing factor to restoring the brand of crypto, by helping the world overcome some generational challenges, as you correctly identify. Crypto doesn't really have the best brand at this present moment. But hopefully, with some of these sci-fi tech advancements, this Manhattan Project that Ethereum has been working on, we don't just improve the nature of our own blockchains, but we improve the nature of the world around us. And the second-order effects on Ethereum as a brand, as an ecosystem, and on ETH price come downstream of all of that. Ansgar, this has been a super educational episode. I really appreciate you coming on here and giving me and the Bankless Nation your time on the ZKEVM. Broadly, the crypto industry is looking for reasons to get bullish about something, and I think this is a very valid thing to be excited about and bullish on. So I'm trying to rally the troops around the ZKEVM fork, in mindshare and in education, and I think you've done the job I hoped we could do here on the episode today. So I thank you for that, sir.
Ansgar:
[1:14:06] Sounds good. And one last caveat, just to repeat this: I'm not personally a ZK expert. Obviously I'm in the loop on a lot of these things, but I'm more our broader scaling expert; this is part of my job. We have absolutely amazing people, so I'm sure I got some of the minor, minute details a little bit wrong, and those people will scream at their monitors, but I hope I got the broader picture roughly right. And I agree, it's very exciting, both on the execution layer side, the ZKVM scaling story, and on the consensus layer, these next-generation upgrades we're planning there. Very, very exciting. I do think we should understand, though, that in this moment in time we should try to become more and more the boring infrastructure layer, and really ready the stage for the applications. I'm personally incredibly excited for the actual real-world application side of crypto; we are really starting to see it come online. Agentic payments, real-world assets, stablecoin payments. All of this is incredibly exciting, and I think it's a great moment to be in crypto. And one last shout-out: if anyone listening was interested in or excited by the technical details of everything we talked about, and actually wants to help on the infrastructure side, do reach out to me, either in a Twitter DM or via my ethereum.org email. We are also, in principle, always hiring, so if any smart kid out there really wants to join us on the infrastructure side: it's not the only exciting thing in crypto, but it is still very, very exciting, and please come join us.
David:
[1:15:35] We'll make sure your Twitter is in the show notes, on YouTube or Twitter or wherever people are listening to this podcast. Ansgar, thank you so much.
Ansgar:
[1:15:42] Thank you very much.
David:
[1:15:44] Bankless Nation, you guys know the deal. Crypto is risky. You can lose what you put in, but nonetheless, we are headed into the future. We're going to ZK the future, too, with the help of the ZKEVM. That's not for everyone, but we are glad you're with us.