The Swyx Mixtape : Scaling Blockchains [Vitalik Buterin]

ABOUT THIS EPISODE
Source: https://www.youtube.com/watch?v=XW0QZmtbjvs

Transcript

[00:00:00] swyx: Part two of my cryptocurrency exploration is on scaling blockchains. And I think Vitalik is probably the best and most articulate person to talk about this.

[00:00:09] Vitalik Buterin: There’s two major paradigms for scaling blockchains, right? As you said, layer one and layer two. And layer one basically means make the blockchain itself capable of processing more transactions, by having some mechanism by which you can do that despite the fact that there’s a limit to the capacity of each participant in the blockchain.

[00:00:29] And then layer two says, we’re going to keep the blockchain as is, but we’re going to create clever protocols that sit on top of the blockchain, that still use the blockchain and still kind of inherit things like the security guarantees of the blockchain, but at the same time a lot of things are done off chain.

[00:00:45] And so you get more scalability that way. So in Ethereum, the most popular paradigm for layer two is rollups, and the most popular paradigm for layer one is sharding.

[00:00:54] Lex Fridman: So one way to achieve layer one scaling is to increase the block size, hence the block size wars, quote unquote. And you actually tweeted something about this.

[00:01:06] People are saying that Vitalik changed his mind, that he went from being a...

[00:01:13] Vitalik Buterin: small. I went from being big to small. Is it big to small?

[00:01:16] Lex Fridman: And, but you said, I’ve been a medium blocker all along. So maybe you can also comment on the very basic aspect, before we even get to sharding, of where you stand in the block size debate.

Block Size Debate

[00:01:28] Vitalik Buterin: Sure. So the way that I think about the trade-off is as a trade-off between making it easy to write to the blockchain and making it easy to read the blockchain, right? So when I say read, I just mean, you know, have a node and actually verify it and make sure that it’s correct.

[00:01:43] And all of those things. And by write, I mean send transactions. So I think for decentralization, it’s important for both of these tasks to be accessible, and I think that they’re about equally important, right? If it’s too expensive to read, then everyone will just trust a few people to read for them.

[00:01:59] And then those people can change the rules without anyone else’s permission. But if, on the other hand, it becomes really expensive to write, then everyone will move on to basically second layer systems that are incredibly centralized. And that takes away from, you know, decentralization and self sovereignty as well.

[00:02:18] So this has been my viewpoint pretty much the whole time, right? It’s like, you know, you need this balance, and going in one direction or the other direction is very unhealthy. In the Bitcoin case, basically what happened was that Bitcoin originally, at the very beginning, didn’t really have a block size limit.

[00:02:33] It just had an accidental block size limit of 32 megabytes, because that just happened to be the limit of the peer-to-peer messages.

Lex Fridman: I didn’t even know that part.

Vitalik Buterin: Yeah. But then Satoshi, back in 2010, was worried that even 32 megabyte blocks would be too hard to process.

[00:02:51] So he put the limit down to one megabyte.

Lex Fridman: You mean sneaked it in there?

Vitalik Buterin: Yeah, just made an update to the Bitcoin software that made blocks bigger than, I think it’s a million bytes, invalid. And I think the impression that most people had at the time is that, you know, this is just a temporary safety measure, and over time, as we become more confident in the software, that limit would be raised somewhat.

[00:03:21] But then the actual usage of the blockchain started going up: first to 100 kilobytes per block, then to 250 kilobytes per block, then to 500 kilobytes per block. And there started coming out of the woodwork this opinion that, no, that limit should just not be increased.

[00:03:42] And, you know, then there were all of these attempts at compromising, right? First there was a proposal for 20 megabyte blocks. Then there was the 2-4-8 proposal, which is a bit ironic, because the 2-4-8 proposal started off being a small block negotiating position. But then when the big block people came back and said, hey, aren’t we going to do this?

[00:04:05] They’re like, oh no, no, no, we don’t want the block size increases anymore. So, you know, there were these two different positions, right? The small blockers, I think, valued one megabyte blocks for two reasons. One is that they just really, really believe in the importance of being able to read the chain.

[00:04:22] But two is that a lot of them really believe in maintaining this norm of never hard forking, right? So the difference between a hard fork and a soft fork is basically that in a soft fork, any block that’s valid under the new rules is still valid under the old rules. So if you have a client that verifies according to the old rules, then you’ll still be able to accept the chain that follows the new rules.

[00:04:48] Whereas with a hard fork, you have to update your code in order to stay on the chain. They have this belief that soft forks are kind of less coercive than hard forks, which, by the way, I completely disagree with. I actually think soft forks are more coercive, because basically they force everyone who disagrees to sort of go along by default.
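The soft-fork/hard-fork distinction above can be sketched in a few lines. This is a toy illustration, not real Bitcoin consensus code: the rule functions and the block sizes chosen are hypothetical, and only serve to show that a soft fork tightens the validity rules (new-valid implies old-valid) while a hard fork loosens them.

```python
# Toy sketch: soft forks tighten rules, hard forks loosen them.
# All numbers and rule functions here are illustrative assumptions.

def valid_under_old_rules(block):
    # Old rule: blocks up to 1,000,000 bytes are valid.
    return block["size"] <= 1_000_000

def valid_under_soft_fork_rules(block):
    # Soft fork only tightens: every new-rules-valid block is also
    # old-rules-valid, so un-upgraded clients follow along by default.
    return block["size"] <= 500_000

def valid_under_hard_fork_rules(block):
    # Hard fork loosens: some new-rules-valid blocks (e.g. 1.5 MB)
    # look invalid to un-upgraded clients, forcing a code update.
    return block["size"] <= 2_000_000

big_block = {"size": 1_500_000}
assert valid_under_hard_fork_rules(big_block)
assert not valid_under_old_rules(big_block)   # old client rejects it

small_block = {"size": 400_000}
assert valid_under_soft_fork_rules(small_block)
assert valid_under_old_rules(small_block)     # old client accepts it
```

This is exactly the asymmetry Vitalik objects to: because old clients accept soft-forked blocks automatically, dissenters are carried along without ever opting in.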

Rollups

[00:05:11] Lex Fridman: So this might be a good time to talk about rollups. What are rollups?

[00:05:13] Vitalik Buterin: Okay, now we’re moving into layer two ideas. So the idea behind a rollup is basically that, instead of just publishing transactions directly on chain and having everyone, you know, do all of the checking of those transactions, what you do is you create a system where users send their transactions to some central party called an aggregator.

[00:05:44] And, well, theoretically you could have a system where the aggregator rotates, or where anyone can be an aggregator, so, you know, it’s still permissionless to send things. Then what the aggregator does is strip out all of the transaction data that is not relevant to helping people update the state.

[00:06:03] So when I say the state (this is a very important, kind of technical term from blockchains), I mean account balances, code, things that are in the internal memory of smart contracts. So basically everything the blockchain actually has to keep track of, right? So you take these transactions and strip out all the data.

[00:06:27] That’s not relevant to telling people how to update the state. And then you take the data that is needed to update the state, and then you really compress it, right? So, for example, instead of saying I, Vitalik, have an account that’s 0xab58, blah, blah, blah, and it’s 20 bytes,

[00:06:43] Well, instead we can say, I have the account that is number 1874224 in the tree, right? And that goes down from 20 bytes to just an index and a position, which is three bytes, right? So you use all sorts of these fancy compression tricks, and then, instead of publishing all these transactions, you publish this tiny compressed thing.
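The compression trick described above, replacing a full 20-byte address with its position in the rollup’s account tree, can be sketched like this. The addresses and the tree (modeled as a simple list) are made up for illustration; real rollups use Merkle trees and more aggressive encodings.

```python
# Illustrative sketch of rollup address compression: replace a 20-byte
# account address with a 3-byte index into the rollup's account tree.
# The addresses here are hypothetical placeholders.

account_tree = [
    bytes.fromhex("ab58" + "00" * 18),  # hypothetical account #0, 20 bytes
    bytes.fromhex("cafe" + "11" * 18),  # hypothetical account #1, 20 bytes
]

def compress_address(address: bytes) -> bytes:
    # Encode the account's tree position in 3 bytes, which is enough
    # to index roughly 16.7 million accounts (2**24).
    index = account_tree.index(address)
    return index.to_bytes(3, "big")

full = account_tree[1]          # what would go on chain uncompressed
short = compress_address(full)  # what the rollup actually posts

print(len(full), "->", len(short))  # 20 -> 3
```

Applying tricks like this across every field of a transaction is where the roughly 10x data reduction he mentions next comes from.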

[00:07:03] So the amount of data that goes on chain goes down by maybe about a factor of 10, right? And then the second thing is that you don’t do the computation on chain; instead, you do the computation off chain. And there’s one of two ways to do this, right? One is called a ZK rollup, which is, you just provide a ZK-SNARK that basically says, hey,

[00:07:21] I did this computation, and I have this proof that, here’s, you know, some hash of the result, and it’s correct. And then you stick it on chain, and everyone verifies this one proof instead of verifying all these transactions. And then the other approach is called an optimistic rollup, which is basically this scheme where first someone says, hey, this is what I think the result of applying these transactions is.

[00:07:46] And then someone else can say, I disagree, the result is different. And only if the two disagree do you actually publish all of the data and run that whole block on chain. So if there’s a disagreement, then you just run everything on chain, and whoever was wrong loses a lot of money.

[00:08:02] So disagreements are very rare, and they’re very expensive. And in a ZK rollup, you don’t even rely on this challenge game at all; you just rely on the math. So the core principle is basically that, instead of having everyone verify every transaction, you take the transactions, strip them down, and compress them as much as possible.

[00:08:25] Then you stick that on the blockchain, because you do need to stick something on the blockchain, just so that everyone else can keep up to date with the state. So they know, you know, what all the contracts are, what all the balances are, and all of this, but it’s a very small amount of data.

[00:08:38] And then you use one of these other off-chain games, you know, could be this optimistic game, could be a ZK-SNARK, to just prove that somebody out there did the computation and the result is correct. So you’re pushing, well, 90% of the data and 99% of the computation off chain, and then you still have 10% of the data and 1% of the computation on chain.
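The optimistic-rollup challenge game he describes can be sketched as a toy protocol. Everything here is a simplification invented for illustration: the state is a plain dictionary of balances, and bonds and slashing are only named, not modeled.

```python
# Toy sketch of the optimistic rollup game: accept the aggregator's
# claimed post-state unless someone challenges, and only then
# re-execute the transactions "on chain". Purely illustrative.

def apply_transactions(state, txs):
    # The state transition function everyone agrees on.
    state = dict(state)
    for sender, receiver, amount in txs:
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def settle(pre_state, txs, aggregator_claim, challenger_claim=None):
    if challenger_claim is None or challenger_claim == aggregator_claim:
        # Optimistic path: no dispute, so nothing is re-executed.
        return aggregator_claim, "accepted optimistically"
    # Dispute path: expensive on-chain re-execution; the wrong party
    # would lose their bond (slashing not modeled here).
    truth = apply_transactions(pre_state, txs)
    loser = "aggregator" if truth != aggregator_claim else "challenger"
    return truth, f"dispute resolved on chain, {loser} slashed"

pre = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 3)]

# Honest aggregator, no challenge: cheap.
state, outcome = settle(pre, txs, {"alice": 7, "bob": 3})

# Dishonest aggregator, challenged: expensive, but the fraud is caught.
state2, outcome2 = settle(pre, txs, {"alice": 0, "bob": 10},
                          challenger_claim={"alice": 7, "bob": 3})
```

The economics he points to fall out of the dispute path: because lying costs the loser real money, challenges are rare and the cheap optimistic path is the norm.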

[00:09:01] And so, you know, your scalability goes up by a factor of about a hundred. So these systems are already live for some applications, right? So there’s something called Loopring, which is just a ZK rollup for payments, right? So you can have, you know, assets inside of the Loopring system, and you can go around and transfer them, and you get much lower transaction fees, right?

[00:09:29] Like, instead of $5, you’d have to pay less than five cents. But the only problem is that this only supports a couple of applications right now; making one that supports anything that you can do on Ethereum just takes a bit more work. That’s being done as well, right? So within a few months I’m expecting, you know, fully EVM-capable rollups to be available as well.

[00:09:54] So rollups, just summarizing, you know, do most of the work off chain and put only a little bit on chain: a factor of a hundred scaling. Sharding: another factor of a hundred scaling. A hundred times a hundred is a factor of 10,000. Now, hundreds of thousands of transactions a second, and look, you know, there’s your scalability.
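As a back-of-envelope check of that multiplication: the two 100x factors are the ones cited above, but the baseline throughput figure is an assumption added here (Ethereum layer 1 is commonly ballparked around 15 transactions per second), not something from the transcript.

```python
# Back-of-envelope check of the "factor of 10,000" claim.
# base_tps is an assumed ballpark, not a figure from the transcript.

base_tps = 15          # assumed rough layer-1 throughput
rollup_gain = 100      # rollups: ~100x, per the transcript
sharding_gain = 100    # sharding: ~100x, per the transcript

total = base_tps * rollup_gain * sharding_gain
print(total)           # 150000, i.e. "hundreds of thousands" per second
```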
