Thank you very much for coming. Today we're going to talk about sidechains and Layer 2, and about how they're different, yet very similar in some ways. My name is Georgios. I do consulting and research on off-chain protocols, and my main focus is on interoperability and scalability. So the big question is, why do we want to interoperate? And this is the answer: there are a lot of currencies, and they all want to talk with each other. If the multi-chain thesis is to survive in the long term, we have to figure something out. And why do we want to scale? I just learned that there is a talk by Joe Lubin happening right now, like, "one million devs", but who cares about the devs? What about the users? There is no way to get the users without being able to scale, with quick confirmations and low fees. So let's go back to where it all started. Blockstream released this paper, and I thought it was the first time the term sidechains was mentioned. Turns out I'm wrong. The first mention of sidechains was by Satoshi in 2010, on Bitcoin Talk, as usual, saying that sidechains could be used for interoperability. However, what he was actually talking about was merge mining, which is a completely different technique. Three years later, we have Greg Maxwell, a very prominent Bitcoin Core contributor, who described this scheme called CoinWitness, which is very similar to what people today call a ZK rollup. And then we have this user called killerstorm, who actually described the first sidechain construction by utilizing certain types of proofs. So the two-way peg, as described in the Blockstream paper and by killerstorm, is a mechanism by which somebody locks some money in an escrow, and then, via a peg-in transaction, the same amount of money is created on another chain, and that is a sidechain.
And in that sidechain you may have a different rule set: maybe the chain progresses faster, maybe it has smart contracts. And then, when you want to get your money out, you provide a burn transaction, and the participants of that sidechain allow you to withdraw your money back from the original escrow. There are multiple ways you can implement that, so let's dive into it. The main problem we're trying to solve is: how can we observe another chain's state and convince ourselves that the chain we're being shown is legit? In Bitcoin, that's very easy: work. Hashes are very easy to reason about. They're quantifiable; you can do some math and figure out exactly how hard it is to attack. The trick here, however, is that you need to be able to verify the proof-of-work, and doing that for any fancier proof-of-work algorithm is very expensive; you cannot verify it in a script on the EVM, for example. And the way you do this is via SPV proofs, the way most light clients work, and there's a very close interaction between sidechains and light clients that I will get to very soon. So, instead of providing the full chain, you provide the chain of headers, along with Merkle proofs of your transactions. And what you're trying to convince the verifier of is that the chain you're giving them is actually the longest chain. But this is too expensive; it's linear. For you to be guaranteed that I'm giving you the best chain, without ever having a trusted checkpoint, I need to give you everything from genesis, plus a bunch of Merkle proofs, and then you should be convinced. That's way too expensive. So there are some techniques called NIPoPoWs: one technique is superblock NIPoPoWs by Dionysis Zindros, and there's FlyClient by Benedikt Bünz, who should be around this area, so you should talk to them about that.
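The peg-in/peg-out cycle described above can be sketched as a toy state machine. This is illustrative only: the class and method names are invented for this sketch, and a real peg replaces the single trusted object below with SPV proofs or a federation.

```python
# Toy two-way peg: lock coins in a main-chain escrow, mint them on the
# sidechain, then burn on the sidechain to release from escrow.
# All names are illustrative; trust assumptions are idealized away.

class TwoWayPeg:
    def __init__(self):
        self.escrow = 0      # coins locked on the main chain
        self.sidechain = {}  # sidechain balances, keyed by owner

    def peg_in(self, owner: str, amount: int) -> None:
        """Lock coins in the main-chain escrow, mint them on the sidechain."""
        self.escrow += amount
        self.sidechain[owner] = self.sidechain.get(owner, 0) + amount

    def peg_out(self, owner: str, amount: int) -> int:
        """Burn coins on the sidechain, release them from the escrow."""
        if self.sidechain.get(owner, 0) < amount:
            raise ValueError("cannot burn more than you hold")
        self.sidechain[owner] -= amount  # the burn transaction
        self.escrow -= amount            # the escrow releases the funds
        return amount

peg = TwoWayPeg()
peg.peg_in("alice", 5)
assert peg.escrow == 5 and peg.sidechain["alice"] == 5
peg.peg_out("alice", 3)
assert peg.escrow == 2 and peg.sidechain["alice"] == 2
```

The hard part, which this sketch hides entirely, is exactly what the talk discusses next: how the escrow decides that the burn transaction really happened on the other chain.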
Then there are SNARKs, which Barry alluded to earlier, and stateless SPV. Stateless SPV is another technique, by James Prestwich, who's also around, and you should talk to him about it. The problem here is that all work is not equal. If I have the Bitcoin chain with a bunch of hash rate and a bunch of ASICs, and the Ethereum chain with GPUs and a different hash rate, then having an asset on the one chain versus the other, even if it represents the same kind of collateral, is not actually the same. And I have developed this mental model where I like to think of cross-chain assets as alloys. It's similar to how in chemistry you can combine one metal with another and get different properties. Moving a Bitcoin from the Bitcoin chain to some other proof-of-work chain, perhaps tokenizing Bitcoin on Litecoin for faster confirmation times, is an option. However, due to the difference in hash rate, you're no longer as secure as you were before, so you can call that a new alloy, something like BTC-on-Litecoin. And what about BTC-X, where X can be TBTC, WBTC, you pick it? Some derivative of Bitcoin which tries to peg its price to it, and which is going to be used in the DeFi space, for example. However, the assumption here changes from an honest majority of the miners to an honest federation, for example if you're doing a federated peg, or to the whole mechanism around it working. So you have technical risk arising. Each solution has a different trade-off space; there's no free lunch. What about proof-of-stake sidechains? Well, proof-of-stake sidechains don't exist yet. What about proof-of-stake light clients, for which the argument is that it's equivalent? Consider how a proof-of-work block works: you accept a proof-of-work block only if the hash of the block header, including some nonce that you vary, is less than a target number.
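The two light-client building blocks mentioned so far, the proof-of-work acceptance rule and Merkle inclusion proofs against a committed root, can be sketched as follows. This is a minimal illustration: real Bitcoin uses double SHA-256 and a specific 80-byte header format, which I skip here.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Single SHA-256 (Bitcoin actually uses double SHA-256)."""
    return hashlib.sha256(data).digest()

def valid_pow(header: bytes, target: int) -> bool:
    """The proof-of-work rule: the header hash, read as an integer,
    must be below the target."""
    return int.from_bytes(h(header), "big") < target

def verify_merkle_proof(leaf: bytes, proof, root: bytes) -> bool:
    """Walk a Merkle branch from a transaction hash up to the root
    committed in a block header. `proof` is a list of
    (sibling_hash, sibling_is_left_child) pairs."""
    node = leaf
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# usage: a two-transaction block
a, b = h(b"tx-a"), h(b"tx-b")
root = h(a + b)
assert verify_merkle_proof(a, [(b, False)], root)
assert verify_merkle_proof(b, [(a, True)], root)
```

The linearity problem from the talk is visible here: an SPV proof with no trusted checkpoint still needs every header back to genesis, which is exactly what NIPoPoWs and FlyClient compress.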
So, let's take this and translate it to the proof-of-stake setting. You replace the nonce with signatures, and you accept a block only if the block you have received has been signed by more than two-thirds of the stake. So how does this look? How am I going to do a proof of proof-of-stake? I will pick some blocks every so often, and each time the validator set changes in this proof-of-stake sidechain, I will verify that they signed on the latest block. This means, of course, that it's still linear, because I still have to give you a number of blocks linear in the size of the chain. But also, the sidechain smart contract, or the light client, must always be aware of the latest stake distribution, because otherwise how will I know that the signatures I'm receiving really are from two-thirds of the stake? There's an attack here, which I want to call the cross-chain nothing-at-stake attack. Usually, in proof-of-stake chains, you have the nothing-at-stake problem, where validators start building on two chains, and basically, if you can take the data from one chain and put it on the other, you can slash them for equivocating, for double-signing. However, what if I'm a validator, and I have a chain that I'm building on, and then I also have a hidden chain that I'm building on, but I'm not sending it to anybody except light clients? The light client must be able to take the signatures that I gave them out of band and put them on the main chain. I'm not aware of any chain that currently has this mechanism implemented. Tendermint right now has some documentation exploring this; they call it, if I'm not mistaken, the phantom validator, because you're a validator, but not really, since you're sending stuff out of band.
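The proof-of-proof-of-stake rule just described, accept a block only if its signers cover more than two-thirds of the bonded stake, and follow validator-set rotations only when the old set signs off, can be sketched like this. The bookkeeping is deliberately simplified and the function names are my own:

```python
# Hedged sketch of a proof-of-stake light-client rule.
# `stake` maps validator id -> bonded stake; numbers are illustrative.

def accept_block(signers: set, stake: dict) -> bool:
    """Accept a block only if its signers hold strictly more than
    two-thirds of the total bonded stake."""
    total = sum(stake.values())
    signed = sum(amount for v, amount in stake.items() if v in signers)
    return 3 * signed > 2 * total

def accept_rotation(old_stake: dict, new_stake: dict, signers: set) -> dict:
    """Follow a validator-set rotation only if more than two-thirds of
    the *old* bonded set signed off on the new distribution; otherwise
    the light client keeps its current view."""
    return new_stake if accept_block(signers, old_stake) else old_stake

stake = {"v1": 40, "v2": 35, "v3": 25}
assert accept_block({"v1", "v2"}, stake)      # 75% of stake signed
assert not accept_block({"v1", "v3"}, stake)  # 65% is not enough
```

Note that `accept_rotation` is exactly where the long-range attack discussed next bites: if the "old" set you trust has since unbonded, their signatures cost them nothing.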
The issue with this, also, is the long-range attack, because in order to incentive-align this mechanism you need to slash, and how do you slash if the person that fed you the signatures is now unbonded? It's complicated. So the rule you must set is that you will only accept proofs, meaning signatures, from people that are still bonded, and you reject those from unbonded validators. But how do you know which validators are unbonded? I don't know. Currently the dominant approach is having a trusted checkpoint, so that you always know the latest bonded set of validators. This is an open problem. There are many other solutions we can take, but they all tend toward some subjectivity, which does not exist in proof of work. Everything so far assumes that each chain is individually secure. If both chains are secure, then sure, you can make them communicate with each other. Security requires that you have something that is high-cost, and something that is high-cost is not scalable. So using a sidechain for scalability can be harmful: if you try to use the sidechain mechanism for scalability, it will inherently be less secure, and what if the sidechain does not want to let you get out? What if the Liquid sidechain suddenly becomes the devil and says, no, your money's gone, you're done? We don't want that.
The taxonomy we have for sidechains, then, is this. There's the federated sidechain, which is the multisig approach, like Liquid. There's the proof-of-work sidechain, with NIPoPoWs for logarithmic SPV proofs, and with reorganization proofs, because, as we all know, I hope, when you have a proof-of-work chain that forks, you want to be able to punish for that fork. And there's the proof-of-stake sidechain, where you basically have a multisig which gets rotated each time the stake changes during elections, and you also add slashing for equivocation. And the thing is that you always trade security for scalability if you want to use it that way. There is a great paper released last week, actually, by Zamyatin and friends, on communication across distributed ledgers. Right now they're doing a workshop on it, which is very unfortunate because it conflicts with this talk, so make sure you read the paper. To conclude my point about sidechains: they're an interoperability solution, not a scalability solution. You need an independent security model, and the moment you have an independent security model, my argument is that you're not in the layer-two space. It's a layer one that talks with other layer ones; it's on the same level. So how do we scale them? What's going to happen? Off the chain. Patrick McCorry is around here; everything that's working on off-chain scalability requires that you have a layer one, a layer two, and some mechanism to make them communicate. You need to put the minimum amount of data on chain, because the chain has a finite amount of space, and if you're going to support one million devs, you're not going to have all that capacity. Maybe Ethereum 2 is going to do it, but what if it doesn't? So what's layer two? I call it a delayed settlement protocol with layer-one guarantees.
So you have a protocol where you lock some money on the layer one, then you perform some off-chain operations, and then you have guarantees, equivalent to your layer-one security model, that you'll be able to get your money out. And there are two dominant approaches right now. There's the commit-chain approach, which is what Plasma is, and rollups, and NOCUST, and they have certain different trade-offs, for example whether they can do smart contracts or not. And there are channels. The dominant channel approach is Lightning for Bitcoin, and as far as I can tell there is some state channels initiative, into which, like, Counterfactual and others merged, because it was a very hard task. They have different properties, but we're not going to talk about channels in this talk. So let's dive into the commit-chains. First we had Plasma, in 2017, when Vitalik and Joseph Poon published this paper. Nothing in the paper worked, but it describes, basically, a mechanism where you have an operator that takes hashes of the sidechain state and puts them on the layer one. And the security of the commit-chain, or the Plasma chain, comes from the assumption that any time something bad happens, you can take some fraud proof from the Plasma chain state, put it on the layer one, compare it to the hash that was committed earlier, and get your money out within some time. It has a security assumption that you must be able to get your fraud proof in within, say, a week's or two weeks' time. This is a security assumption that very security-oriented folks will argue against, but again, that's not the topic of the talk. The problem with the Plasma construction is that the operator has sole discretion over giving you the data. So what if they don't?
Like, you have some state, and they create a Merkle root of the state, or of the latest UTXO set, and they commit it, but they don't give you the data. You have a problem then, because they can create an invalid state transition, and this invalid state transition will never be revealed, and you will never know that you no longer have your money. So there have lately been changes to the Plasma-like protocols to fix that. And it turns out that maybe, yes, maybe Plasma was a premature optimization. Many people might have raised money on Plasma, on something that might be broken. What can we do? That's how technology moves forward. So how are we going to solve the data availability problem? We cannot. So what are we going to do? Instead of having off-chain data with fraud proofs, we'll put the data on-chain with fraud proofs, and that's what we call optimistic rollup. This is what the Cryptoeconomics Lab and Plasma Group teams are currently working on, which is basically that you take all the data that is off-chain, you create a compact encoding of it, and you dump it into the calldata of the layer one. And that's kind of cheap, because calldata is not part of the state; it's just part of the history. Another, independently thought-of construction was called Merged Consensus, by Mikerah and John Adler, which basically says what I just said: you commit the Merkle root, and you also put up an encoding of the transactions, and you use the layer one, basically, as a data availability oracle. And then, if you take out the fraud proof and put in a validity proof, a ZK-SNARK, a STARK, whatever they call them these days, you have a ZK rollup.
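A toy version of the optimistic-rollup idea just described: the operator publishes both the claimed post-state root and the transaction data, so anyone can re-execute the batch and produce a fraud proof on a mismatch. `state_root` here is a stand-in for a real Merkle commitment, and the account model is invented for the sketch:

```python
import hashlib

def state_root(balances: dict) -> str:
    """Stand-in for a Merkle root over the account state."""
    return hashlib.sha256(repr(sorted(balances.items())).encode()).hexdigest()

def apply_txs(balances: dict, txs: list) -> dict:
    """Re-execute the published transactions against the previous state."""
    out = dict(balances)
    for sender, receiver, amount in txs:
        if out.get(sender, 0) < amount:
            raise ValueError("invalid transition")
        out[sender] -= amount
        out[receiver] = out.get(receiver, 0) + amount
    return out

def fraud_proof(pre: dict, txs: list, claimed_root: str) -> bool:
    """True if the operator's commitment does not match re-execution.
    Because the tx data is on-chain, anyone can run this check."""
    try:
        return state_root(apply_txs(pre, txs)) != claimed_root
    except ValueError:
        return True  # the published batch itself is invalid

pre = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 4)]
honest_root = state_root(apply_txs(pre, txs))
assert not fraud_proof(pre, txs, honest_root)  # honest commitment
assert fraud_proof(pre, txs, state_root(pre))  # operator lied
```

The contrast with Plasma is the input to `fraud_proof`: here `txs` is guaranteed to be available, because it was posted as calldata alongside the root.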
And I'm saying this just so that we can get past the word salad and understand the bits and pieces that make up a mechanism. ZK rollup means that you commit the new Merkle root of the latest state, plus the transactions that happened, and you take a zero-knowledge proof, which attests that this state transition is valid, and you put it on chain. The smart contract verifies that the state transition was valid, with moon math. The problem with this is that maybe you need a trusted setup. As we saw this September, when, like, five new SNARKs were released, maybe we can remove the trusted setup, but I wouldn't build on cryptography that was released in 2019. And they can be expensive: proving times are expensive, verifiers are also expensive, and they can be slow. So, again, no free lunch. And a note on on-chain data availability, because it has been pumped as the idea that is going to solve all our problems: I'm not a fan, because the blockchain is supposed to be the verification layer. The blockchain is not file storage, and Filecoin and other chains that are going to do that do not exist yet, so we know that this is a hard problem and maybe it won't get solved. Solving the data availability problem will give us the ability to do DeFi and all the other use cases that we're trying to figure out for this industry on the layer two, cheap and fast. But on-chain data availability reduces your scalability benefits. You cannot have infinite throughput, like the other layer twos, because, again, you're bounded by the layer one's capacity, so it's really a constant-factor improvement. And it's also parasitic, in my view. By parasitic, I mean that the moment you start utilizing the chain so heavily, what about the other apps that are not on your layer two, your rollup, or whatever you call it? Are they all going to come to your rollup?
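The ZK-rollup control flow above can be shown in miniature. `verify_proof` here is a stand-in for a real SNARK/STARK verifier, the "moon math" in the talk; the sketch shows only the contract-side logic, namely that no state update happens without a proof that checks out:

```python
# Minimal ZK-rollup contract logic; class and parameter names are
# illustrative, and the verifier is injected so the cryptography
# stays out of scope.

class ZkRollup:
    def __init__(self, genesis_root, verify_proof):
        self.root = genesis_root
        self.verify_proof = verify_proof  # stand-in for a SNARK verifier

    def commit(self, new_root, proof):
        """Advance the state root only if the validity proof attests
        that (old root -> new root) is a valid transition."""
        if not self.verify_proof(self.root, new_root, proof):
            raise ValueError("invalid state-transition proof")
        self.root = new_root

# toy verifier: a "proof" is just the pair of roots it attests to
toy_verify = lambda old, new, proof: proof == (old, new)

rollup = ZkRollup("r0", toy_verify)
rollup.commit("r1", ("r0", "r1"))
assert rollup.root == "r1"
```

Compared to the optimistic variant, there is no challenge window at all: an invalid transition is rejected up front instead of being reverted by a later fraud proof.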
Like, I think that's too ambitious. And there is a post by Vitalik recently where he elaborates on this, and he's a fan of the idea, which, yeah. Some takeaways from this talk: we know how to do both proof-of-work and proof-of-stake sidechains, but it's hard to implement them; you need them both to be individually secure, and having an honest-majority assumption hold for more than one layer one is hard, as we have seen from multiple 51% attacks on smaller chains. Layer two inherits security from layer one, and I have a small taxonomy of what goes where. I'm a fan of the on-chain data availability direction; this seems to be the current direction we're going, but we should be very skeptical about how much we're just dumping on the layer one, because nobody can sync an Ethereum full node, and this is not going to help with it. The conclusion is: sidechains are for interoperability, layer two is for scalability, and sidechains are not layer two. Thank you very much, and I'll be happy to take questions for three minutes. Also, yeah, I should have changed this: you can find the talk at sidechains2019.pdf. There's a microphone if anybody wants to ask a question; we have three minutes. [Audience] Hi, thank you very much for the talk. You gave, I think, very detailed criticisms of both layer-two and sidechain solutions, without highlighting too much of their positive aspects. What solutions are you actually fond of? [Georgios] I'm fond of the Plasma construction for simple state transitions, for payments, because this industry solves payments, and I'm happy if I can get a construction that can do multisigs and timelocks, and that's it. And Plasma does that fine right now. [Audience] So Plasma for...? [Georgios] Plasma for simple payments. I do not care much about smart contracts; multisigs and timelocks are sufficient. [Audience] Okay, thank you. Thanks for the breakdown.
One thing that you didn't talk about, which I thought would be interesting to hear your perspective on, is hybrid approaches for rollups: where we use, you know, proofs of work from other chains for the data availability, but do the fraud proofs on another chain. [Georgios] Right, so the question is about a rollup where I don't post all the data on the chain doing the fraud proofs, only for specific state transitions. [Audience] So it's using data availability from something other than the L1 that you're settling on. [Georgios] No, I do not think that's good, because your security model changes wildly. The security model that I want is: I have the layer one, and I know exactly how secure it is. I know that I have the Bitcoin chain, it has this much censorship resistance and this much capacity, and I can dump all this data there. The moment that I'm going to use Bitcoin Cash as a data availability layer, which is what you're talking about, means that I have to trust Bitcoin Cash's miners. And do you know how easy it is to censor Bitcoin Cash? Very easy: something like 10% of Bitcoin's hash rate. So no: if you're going to dump data somewhere, dump it on one chain, in my view. [Audience] Okay, okay, thank you so much.