Okay, hello, everyone. How are you? Happy anniversary of Satoshi's white paper: ten years. That's 1010 in binary counting. Okay, so today I'm going to talk about Ethereum 2.0, not just from a technical point of view, but more from the point of view of why Ethereum 2.0, what is Ethereum 2.0, and how we got here. So, what is Ethereum 2.0, first of all? Ethereum 2.0 is a combination of a bunch of different features that we've been talking about for several years, researching for several years, and actively building for several years, which are finally going to come together into one coherent whole. These features include proof of stake (Casper), scalability (sharding), virtual machine improvements (eWASM), improvements to cross-contract logic, improvements to protocol economics, and really the list goes on and on. So, lots of great stuff. Now, how did we get here? The road to proof of stake actually started way back in 2014, with a blog post that I published in January describing an algorithm called Slasher. Slasher introduced what is really the most basic concept in a lot of proof of stake algorithms: the idea that if you get caught doing something wrong, this can be proven, and you can be penalized for it, and that this can be used to increase security. But at the time, as you can see from the slide, I wrote, quote, Slasher is a useful construct to have in our war chest in case proof of stake mining becomes substantially more popular or a compelling reason to switch is provided. So, at the time, it was not even clear that proof of stake was the direction we were going, but as we know now, over time that changed quite a lot. So, what happened in 2014? First of all, we went through a bunch of interesting but aborted ideas.
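The core Slasher principle described above (misbehavior that can be cryptographically proven gets penalized) can be sketched in a few lines. This is a toy illustration, not the actual Slasher algorithm; all names and the offense chosen (signing two different blocks at the same height) are illustrative.

```python
# Toy sketch of the Slasher principle: provable equivocation burns a deposit.
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedBlock:
    validator: str   # who signed
    height: int      # chain height the signature claims
    block_hash: str  # identifier of the block that was signed

def is_slashable(a: SignedBlock, b: SignedBlock) -> bool:
    """Two distinct blocks signed by the same validator at the same
    height constitute provable misbehavior."""
    return (
        a.validator == b.validator
        and a.height == b.height
        and a.block_hash != b.block_hash
    )

def apply_penalty(deposits: dict, a: SignedBlock, b: SignedBlock) -> None:
    """Anyone can submit the two signatures as evidence; if they prove
    equivocation, the offender's deposit is destroyed."""
    if is_slashable(a, b):
        deposits[a.validator] = 0
```

The key property is that the penalty needs no trusted judge: the evidence itself is self-verifying, so any observer can submit it.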
Proof of proof of work was a suggestion to try to improve scalability. Hub-and-spoke chains: basically, you have one chain in the middle and a bunch of chains on the edges. This was a very early scalability and sharding proposal that tried to improve scalability for local transactions, but not for transactions that are global, that is, not for transactions that jump from one shard to another. Hypercubes: basically, the same idea, except the cube has twelve dimensions instead of three, so we can get even more scalability than with hubs and spokes by going with hypercubes. Unfortunately, for various reasons, this idea ended up getting abandoned, but someone else raised a big ICO to make it work, so I'm happy someone is trying it out. Still, in 2014 there was some real progress. There was the concept of weak subjectivity that we came up with, a semi-formal security model that tried to capture the question of under what conditions proof of stake deposits and slashing and all of these concepts are actually secure. We also got more and more certain that algorithms with much stronger properties than the proof of stake algorithms that existed at the time, things like Peercoin and all of its derivatives, were actually possible, and there was a growing understanding that there was some kind of proof of stake scalability strategy that you could somehow achieve through random sampling, though we had no idea how. And we had a roadmap. There was this nice blog post from Vinay Gupta in March 2015 where he outlines the four big stages of Ethereum's roadmap at the time. Stage one: Frontier, Ethereum launching, yay. Stage two: Homestead, which is going from alpha to beta. Stage three: Metropolis, which at the time was supposed to be about Mist and user interfaces and improving user experience, but since then the focus has switched more to enabling strong cryptography, though interface work is still going forward in parallel.
And stage four: Serenity, proof of stake. So, from now on, we're not going to call it Ethereum 2.0. I also refuse to use the word Shasper because I find it insanely lame. We'll call it Serenity. After this came a bit of a winter. We had a bunch of different abortive attempts at solving some of the core problems in proof of stake and some of the core problems in scalability. Research on Casper quietly began; Vlad quietly began doing all of his work on Casper CBC. One of the first interesting ideas was consensus-by-bet, where people would bet on which block becomes finalized next, and once more people bet on a block, that itself becomes information that gets fed into other people's bets. So the idea is that you would have a recursive formula where more and more people would bet more and more strongly on a block over time, and after a logarithmic number of rounds, everyone would be betting their entire money on a block, and that would be called finality. We actually took this idea really far. We created an entire proof of concept for it, and you could see it finalize blocks. We burned a lot of our time on this, but then that whole idea ended up going away once we realized how to make proper BFT-inspired consensus actually work sanely. Rent. This is the idea that instead of charging a big one-time fee for filling storage, we would charge fees over time: basically, for every day or every block or whatever that some storage slot stays filled, you would have to pay some amount of ether for it. There is this one EIP, number 35, that I tried to call EIP 103, but really it's EIP 35 because the issue number takes precedence.
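The rent idea just described is, at heart, simple arithmetic: a filled storage slot accrues a small fee per block, and a slot whose owner can no longer cover the accrued fee gets evicted from state. The sketch below is purely illustrative; the constant and the eviction policy are made up, not from any EIP.

```python
# Toy model of storage rent: pay per slot, per block, or get evicted.
RENT_PER_SLOT_PER_BLOCK = 10  # wei; illustrative value only

def rent_owed(slots_filled: int, blocks_elapsed: int) -> int:
    """Total rent accrued by a contract's storage over a period."""
    return slots_filled * blocks_elapsed * RENT_PER_SLOT_PER_BLOCK

def charge_or_evict(balance: int, slots_filled: int, blocks_elapsed: int):
    """Return (new_balance, evicted). If the balance can't cover the
    accrued rent, the storage is evicted instead of charged."""
    owed = rent_owed(slots_filled, blocks_elapsed)
    if owed > balance:
        return balance, True
    return balance - owed, False
```

The design question that generated so many iterations is what happens on eviction: whether the state can be resurrected later from a proof, who pays, and so on.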
And this was one of the really early ideas trying to formalize this concept, and we've had many iterations since then on how to implement rent maximally nicely. There was also the scalability paper that we tried to do back in 2015. This tried to formalize the idea of quadratic sharding and super-quadratic sharding, but it was very complicated. It had these very complicated escalation games, a lot of which were inspired by ideas of how escalation works in court systems, an analogy that I know Joseph, with Plasma, really loves to use as well, but we tried to use it for the base layer. Deep state reversions: basically, if something goes wrong, then potentially large portions of the state could get reverted, even fairly far into the future. So there was a bunch of complexity. Now, one of the fundamental problems that we couldn't quite capture, but that we were slowly inching towards, was the fisherman's dilemma. This is a very fundamental concept in sharding research that describes the big difference between scaling execution of programs and scaling availability of data. The problem is that with execution of programs, you can have people commit to what the answer is, and you can later play a game and binary-search your way to who actually made a mistake, and you can penalize everyone who made a mistake after the fact. The problem with data availability is that whatever the game is, you can cheat it, because you can just not publish any data at all until the mechanism tries to check whether you published it, and only then publish just the data that the mechanism checked for. This turns out to be a fairly large flaw in a fairly large class of scalability algorithms.
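The reason the fisherman's dilemma pushes research toward erasure coding plus random sampling is a one-line probability: if a producer withholds a fraction f of the (erasure-coded) data, a client that samples k random chunks fails to notice with probability about (1 - f)^k, and with erasure coding the producer must withhold a large fraction to make the block unrecoverable. The function below is a toy calculation under a sampling-with-replacement assumption, not any client's actual algorithm.

```python
# Toy data-availability-sampling math: chance that withholding goes unnoticed.
def prob_undetected(withheld_fraction: float, samples: int) -> float:
    """Probability that `samples` uniform random chunk queries all land
    on published chunks, so the withholding is never observed."""
    return (1.0 - withheld_fraction) ** samples
```

For example, if half the extended data must be withheld to block reconstruction, thirty samples already drive the miss probability below one in a billion.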
I wrote a blog post called "A Note on Data Availability and Erasure Coding" that describes some of the issues in more detail. So this was one of the things that delayed us, but even still, we were happily making progress; Ethereum was moving forward; we were on our way. Wait, then this happened. Okay, no more problems. Oh, wait, then this happened. So the DAO hack and the DoS attacks ended up consuming a lot of people's time and attention, delaying things by potentially up to six months. But even still, work moved forward: eWASM moved forward, the work on the virtual machine moved forward, and work on alternatives, things like EVM 1.5, moved forward. And people were still continuing to get a better and better idea of what a more optimal blockchain algorithm would look like from many different angles. After this, we started making huge progress, and very quickly. During all of this time, there were different strands of research going on. Some of them were around proof of stake and trying to do base layer consensus more efficiently. Some were around scalability and trying to shard base layer consensus. Some were around improving the efficiency of the virtual machine. Some were around things like abstraction, which would allow people to use whatever signature algorithms they wanted for their accounts; this could provide post-quantum security and would make it easier to build privacy solutions, among a bunch of other benefits. And there were protocol economic improvements. Really, all of these things were still happening all the way through. So at some point around the beginning of 2017, we came up with a protocol that we gave the rather unambitious name of minimal slashing conditions. Minimal slashing conditions was basically a translation of PBFT-style traditional Byzantine consensus.
So, the same sort of stuff that was done by Lamport, Dwork, Lynch, Stockmeyer, and all those wonderful people back in the 1980s, but simplified and carried forward into more of a blockchain context. The idea is that in a blockchain, you just keep having new blocks appear over time, and you can gain nice pipelining efficiencies by merging sequence numbers and views. Every time a new round starts, you add new data into the round. You can also have the second round of confirmation for one piece of data be the first round of confirmation for the next piece of data, and you can get a huge amount of efficiency gain out of all of that. So the first step was minimal slashing conditions, which had six slashing conditions. Then it went down to four. And finally, about half a year later, we ended up merging prepares and commits, and this gave us Casper the Friendly Finality Gadget: Casper FFG. Last year at Devcon, I presented a new sharding design that kept the main chain and created sharding as a layer-two system on top of the existing main chain, which would then get upgraded to being the layer one once it got solid enough. From Vlad came the Casper CBC paper; from our side, the Casper FFG proof of concept. December 31, 2017, 23:40, Bangkok time, because we happened to be in Thailand at the time: we pretty much managed to nail down the spec of what a version of hybrid proof of stake would look like. This version of hybrid proof of stake would use the ideas from Casper FFG, these traditional Byzantine-fault-tolerance-inspired ideas of how to do proof of stake, on top of the existing proof of work chain. So this would be a mechanism that would allow us to get to hybrid proof of stake fairly quickly, with a fairly minimal level of disruption to the existing blockchain.
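After merging prepares and commits, Casper FFG ends up with exactly two slashing conditions over a validator's votes, where each vote links a source checkpoint epoch to a target checkpoint epoch: no two distinct votes for the same target epoch (a double vote), and no vote whose source-to-target span strictly contains another of that validator's votes (a surround vote). A minimal sketch of that predicate, with illustrative type names:

```python
# The two Casper FFG slashing conditions as a predicate over a vote pair.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    source_epoch: int  # justified checkpoint the vote builds on
    target_epoch: int  # checkpoint the vote is trying to justify
    target_hash: str   # which block that target checkpoint is

def is_ffg_violation(v1: Vote, v2: Vote) -> bool:
    # Double vote: two distinct votes for the same target epoch.
    double = (v1.target_epoch == v2.target_epoch
              and v1.target_hash != v2.target_hash)
    # Surround vote: one vote's source-target span strictly contains the other's.
    surround = (
        (v1.source_epoch < v2.source_epoch and v2.target_epoch < v1.target_epoch)
        or (v2.source_epoch < v1.source_epoch and v1.target_epoch < v2.target_epoch)
    )
    return double or surround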
And then the theory was that we would be able to upgrade to full proof of stake over time. And we got very far along in this direction. There was a testnet; there were Python clients; there were messages going between different VPSs and different servers and different computers. It got very far. At the same time, we were making a lot of progress on sharding, so we continued working on the sharding spec. Eventually we had a retreat in Taipei in March, and around then, a lot of the ideas around how to implement a sharded blockchain seemed to solidify. So in June, we made a very difficult, but I think in the long term really very beneficial and valuable, decision. We said: hey, over here, we have a bunch of teams trying to implement hybrid proof of stake. They're trying to do the Casper FFG thing: build the Casper FFG implementation as a smart contract inside of the existing blockchain, and make a few tweaks to the fork choice rule. Then over here, we had a completely separate group trying to make a sharding system: a validator manager contract, later renamed the sharding manager contract, on the main chain, with a sharding system built on top of that. These groups were not talking to each other too much. Then, on the sharding side, it eventually became clear that we would get much more efficiency by making the core of the sharding system not a contract on the proof of work chain, but its own proof of stake chain, because that way we could make validation much more efficient. We would not have to deal with EVM overhead. We would not have to deal with gas. We would not have to deal with unpredictable proof of work block times. We could make block times faster, along with a whole bunch of other efficiencies. And we realized: hey, we're working on proof of stake here, and we're working on proof of stake here.
Why are we doing two separate proof of stake things again? And so we decided to just merge the two together. This did end up nullifying a lot of work that came before, but what it meant is that instead of working on two separate implementations, we were working together on one spec, one protocol, that would get us the benefits of Casper proof of stake and sharding essentially at the same time. So basically, instead of trying to go to one destination, then go to another destination, and then later on have this massive work of figuring out how to merge the two, we would just take a path that takes a little longer at the beginning, but the place it gets to is actually a proof of stake and sharded blockchain with the properties that we're looking for. In the meantime, we spent a lot of time arguing about fork choice rules. We got closer and closer to realizing that fork choice rules based on GHOST, the Greedy Heaviest Observed SubTree algorithm, which was originally intended for proof of work but repurposed by us for proof of stake, made a huge amount of sense. Justin started doing research on verifiable delay functions. We had a workshop at Stanford where we made a lot of progress on verifiable delay functions, and Justin is still collaborating with a lot of researchers from there. More ideas on how to do abstraction, the idea that individual users could choose their own signature algorithms for their accounts. More ideas on rent, which we decided to rename to storage maintenance fees for political reasons. And research. There was a lot of work on cross-shard transactions. For example, there is this post I made on cross-shard contract yanking, which generalizes the traditional distributed systems concept of locking into something that makes sense in this asynchronous cross-shard context.
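The yanking idea just mentioned can be sketched at a very high level: a contract is removed from its home shard (so it cannot be touched there, which plays the role of a lock), a receipt is produced, and that receipt is consumed on the destination shard to re-instantiate the contract. Everything below, including the receipt format and function names, is invented for illustration and ignores the asynchronous receipt-delivery machinery a real protocol needs.

```python
# Toy sketch of cross-shard contract yanking via a receipt.
def yank(shards: dict, contract_id: str, src: int, dst: int) -> dict:
    """Remove the contract from shard `src`; it no longer exists there,
    which acts as the lock. Return a receipt destined for shard `dst`."""
    state = shards[src].pop(contract_id)
    return {"id": contract_id, "state": state, "dst": dst}

def instantiate(shards: dict, receipt: dict) -> None:
    """Consume the receipt on the destination shard, recreating the
    contract with its carried-over state."""
    shards[receipt["dst"]][receipt["id"]] = receipt["state"]
```

The appeal over classic locking is that nothing ever waits on a remote shard: each step is a local operation plus a one-way receipt.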
I also wrote a paper on resource pricing, which includes ideas about an optimized and much more efficient fee market, along with how to do storage maintenance fees and why, and the different trade-offs between different ways of setting them. And Casey Detrio wrote a post on doing synchronous cross-shard transactions. And of course, in the meantime, Casper CBC research also expanded into Casper CBC's own brand of sharding, which is totally not called velarding, because Vlad absolutely hates that term. So, development. One of the key strategies that we tried to really push forward on the Ethereum 2.0 path is the idea of multi-client decentralized development. And this wasn't just because we have an ideological belief in decentralization. This was also a very pragmatic strategy: first of all, to hedge our bets against the possibility that any one software development team would not perform well. Second, because we have plenty of experience from the Shanghai DoS attacks showing that there are plenty of cases where, if one client has a bug, having other clients available allows the network to continue running. And also, we want to make the development ecosystem less dependent on the Foundation itself. So, the client that the Ethereum Foundation works on is actually the Python client. It has plenty of use cases, but Python as a language has inherent performance limitations, and so there's always going to be an incentive to try running the stuff built by the wonderful folks at Prysmatic and Lighthouse and Status and PegaSys and all the other teams that are popping up seemingly every month. So, soon, something which is totally not going to be called Shasper, Serenity, begins. Yay! Right, so, what is Serenity?
So, first of all, it's the fourth stage after Frontier, Homestead, and Metropolis, where Metropolis is broken down into Byzantium and Constantinople, with Constantinople coming very soon as well. And it's a realization of all of these different strands of research that we have been spending all of our time on for the last four years. This includes Casper, and not just hybrid Casper: 100% organic, genuine, pure Casper. Sharding, eWASM, and all of these other strands of protocol research. This is a new blockchain in the sense of being a new data structure, but it has a link to the existing proof-of-work chain: the proof-of-stake chain would be aware of the block hashes of the proof-of-work chain, and you would be able to move ether from the proof-of-work chain into the proof-of-stake chain. So, it's a new system, but a connected system. And the long-term goal is that once this new system is stable enough, all of the applications on the existing blockchain can be folded into a contract on one shard of the new system, a contract that would be an EVM interpreter written in eWASM. This is not finalized, but it seems to be roughly where the roadmap is going at this point. Serenity is also the world computer as it's really meant to be: not a smartphone from 1999 that can process 15 transactions per second and maybe potentially play Snake. And it's still decentralized, and we hope that by many metrics it can be even more decentralized than today. For example, as a beacon chain validator, your storage requirements at this point seem like they'll be under one gigabyte, as compared to something like eight gigabytes of state storage today, or the 1.8 terabytes that trolls on the internet seem to think the Ethereum blockchain requires for some stupid reason. Expected phases.
Phase zero: beacon chain proof of stake. In beacon chain proof of stake, the blockchain doesn't really hold any application information; it's kind of like a dummy chain. All that you have is validators, and these validators are running the proof-of-stake algorithm. So, this is kind of like halfway between a testnet and a mainnet. It's not quite a testnet, because you would be able to actually stake real ether and earn real rewards on it, but it's also not quite a mainnet, because it doesn't have applications, and so if it breaks, people are hopefully not going to cry as badly as they did when the Shanghai DoS attacks made everyone's ICOs go slowly. Phase one: shards as data chains. The idea here is that this is where the sharding part turns on, but it's a simplified version that doesn't do sharding of state; it does sharding of data. So, you can throw data onto the chain, and you could try to build a state execution engine on top of it yourself, but really the simplest thing to use it for is data. So, if you want to do a decentralized Twitter on a blockchain, you'll now have the scalability to do that, but you won't really have all of the state execution capability to build smart contract applications and all of the really fancy, complex stuff. Phase two: enable state transitions. This includes enabling the virtual machine, enabling accounts, contracts, ether, ether moving between shards, all of this cool stuff. And phase three and beyond: iterate, improve, add technology. So, expected features: pure proof of stake; faster time to synchronous confirmation, about 8 to 16 seconds. Now, notice that because of how the fork choice rule and the signing mechanism work in the beacon chain, one confirmation in the beacon chain involves messages from hundreds of validators.
And so, from a probabilistic point of view, it's actually equivalent to hundreds of confirmations of, say, the Ethereum proof-of-work chain. So, under a synchronous model, you should be able to treat one block as being close to final. Economic finality and safety under asynchrony come after 10 to 20 minutes. And fast virtual machine execution via eWASM, and hopefully a thousand times higher scalability; we will see. Post-Serenity innovation: improvements in privacy. There has already been a lot of work done here. For example, in Byzantium, we activated the precompiles for elliptic curve operations and elliptic curve pairings, and Barry Whitehat has been doing great work on building layer-two solutions that preserve privacy of coin transfers, voting, reputation systems, and so on, and all of this work could be carried over. Cross-shard transactions. Semi-private chains: the idea here is that if you want to build some application where the data is kept private between a few users, you could still dump all the data on the public chain, but you would just dump it in an encrypted form. Or you could dump hashes of it and use zero-knowledge proofs; your choice. Proof of stake improvements: there's definitely a place in our hearts, and in the roadmap, for Casper CBC, once it becomes clear that there is a version of it that makes sense from an overhead point of view. More post-Serenity innovation: at some point, we do have the door open to upgrade everything to STARKs, so using STARKs for signature aggregation, for verifying erasure codes for data availability checks, and maybe eventually for checking correctness of state execution; maybe stronger forms of cross-shard transactions; faster single confirmations, getting the confirmation time down from 8 seconds to even lower.
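The semi-private-chain pattern mentioned above, in its simplest hash-based form, is just a commitment scheme: the public chain stores only a hash, while the data itself stays with the participants, who can later check anything they receive against the on-chain commitment. A minimal sketch (the function names are illustrative, and a real deployment would also salt the data so small inputs can't be brute-forced):

```python
# Toy hash-commitment version of a semi-private chain.
import hashlib

def commit(data: bytes) -> str:
    """What goes on the public chain: just a hash of the private data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, commitment: str) -> bool:
    """Any participant holding the data can check it against the
    on-chain commitment; tampered data fails."""
    return commit(data) == commitment
```

The encrypted-data variant trades more on-chain bytes for guaranteed availability of the (encrypted) payload; the hash variant keeps the chain minimal but makes the participants responsible for retaining the data.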
Medium-term goals: eventually stabilize at least the functionality of layer one; think about issuance; think about fees; agree more and more over time on what specific guarantees people expect from the protocol and what features people expect the protocol to have for a long time; think about governance. Now, what's next immediately, before the big launch? Well, first of all, stabilizing the protocol spec. For those who have been watching this for a long time, you can go to GitHub: the spec has been moving fairly quickly, but it will stabilize fairly soon. Continued development and testing: there are something like eight implementations of the Ethereum 2.0 protocol happening now. Cross-client testnets: I believe Afri made a statement that he hopes to see cross-client work really picking up in Q1 next year. We'll see. It would definitely be nice to see a testnet working between two implementations. Honestly, it would be nice to see a testnet working with one implementation. As a quick historical aside: in Ethereum 1.0 development, the time between the conception of the white paper and launch was 19 months, and part of the reason it took so long is that we tried to get cross-client compatibility way before the spec was even finished. And so we had to agree, test, release the testnet, await protocol changes; agree, test, release the testnet, await more protocol changes; and we had about five cycles of this. This time around, we have the luxury of learning from that lesson, and we don't really need to focus on cross-client compatibility until we have something close to a release candidate of the spec. But I think we're actually not that far from a release candidate of the spec, at least for the limited portions that don't include state execution. So we'll see. Security audits.
Who here thinks security audits are important? Who here thinks security audits are not important? Who here thinks the world is literally controlled by lizardmen? Okay, so more people for the third than the second. That's good to hear. And then, once we're done with that: launch. Who here thinks launching is not important? Okay. Who here thinks that your favorite political candidate is literally a lizardman? Okay. So, launch. There we go. That's basically the milestone that we've all been waiting for, that we've been working toward for the last four to five years, and a milestone which is really no longer so far away. So thank you. What? Oh, sorry. Okay. Enjoy.