distributed consensus. My name's Elaine Shi, I'm a professor at Cornell, and I'm very honored to moderate this panel. We have four distinguished panelists. I'd like to begin by having each panelist spend two minutes to introduce themselves, and maybe also make a short, concise statement or pitch regarding distributed consensus. Then I'm going to ask questions, and then I'm going to open it up for the audience to ask questions. Because we have a big room, for those of you who want to ask questions, I'm going to ask you to move to the front so everyone can hear you. Okay, all right. So hello everyone, who here does not know my name? Okay, I see some hands raised, good. I'm Vitalik, chief scientist of the Ethereum Foundation. Earlier this morning I gave an introduction to what Ethereum is from a technical standpoint, but I also mentioned that later today I'd be talking a bit more about some of the things that we want to do in the future of Ethereum. One of the big threads of research and development that we have at the Ethereum Foundation is Casper. Casper is a category of proof of stake consensus protocols that we have been thinking about, developing, and researching over the last few years, based around a couple of principles. One of them is obviously decentralization and Byzantine fault tolerance, which is really a synonym for decentralization, because both words basically mean that one single guy can't screw up and break the whole thing. Then we also have this idea of economic security, which I'm sure Vlad will talk about much more, and I really have to credit him for pushing hard on putting that at the front of the agenda. Personally, I have been spending my time more specifically on pushing forward Casper the Friendly Finality Gadget, or Casper FFG. This is one of the two algorithms in the Casper family.
And the goal of Casper FFG is to be something that's as simple as possible, and as simple to graft onto existing proof of work chains as possible, while at the same time theoretically being a fully-fledged, provably asynchronously safe, Byzantine fault tolerant consensus algorithm. The idea is that it takes PBFT-like ideas, Lamport Byzantine-generals-paper-like ideas, and basically translates them into a blockchain context and simplifies them substantially, to the point where, as Chiang Rui showed earlier today, the consensus rules, slashing conditions, and fork choice rules are fairly simple to implement. So the goal of Casper FFG is basically to be a proof of stake that can be implemented fairly quickly, overlaid onto any proof of work chain today, with an eye to, in later stages of the roadmap, implementing more advanced things like full proof of stake, that is, getting rid of the proof of work completely, as well as some of the more advanced features that Vlad wants to include. Yeah, thanks, Vitalik. So I definitely feel like the proof of stake research that we've been doing over the years has bifurcated into two threads, the economic stuff and the consensus stuff, and at the end of the day they end up being really tightly connected, because the limit to our ability to incentivize the nodes is our ability to infer what behavior they actually did, and the limits there are really limits to fault attribution, which has to do with the mechanics of the protocol, right? Which is why we want not just to prove that it's fault tolerant, but to prove that when there are faults, we can find them, right? So that we can penalize the faulty nodes, so that not only can we tolerate faults, but we also disincentivize them. And I think that both sides of the research have come a long way and actually feed into each other in a very tight way.
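The slashing conditions mentioned above can be sketched roughly as follows. This is an illustrative rendering of the two published Casper FFG conditions (no double vote, no surround vote), using hypothetical field names rather than the actual Ethereum encoding:

```python
# Sketch of the two Casper FFG slashing conditions (illustrative only;
# the dict representation and field names are hypothetical, not the
# production Ethereum encoding). A vote links a source checkpoint epoch
# to a target checkpoint epoch and is signed by a validator.

def is_double_vote(vote_a, vote_b):
    """Two distinct votes by the same validator for the same target epoch."""
    return vote_a != vote_b and vote_a["target"] == vote_b["target"]

def is_surround_vote(vote_a, vote_b):
    """vote_a's source..target span strictly surrounds vote_b's span."""
    return (vote_a["source"] < vote_b["source"] and
            vote_b["target"] < vote_a["target"])

def is_slashable(vote_a, vote_b):
    """A validator who signed both votes can be slashed."""
    return (is_double_vote(vote_a, vote_b) or
            is_surround_vote(vote_a, vote_b) or
            is_surround_vote(vote_b, vote_a))
```

Because both checks are simple comparisons over two signed messages, a misbehavior proof is just the pair of conflicting votes, which is what makes on-chain verification and penalization cheap.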
To me, the thing that's most exciting today is the stuff that I just showed, right? The fact that I think we can get, I mean, I'm sure we can get asynchronous safety, with any fault tolerance threshold you pick, at a low consensus overhead, is I think really cool and, I think, genuinely new technology. And I want to echo what Vitalik said: I think the simplicity of the protocol specification and the simplicity of the proofs is paramount to our design philosophy, our design goals. Hello, I am Peter from Parity Technologies. At Parity we've been working on consensus, and in particular on implementations of consensus, for a while, and we've taken a step-by-step approach. It's now somewhat converging with a lot of the research that is coming out, but we started by separating out the validator set, the piece that decides who the current set of validators is. That might be determined by their stake, or by the support of other parties, which would be proof of authority. And then, separately, the consensus algorithm itself. We started off with a very simple consensus algorithm based on issuing proposals in rounds, somewhat similar to what Vlad talked about at the end, where validators issue blocks sequentially. Then we added finality: a module that looks at the blockchain at any particular point and sees who signed off on which chain, and if someone signed off on a chain, that contributes to finalizing that particular chain. Now I think the next interesting step is introducing new message types, enabling validators to finalize blocks not only by releasing blocks but also by releasing simple statements of validity of a particular chain, speeding up how fast blocks can be finalized in these types of algorithms.
And of course working more on the validator set, so more on the economic side of those protocols. The validator sets as they are right now support providing misbehavior proofs, and those misbehavior proofs can depend on the type of consensus engine. They might be double signing, for example; the verification of the misbehavior proof happens on chain, and slashing can then be done based on that. Okay, well, thank you very much for having me here. My name is Emin Gün Sirer. I'm a professor at Cornell University, and I'm also a co-director of the Initiative for Cryptocurrencies and Contracts. There are a couple of things that people should maybe know about me, but the most important one goes beyond anything I've done in the past. I did play a pretty early role in cryptocurrencies with something called Karma in 2002 and 2003; it was one of the first implemented systems with proof of work minting in it. And many of you know that I played a role in anticipating the DAO hack and so forth. But the most important thing, I think, is the following, and it's partly why I'm so excited to be here. This is a very science-driven community, and in much of my work, what I really value is science-based design with strong guarantees. This is a domain where there are many, many, many reasonable-sounding protocols that anybody could come up with at any one time. You know, I essentially reach into my gut and I give you something: A sends a message to B, B sends a message to C, then this happens and that happens. There's been no shortage of white papers containing these kinds of what we call design-by-gut, or wishful-thinking design, or sunny-day design. And those are fine; they might work. But as we often find out, the devil is in the details, and what you should all demand as a community are actual hard proofs. In the absence of those proofs, these systems typically tend to falter.
So every answer that I give to any question is colored by that science-driven design. I'm very much restricted in what I can say publicly and privately, because I have to say only things that I know to be true. That has allowed me to actually be very, very correct and prescient in some ways, but it also means that I demand a higher bar for protocol design, and I hope you will too. And I think what we're seeing happen is that Casper is gradually getting to the point where it's actually beginning to reach that bar. We're finally getting to the point where we understand the workings of the protocol better, and we will hopefully soon have something ready to roll out. Okay, well, thank you so much for the excellent statements. I'm going to begin by asking a couple of questions, and if you would like to ask a question, I'm going to ask you to move to the front, to the yellow line, and then we will be able to pass you the mic. Okay, so my first question is regarding the debate between proof of work and proof of stake. We all know proof of work is probably not what we want in the longer term, because of the enormous energy waste. But there have been a lot of debates in the community about the security of proof of work versus the security of proof of stake: some argue proof of work is more secure, others argue proof of stake is more secure. For Ethereum, for instance, one reason why you guys are moving to proof of stake, obviously other than the energy waste, is that you also want to have an incentive-compatible protocol. But I wonder what you guys think of this question, the security of proof of work versus proof of stake. In particular, imagine I'm not a rational player; I just want to attack the system and break consistency. Do you think the proof of stake approach actually increases the cost of such an attack? Yeah, sure, I would love to.
So firstly, I would say, okay, proof of work is not asynchronously safe, right? One of the main differences between proof of stake and proof of work is that in proof of stake you can actually achieve finality: decisions that are not going to be reverted and that are safe under asynchrony. I think that increases security a lot. And in terms of the damage that an attack could do, the fact that blocks can't be reverted arbitrarily really mitigates that. Additionally, the fact that when a safety failure does happen the faults are attributable, and the nodes who attack can lose money, means that attacking nodes operate at a higher cost than non-attacking nodes, which is not the case in proof of work. So for those reasons and more, I think proof of stake is going to be more expensive to attack, and your attack is going to be more limited. Yeah, so let's say that you are, just to be politically neutral, the government of the Democratic People's Republic of Korea, and you want to destroy a proof of work chain, right? So what do you do? Step one, we could imagine a very direct, full-frontal attack. You get a billion dollars, and, you know, okay, fine, some of your people can go without food, you get a billion dollars and you buy a whole bunch of ASICs, and then you just launch a 51% attack on the network. Now, what are the developers going to do in response? Well, they have exactly one strategy, which is to change the proof of work algorithm. So let's say they do that. This is basically literally the best possible counter move, because if you don't do that, then they can do what's called a spawn camp attack, which is to just keep on attacking over and over again and make your chain permanently useless.
So you change the proof of work algorithm, which means the ASICs become useless, which means the attacker has lost a billion dollars, but all of the good-guy miners lost all of their money as well, right? That's one area where proof of work already starts to show a weakness, because it means that if an attack is going to happen, there's actually fairly little incentive to be on the non-attacking side versus the attacking side. So then, okay, it's down to GPUs. Now let's say you get another billion dollars and you basically corner the market on GPUs, and you launch another 51% attack. This time GPUs are general-purpose hardware, and so the developers have no other counter move, right? That's basically step three of the game, and that's game over. The North Korean government could just keep on doing the spawn camping, keep on censoring transactions, doing 51% attacks. You cannot change the proof of work algorithm again, because if you change it again, well, GPUs are going to work for the new algorithm as well. So this puts an upper bound on the amount of damage that a proof of work chain can take, which means that a proof of work chain, in order to survive, has to make this cost of attack very high, which is basically this doctrine of survival through domination and massive hash power. Now let's look at proof of stake. Okay, so Uncle Kim buys up $100 million of Ether. Then, okay, fine, BFT has a 34% threshold, so you can definitely launch a 34% attack, or a 51% attack. So okay, you launch an attack, and let's say this breaks finality or censors transactions for a while. Well, guess what, the economics of Casper are designed in such a way that if you do this, then, worst comes to worst, the community can coordinate a hard fork.
This is a totally legitimate move, because even the proof of work people agree that changing the proof of work is a legitimate move to counter a 51% attack, and that's also a hard fork. So the community does a hard fork to fork away from the attack. Now, the difference here is, first of all, the attacker actually does lose most or all of the money they used to attack, and second, anyone who did not participate in the attack does not lose any money. So okay, step two of the game. Basically, three days of chaos happen, the community recovers, there's a hard fork. Well, okay, buy up another $100 million, do another attack. Okay, three days of chaos, the community recovers. Buy up another $100 million of Ether, do another attack. Well, soon enough the community is going to realize that this is what's happening. Eventually Uncle Kim is going to run out of money, and so they're just going to sit quietly, and people are going to start buying up Ether, and each and every one of these attacks is going to keep increasing the price of Ether, because it's basically taking $100 million of Ether off the market each time and permanently destroying it. So eventually you run out of money and the community wins, right? The moral of the story is that because in proof of stake the assets used to stake are defined inside the system, you have a huge amount of flexibility to make rules that are extremely lopsided in the defender's favor. And that basically means that if the attacker has some amount of money, then they can only break the system a bounded number of times.
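The repeated-attack argument above can be put as a back-of-the-envelope model. The burn-on-slash behavior and the $100 million per attack follow the discussion; everything else (the budget, the supply figure, measuring everything in dollars' worth of Ether) is an illustrative assumption, not a protocol parameter:

```python
# Back-of-the-envelope sketch of the repeated-attack economics described
# above. Assumes, per the discussion, that each attack costs the attacker
# their full stake and that the slashed stake is permanently destroyed.
# All quantities are in dollars' worth of Ether; the numbers are
# illustrative, not protocol parameters.

def simulate_repeated_attacks(attacker_budget, stake_per_attack, supply):
    """Return (attacks the budget affords, remaining circulating supply)."""
    attacks = 0
    while attacker_budget >= stake_per_attack:
        attacker_budget -= stake_per_attack  # attacker buys the stake
        supply -= stake_per_attack           # slashed stake is burned
        attacks += 1
    return attacks, supply

# A $1B war chest at $100M slashed per attack forces at most 10 hard
# forks, and each round permanently shrinks the circulating supply.
attacks, supply = simulate_repeated_attacks(
    1_000_000_000, 100_000_000, 10_000_000_000)
```

The point of the sketch is the asymmetry: the attacker's budget strictly bounds the number of forced forks, while non-attackers lose nothing in each round.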
Now, one really nice goal here is of course to see if we can actually formally prove a bound on the amount of money you have divided by the number of times you can force the community to hard fork, which would by itself be a very interesting challenge; at the least, that by itself shows the asymmetry. So, to pile on to both Vlad's and Vitalik's responses, I think what's going on here is fairly straightforward. With proof of work, we're at the mercy of hardware trends. That's it. The control of the security of the system is out of our hands. As trends change, as people invent new technologies, say with various process-level tricks to pack more transistors, to pack more hashes into the same die area, as people come up with better cooling tricks, et cetera, we're going to see hash power shift hands, and we will have absolutely nothing in our hands, no knobs we can twist to control what happens. I think everything that you heard from Vitalik was fundamentally rooted in that cause. With proof of stake, we have the ability to put independent, individual knobs on each and every one of the participants in the system. This is huge. It gives us multiple additional levels, multiple dimensions, of control over the system. It also makes the problem much more difficult. That's why there are multiple Casper variants. That's why it's actually hard to figure out exactly what you should do: the design space is so big. And so if you can conquer the design challenge, if you can come up with a protocol whose properties you can prove, then the potential for having a secure protocol is much higher for proof of stake than for proof of work. Yeah, I think, as was mentioned previously, with proof of stake the distribution of control and of stakers can be much wider. Anyone that has any sort of use of the network can submit a stake and in a way be invested in the security of the network.
And there's much less overhead to holding a stake in the network than having to buy a miner and mine on the network, or buying a share in a mining company. It's a much more direct connection to the protocol itself. But at the same time, I think another problem that can arise is that if we have many participants in the consensus, it becomes very expensive to run the consensus process amongst all of them. So a lot of systems have something like delegated proof of stake, where only a few parties actually participate in consensus while everyone else is just delegating to someone else, and that can potentially lead to issues with very few parties having a lot of control over the network. So I think an important thing that we'll need to think about is how to manage this balance between distributing the power to every single person that wants to have it, and maintaining security, being able to run this consensus in an efficient way. Just a really quick add-on: even if the concentration of proof of stake were the same, the fact that there are lower barriers to entry in proof of stake means that it behaves in a more competitive way. So even though I totally agree, I think stake is going to be less concentrated than work, it's going to be more competitive even at the same concentration. Because of the liquidity. Because of the lower barriers to entry. Yeah, the lower barriers. Yes, I really like your point. Let's take a question from the audience. Cool. Is there concern for link rot with the Casper implementation? What do you mean by link rot? Link rot, like something is maybe staked and validated and placed in the chain, but the chain that it's representing no longer exists after some time. By exist, do you mean it can no longer be downloaded? Or do you mean? Right, yeah. Like some of the people who were part of this chain that was then placed in the main chain, that doesn't exist anymore, or the users are offline?
You seem to be describing a scheme that involves multiple chains, so is this a question about sharding? Yeah, I think so. Yeah, okay. Just to fast-forward ahead about one hour before we rewind back to here, I guess the answer is that there definitely are, in general, worries that if the system gets sharded to the point where a very small number of nodes end up storing each individual piece of data, then it is totally possible that some portions of the state could just end up getting lost. Now, there are ways to make the protocol more capable of surviving that, there are ways of mitigating that, there are ways of incentivizing storage, and you could also just limit the sharding coefficient so that you still have, realistically speaking, a few hundred people storing each piece of data, so that it's safe enough. This is a very active area of research. Thank you. Okay, we can take maybe one more question from the audience, and hopefully I get to ask my last question. Okay, here's a question on the economic mechanics of proof of work versus proof of stake. Obviously, right now, proof of work in Bitcoin, for example, or Ethereum, is useless outside of Bitcoin and Ethereum; you're just doing a bunch of hashes, or computing a hash over the DAG. But say you had a proof of work, if you could somehow construct a proof of work that wasn't pre-minable, that solved an interesting problem like protein folding or traveling salesman, then in proof of work you have this nice benefit, which you kind of lose in proof of stake, that you can say: look, here we have Moore's Law and the maximum processing power, sorry, the maximum manufacturing capacity in the world, and say in the next year the hash rate could only grow up to this point, right?
And if the hash rate grows beyond that point, then, if the price remains the same, it means that the growth of the chain's hash power has exceeded the Moore's Law curve, so there has been more economic interest in the chain. And you can actually have a total upper bound, like if 100% of all new processors end up being used for the proof of work; obviously you can't have that, because that makes the rest of society unviable, and therefore there's a good balance there that you can compute. In proof of stake this isn't the case, because, say, 10% of Ether can be staked at any one time, or something like that. The security of the chain from one year to the next really depends on the ratio of the chain's total value to the world's purchasing power, the world's GDP. So if the world's GDP is way up and the chain becomes much more valuable, because the price is really a lagging indicator of that, somebody could come in and end up mounting a 51% attack on the chain, or holding a disproportionate stake in the chain, because it would be valuable for them. And if they do buy a lot of stake to do a 51% attack, they could short, right, right before they do the attack, which makes it cheaper or perhaps worthwhile. So how do you balance, how do you approach the mechanism design science so that you balance these incentives, like making it expensive to short right before you do an attack, and keeping more predictable bounds on how the consensus behavior can change from one year to the next? So let me tackle the first part of this question, which had to do with useful proofs of work. That's an idea that comes up over and over again; I've seen it in many different forms. So first of all, coin designers of different kinds typically shun this idea for a very simple reason.
The main reason they cite is that if it were the case that minting a Bitcoin corresponded to doing 50,000 protein folds, and we knew the current cost of doing one protein fold on Amazon Mechanical Turk or whatever, that would anchor the price of a Bitcoin, and people don't want that. They want it to go to the moon, and if it is the case that a protein fold is five cents, then you are stuck with the Bitcoin price being five cents times 50,000, at least in people's minds, and that would create some kind of a price barrier. So you see coin designers shunning this idea. In response, academics have looked at this and come up with different ideas for useful proofs of work, using things like secure hardware to show that a person actually dedicated their CPU to doing something of importance to themselves, where it's up to you what that was. There is some interesting work coming out of academia on that front, so that maybe goes toward addressing that issue. The second part of your question, I think, had to do with what happens when somebody is shorting and attacking, and analyzing those kinds of situations is always difficult, but what can one do, right? Essentially we can reason about coins in the system, and we can model the amount of damage one can do to the external value of the coin as well. We haven't seen much analysis of that kind, but I hope that moving forward we will see people take that into account, because the number of coins doesn't matter; what matters is the external market value. In terms of shorting, if the value is going away completely, then you will not be able to close your short anyway. And in terms of purchasing power, this is also something that in general grows, and we could argue that it might be growing at a similar pace to advances in computing hardware. Okay, let's take one more question from the audience. Okay, just a quick question about economic finality.
I mean, you mentioned that an attacker might buy coins, might buy Ether, and launch an attack, and then the community might realize it's an attack and slash him and hard fork and so on. But do you think that a commitment to attack, yeah? Say I attack in the first step, then I redo my attack, I do it again, I do it again, I do it again. Might some commitment to attack be community-destroying, and also, for example, incentivize people to sell the coin and leave the protocol? Well, no, because, as I said, I basically believe that if we can make Casper work well enough, then attacks will make the Ether price go up, and so a commitment to attack will incentivize people to buy, buy, buy. That, in my view, is kind of the crypto-economic holy grail. I mean, I agree with Vitalik here. I think the question you're asking is: if someone commits hard enough to attacking you, can they demoralize everyone and have them no longer believe the story that we will just sell the coins to the attacker at higher and higher prices, right? And the question of whether people will be demoralized or not is really about states of knowledge, and confidence in the belief that you can't conduct this attack over and over again if it costs you a significant amount every time. And I think if people really believe that, then it shouldn't make you lose confidence. You should know that, okay, this is only going to happen a finite and small number of times, and therefore you won't just dump your coins in response to the fear of a persistent attack. So basically, you know, can someone commit to attacking the protocol and thereby demoralize the Ethereum community to the point where the price goes so low that the security goes to zero? In theory, but I don't think it's very likely. Okay, we have time for one last question.
Hey, my understanding with Casper is that if I have been offline for a while and then I come back online, I need the genesis block, a fork choice rule, and the current validator set. So I was wondering if you could talk a bit about the attack surface of that validator set being subjective, and whether there are ways to mitigate that with Casper when we switch over. But you only have 55 seconds. Yeah, you don't use the genesis block, you use something more recent. Yeah, the idea is that there is a kind of withdrawal limit, a revert limit, and so there might be some period, say four months, and as long as you log in within that period of time, then you can authenticate the current state based on the information that you know, given the previous state that you were aware of at the last time you logged off. If you log off for too long a period of time, then it does become possible to make attacks where the attacker does not pay penalties. But if you log on more frequently, then that cannot happen. Okay, all right. So let's thank the panelists for an excellent panel, and thank you very much for your participation. Thank you very much. Our next speaker is Jason Teutsch, introducing the Truebit virtual machine. Thank you, panelists.