Okay, thank you very much. Hi everyone, my name is Yue Guo. I am a PhD student from Cornell University. Today I'm going to talk about our work on rethinking the classical synchronous model in distributed consensus, and our proposal of a refinement of synchrony that makes it more robust and partition tolerant. In this talk, I will first give an example of a protocol that works in a synchronous network, and then describe a scenario in which, with this protocol, a benign failure can cause real-world financial loss to users, to show why we think the classical synchronous network assumption is not suitable for practical use cases. To fix the problem, I will then present our proposal of a new model with better robustness and partition tolerance, and then give a fix to the protocol to make it work in the new model. Okay, so before talking about any specific protocols, let me first quickly recap the consensus goal we want to achieve. All of these public blockchain protocols like Bitcoin, Ethereum, or those PBFT-style protocols want to realize the state machine replication abstraction; that is, they want honest nodes to agree on a linearly ordered log. To be more specific, we want both consistency and liveness. By consistency we mean that all of the honest nodes should agree on the same log, and by liveness we mean that any transaction that appears in the network should be confirmed within a short time. So now let's take a look at a very simple voting protocol that works in a synchronous network. The protocol runs with N nodes; one of them is a proposer and the rest of the nodes serve as voters. Some of these nodes can be crashed, like these ones here are crashed. When some transaction comes, the proposer first signs the transaction with a sequence number and proposes this transaction to the voters. Upon seeing the proposal, the voters will vote for it.
The important thing here is that an honest node will only vote for one transaction for each sequence number. On collecting enough votes for the same transaction, the proposer uses these votes to form a notarization for the transaction and sends the notarization back to the voters. Upon seeing the notarization of a transaction, a node considers the transaction confirmed. So how many votes are enough to confirm a transaction? If we assume that fewer than one half of the voters are crashed, then we need to collect at least three quarters of the votes to confirm a transaction. The consistency proof is a simple pigeonhole argument and I will not go into the details of it, but it relies only on the honest-majority assumption. However, because now we need to wait for three quarters of the votes, the honest-majority assumption is not enough for liveness. If more than one quarter of the nodes are crashed, or if the proposer is crashed and just keeps silent, the protocol can get stuck. So the challenge here is how to achieve liveness with just the honest-majority assumption. Here we make use of a slow chain. We use this voting process as a fast path, and if something bad happens and the fast path stops, nodes will detect it and fall back to a slow chain which can guarantee both consistency and liveness under the honest-majority assumption, such as Bitcoin or Ethereum. When falling back, one of the challenges is that different nodes might have different views of the fast-path log. So nodes need to reach agreement on the fast-path log on the slow chain, and thus they are required to post all of their confirmed transactions onto the slow chain. After reaching agreement, they can re-elect the proposer or the voters to restart the fast path. So now we have a protocol with responsiveness: under the honest-majority assumption, we can guarantee both consistency and liveness because we rely on the slow chain.
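To make the pigeonhole argument concrete, here is a minimal Python sketch of why a three-quarters quorum gives consistency when fewer than half of the voters are crashed. The helper names are my own, not from the protocol; this is just the counting step of the proof.

```python
import math

def quorum_size(n):
    """Votes needed to notarize: at least three quarters of the n voters."""
    return math.ceil(3 * n / 4)

def min_overlap(n, q):
    """Any two vote sets of size q among n voters share at least 2q - n voters."""
    return 2 * q - n

def consistency_holds(n, crashed):
    """Two notarizations for the same sequence number must overlap in at
    least one honest (non-crashed) voter, and an honest voter votes only
    once per sequence number, so both notarizations name the same transaction."""
    q = quorum_size(n)
    return min_overlap(n, q) > crashed  # overlap cannot consist only of crashed nodes

# With n = 100 voters and up to 49 crashed (fewer than one half),
# two quorums of 75 overlap in at least 50 voters, so at least one is honest.
```

This also shows why liveness breaks: with more than a quarter of the voters silent, a quorum of `quorum_size(n)` votes can never be assembled.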
And if the conditions are good enough, we actually have a protocol that can confirm a transaction with one round of voting. Actually, the protocol I described here is a simplified version of Thunderella by Rafael Pass and Elaine Shi, published at Eurocrypt 2018. It is a peer-reviewed consensus protocol with rigorous mathematical proofs. However, as I said before, I am going to describe a scenario in which, with this protocol, a confirmed transaction can somehow be undone by the network, even if none of the nodes in the network is malicious. So in this scenario, a transaction comes, the proposer proposes it, the voters vote for it, and the proposer forms a notarization and sends it back to the voters. But right after sending the notarization to the Coinbase node, the proposer drops offline. So at this point, the Coinbase node is the only one who sees the notarization. The Coinbase node thinks this transaction is confirmed and tells the user: hey, this transaction is confirmed; say someone wants to buy a Ferrari. And the user thinks, okay, I received the money, so I deliver my car. But unfortunately, right after telling the user about the confirmation, the Coinbase node also drops offline. Now, all of the other nodes detect that the fast path has stopped for a long time, so they decide to fall back to the slow chain. As mentioned before, when falling back, every node is supposed to post all of its confirmed transactions onto the slow chain. The Coinbase node is also supposed to do so, but because it is offline, it fails to post this transaction. After a while, when the Coinbase node comes back online, all of the other nodes have already re-bootstrapped a new fast path, and that transaction is not included in anyone else's log. So in this case, the transaction has actually been undone by the rest of the network, and the user loses both the car and the money. It sounds horrible, right?
What I thought had been confirmed can be undone by the network, even though I didn't do anything wrong; perhaps I don't even know that I am offline, because I'm being DDoSed. So how can a provably secure protocol allow this kind of faulty behavior? That is because the protocol uses the synchronous assumption. So what is assumed in a synchronous network? In a synchronous network, messages between honest nodes are required to be delivered within Delta rounds. If any node fails to satisfy this requirement, it is itself considered dishonest, and the protocol never promises any consensus guarantee for dishonest nodes. But this assumption is not practical in the real world, especially if you want your protocol to run for a long time; over ten years, even a Google or Facebook server can fail several times. So how can we fix this problem? This concern is not new, and people actually have a solution for it: another timing model that has been studied for more than 30 years, partial synchrony. In a partially synchronous network, the message delay between honest nodes can be arbitrarily long; in fact, there is no upper-bound assumption on the message delay at all. So the short-term outage of the Coinbase node would just be treated as a long message delay, rather than making the node itself dishonest. Any protocol working in a partially synchronous network inherently tolerates this kind of short-term outage, short-term offline period, or network partition. So why don't we just switch to a partially synchronous protocol? Because if we want this nice partition-tolerance property of the partially synchronous network, we have to give up resilience. The impossibility result for partially synchronous networks says that no protocol working under partial synchrony can tolerate one third or more of the nodes being corrupt. On the contrary, there are well-known practical and theoretical protocols in synchronous networks that work under just the honest-majority assumption.
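As a quick arithmetic sketch of the two classical resilience bounds just mentioned (the helper is my own, simply restating the thresholds, not code from any protocol):

```python
def max_corruptions(n, model):
    """Largest number of corrupt nodes tolerable under each classical model:
    strictly fewer than n/3 under partial synchrony (the classical
    impossibility bound), strictly fewer than n/2 (honest majority)
    under synchrony."""
    if model == "partial_synchrony":
        return (n - 1) // 3
    if model == "synchrony":
        return (n - 1) // 2
    raise ValueError(f"unknown model: {model}")

# For n = 100 nodes: 33 corruptions under partial synchrony,
# 49 under synchrony.
```

The gap between these two numbers is exactly the resilience we would give up by switching wholesale to partial synchrony.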
And furthermore, if you don't care about the running time or the message complexity, you can even have protocols in a synchronous network that tolerate an arbitrary number of corruptions. So a natural question comes up here: can we achieve the best of both worlds? Can we have both resilience and partition tolerance? At first glance, the answer seems to be no, it's impossible, because to circumvent the impossibility result of the partially synchronous network, we have to critically rely on the timing assumption of the synchronous network. But if we think about this statement a little bit more, the question actually becomes: who says that the synchrony of a network must be a binary choice? Who says that a network must be either fully synchronous or not synchronous at all? Can we find some way to describe the network connection status more accurately, and find some place in between to take advantage of both worlds? To be more specific about the question we want to answer, let's first take a look at a network in the real world. In a real network, there are actually three kinds of nodes. There are some green nodes; they are honest and their network connection is very good, so the messages between them can be delivered within Delta rounds. There are also some gray nodes; they are also honest and want to follow the protocol description, but their network connection is unstable: the message delay for them can be long, or they might even lose messages. And there are some corrupt nodes, whose behavior can be arbitrary. One important thing I want to point out here is that any honest node might, at some point in time, experience an unstable connection or long message delays, and can recover from the bad connection later to be online again. So honest nodes can actually move between the green and gray sets. Now let's look at the two classical timing models in this picture. In the classical synchronous network, the protocol only cares about those nodes who are forever green.
If any honest node ever drops offline and joins the gray set, it is considered a corrupt node and no consensus is guaranteed for it. And in the classical partially synchronous network, the protocol does not take advantage of the good connection between the green nodes, so it cannot tolerate one third or more corruption. So what we want to do is build a new model, to see if we can achieve consensus for both green and gray nodes while still leveraging the good connection between green nodes to get better resilience than one third. To build this new model, the first step is to quantify the network connection status. Here we introduce a new notion to describe this green set: we call it the honest and online set. As I mentioned before, the green set can change from round to round, so we use O with subscript r to denote the honest and online set in round r. Now we have a notation to describe it, but we still need to be clear about what we actually mean by saying a node is online. When we talk about a node being online or offline, we are actually talking about the ability of the node to send or receive messages within bounded time. So here comes our formal assumption about this honest and online set: if some node is online in round r and it multicasts some message in this round, then this message will reach every node that is online in some round t, for t greater than or equal to r plus Delta, as soon as that node is online. So even if the first node drops offline right after it sends the message, nodes that come online later are guaranteed to receive it within bounded time. A natural question here is: if we assume the honest and online set is large enough, what can we achieve? Can we get better results than in the classical synchronous and classical partially synchronous models? So here comes our new model. Note that the message delivery of the network is controlled by the adversary.
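Here is a small Python sketch of this delivery assumption, with hypothetical class and method names of my own: a message multicast in round r is buffered by the network and handed to each node as soon as that node is online in some round t greater than or equal to r plus Delta, even if the sender went offline immediately after sending.

```python
DELTA = 2  # assumed delivery bound, in rounds

class Network:
    def __init__(self):
        self.sent = []   # list of (send_round, message)
        self.inbox = {}  # node -> list of delivered messages

    def multicast(self, round_sent, msg):
        """Record a multicast; the sender may go offline right afterwards."""
        self.sent.append((round_sent, msg))

    def tick(self, round_now, online_nodes):
        """Deliver every message whose deadline (send round + DELTA) has
        passed to each node that is online in this round."""
        for node in online_nodes:
            box = self.inbox.setdefault(node, [])
            for r, msg in self.sent:
                if round_now >= r + DELTA and msg not in box:
                    box.append(msg)

net = Network()
net.multicast(1, "notarization")   # sender drops offline right after round 1
net.tick(3, online_nodes={"A"})    # A is online in round 3 >= 1 + DELTA
net.tick(5, online_nodes={"B"})    # B was offline, comes back in round 5
# Both A and B receive the notarization, despite B's temporary outage.
```

The point of the sketch is the last two lines: delivery is tied to when the *receiver* is online, not to whether the sender stayed up.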
So here we say an adversary respects chi-weak synchrony if, in every round, the size of the honest and online set is at least the floor of chi times N, plus one. I want to emphasize again that in this model we do not require any specific single node to be forever online; a node can go offline and come back online. The only assumption is that at any point in time this set is large enough. So what can we achieve with this new model? Our answer consists of two major parts. If chi is too small, say if the honest and online set is not larger than half of the nodes, then it is impossible to have a protocol that reaches consensus for both the green and gray sets. The impossibility proof is intuitive: for the sake of contradiction, if we assume the existence of such a protocol, we can easily force two small partitions of the network to reach agreement on different outputs within bounded time. But on the other side, if chi is large enough, larger than one half, then the answer is yes, and we can actually fix the protocol I described earlier to make it guarantee consensus for both green and gray nodes. Because one half is the best we can achieve, we call this property best-possible partition tolerance. So now I'm going to give a fix to the previous protocol to make it best-possible partition tolerant. The protocol consists of two routines, the fast path and the slow chain; to make the whole protocol partition tolerant, we need to make both parts best-possible partition tolerant. The fix to the slow chain is a little more complicated, so as time is limited today, I'm not going to talk about the details. But the high-level idea is that it makes use of best-possible partition tolerant Byzantine agreement as a building block. Although I'm not going into the details, I want to point out that it is technically non-trivial, not only because we need to fix the construction, but also because we need to rethink how to define the security properties in the new timing model.
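The chi-weak synchrony condition itself is a one-line check; here is a sketch with hypothetical helper names, where `online_sets` maps each round to that round's honest-and-online set:

```python
import math

def respects_chi_weak_synchrony(online_sets, chi, n):
    """True iff in every round the honest-and-online set has size at
    least floor(chi * n) + 1, i.e. strictly more than chi * n nodes."""
    threshold = math.floor(chi * n) + 1
    return all(len(s) >= threshold for s in online_sets.values())

# n = 10 nodes, chi = 1/2: every round needs at least 6 honest-online nodes.
# Note that DIFFERENT nodes may make up the set in different rounds.
rounds_ok = {1: {0, 1, 2, 3, 4, 5}, 2: {2, 3, 4, 5, 6, 7}}
```

Notice that `rounds_ok` satisfies the condition even though no single node is online in both rounds; that is exactly the relaxation over classical synchrony.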
For example, we cannot ask an offline node to output something while it is still offline and without enough evidence. So I will leave the fix to the slow chain to the paper, and here I will only talk about the easier part, how to fix the fast path. The fast path is one round of voting. Now let's assume that an honest node will only vote for a new transaction after all of the previous transactions have been notarized. If we look at the sequence of notarized transactions, then instead of confirming all of the notarized transactions, we now chop off the last notarized transaction and confirm only the prefix. And when falling back to the slow chain, we still post all of the notarized transactions onto the slow chain. So why does this simple fix work for the fast path? The high-level idea is that if we assume at any point in time more than half of the nodes are honest and online, then this set of honest and online nodes must intersect the quorum who voted for the last transaction. Because nodes only vote for the last transaction after they have seen the notarizations for the whole prefix, and they are online when falling back to the slow chain, the nodes in this intersection guarantee that the transactions in the prefix will appear on the slow chain. So even if the Coinbase node drops offline, as long as it has seen the notarization for the last transaction, it is guaranteed that all of the previous transactions will be posted onto the slow chain by some other online node. So this is the fix to the fast path, and this is all I want to say about the protocol. Finally, I want to mention two observations we made while investigating the classical synchronous model. The first is that any protocol that is best-possible partition tolerant is also secure in the classical synchronous model under the honest-majority assumption, but the reverse is not true.
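The fast-path fix itself is tiny; a sketch with hypothetical function names of my own, just to pin down the two rules:

```python
def confirmed_log(notarized):
    """Fixed fast-path confirmation rule: confirm only the prefix of the
    notarized sequence, chopping off the last notarized transaction."""
    return notarized[:-1]

def fallback_post(notarized):
    """On fallback, a node still posts ALL notarized transactions it has
    seen onto the slow chain, including the chopped-off last one."""
    return list(notarized)

txs = ["tx1", "tx2", "tx3"]
# A node that has seen the notarization for tx3 confirms only tx1 and tx2.
# The voters for tx3, who must have seen tx1 and tx2 notarized, carry that
# prefix onto the slow chain even if this node goes offline.
```

The asymmetry between the two functions is the whole fix: what you *confirm* is one step behind what you *report*, so anything you confirmed is already witnessed by a quorum that overlaps the honest-and-online set.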
Actually, none of the existing protocols we investigated is best-possible partition tolerant. So this new notion, or new property, is a strict refinement of classical synchrony with better robustness and partition tolerance. The second observation is that the classical synchronous model can be somewhat misleading, because in the classical synchronous model, people tend to think that a protocol that tolerates more corruptions, say one that works with a dishonest majority, is strictly more robust than a protocol that tolerates fewer corruptions, say one that only works with an honest majority. But as I showed with the impossibility result, if we want to tolerate a dishonest majority, we inherently give up the partition-tolerance property. So it is actually a trade-off between these two properties, rather than one side being strictly better than the other. This is all I want to say about our work today; the details of the constructions, the proofs, and more results, such as an MPC protocol that works in this new timing model, can be found in our paper. Thank you very much. We have time for questions. There are microphones on both aisles; please step up to them to be heard. We appear to have no questions at this time. Let's thank the speaker again. Thank you.