Our next speaker is Neil Giridharan from the University of California, Berkeley, who will be discussing "No-Commit Proofs: Defeating Livelock in BFT." Neil? Hello, my name is Neil Giridharan and I'm a second-year PhD student at UC Berkeley, advised by Natasha Crooks. Today I'm going to be talking about BFT, and specifically about how we achieve minimal latency, linear authenticator complexity, and optimistic responsiveness in partially synchronous BFT protocols using a construct we call no-commit proofs. There's a lot to get to, but I promise I'll do my best to keep you awake for the next 15 minutes. This is joint work with Heidi Howard, Ittai Abraham, Natasha Crooks, and Alin Tomescu. So let's talk about the talk. Our work has two main contributions: no-commit proofs, and a new BFT consensus protocol called Wendy, which is enabled by these no-commit proofs. In this talk we're going to focus only on the no-commit proof construction; more details about Wendy can be found in the full paper. The setting I'll be focusing on is the same as for a bunch of the popular leader-based BFT protocols such as PBFT and HotStuff. Namely, we assume 3f+1 replicas, a partially synchronous network with a worst-case network delay parameter called delta, and standard cryptographic assumptions for digital signatures. There are three magic properties that previous partially synchronous BFT protocols fail to accomplish simultaneously, and that we're going to try to get here. The first is latency. Latency is generally undervalued compared to throughput, but it's just as important. It's been shown by companies like Amazon and Google that an extra 100 milliseconds of latency can cost millions of dollars. Additionally, for applications like a transactional database, which cannot use big batches, higher latency can increase transaction contention, leading to a higher abort rate and thus lower throughput.
So minimal latency for our setting is two rounds of communication between the leader and the other replicas. The next magic property we strive for is linear authenticator complexity. By authenticator complexity, we mean the number of signatures or MACs that are received as part of the protocol. BFT protocols are notorious for having quadratic or even cubic authenticator complexity, making them extremely expensive in practice. For newer blockchains that run with hundreds of replicas, this makes linearity especially important for scaling to larger values of f. Finally, we have optimistic responsiveness, the property that commit latency should depend only on the actual network delays rather than on delta, the worst-case delay parameter. This matters especially in wide-area settings, where higher tail latency forces system designers to pick higher values of delta. Now I'm going to go briefly over how most of these partially synchronous BFT protocols work. First, we have a prepare phase, which guards against equivocation: the prepare phase ensures that a Byzantine leader cannot get multiple values committed in the same slot. Next, we have a commit phase, which persists the committed value so that future leaders remember earlier decisions. This feeds into the view change: when there's a lack of progress, an old leader is replaced with a new leader, and this new leader must preserve any committed decisions from earlier leaders. That feeds back into the prepare phase, which repeats the cycle all over again. We're really going to focus on the view change, since that's the source of complexity for all these protocols. The view change is notorious because it must preserve all the decisions committed under old leaders.
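As a quick aside, the 3f+1 replica count mentioned above is what makes quorum certificates meaningful: any two quorums of 2f+1 replicas must overlap in at least one honest replica. A minimal sketch of that standard arithmetic (not from the talk itself):

```python
# Standard quorum arithmetic behind the 3f+1 assumption: any two quorums
# of size 2f+1 out of 3f+1 replicas intersect in at least f+1 replicas,
# so at least one replica in the overlap is honest.
def quorum_overlap(f: int) -> int:
    n = 3 * f + 1        # total replicas
    quorum = 2 * f + 1   # votes needed to form a quorum certificate (QC)
    # Worst case: the two quorums overlap as little as possible.
    return 2 * quorum - n

for f in (1, 2, 64):
    overlap = quorum_overlap(f)
    # Always one more replica than can be faulty, so one honest witness.
    assert overlap == f + 1
```

This overlap is why a QC formed in one view is visible to any later quorum, which the view-change discussion below relies on.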
Furthermore, the view change is the differentiating factor in which of the magic properties we're actually able to achieve. So let's start with how other BFT view changes work. In the PBFT and SBFT-style view change, each replica sends a quorum certificate, also known as a QC, to the new leader. A QC indicates that a proposal could have been committed in a particular view, and it contains an authenticator indicating that a quorum of replicas voted for that proposal. In this example, we have three different QCs for values u, v, and w, for views one, two, and three, respectively. Now, upon receiving a QC from a quorum of replicas, the new leader sends all the QCs it received in the view change to all the other replicas. Having to send all these QCs ties the hands of the leader, as it cannot hide any QC that was sent by a particular replica. The problem, however, is that this view change fails to maintain linearity, since the leader has to send a linear number of authenticators to a linear number of replicas. Next up is the Tendermint and Casper view change. As before, each replica sends the QC it has to the new leader. This time, though, instead of the new leader just waiting for a quorum of messages from the replicas, it waits an additional delta to ensure that it receives QCs from all the honest replicas. This is different from before: previously we were just waiting for a quorum of responses, but now the leader waits this additional delta to guarantee that all of the honest replicas' QCs will be included. Because we're waiting delta to hear from all the honest replicas, if the leader tries to send an old QC with a lower view, we can detect that the leader is malicious and reject its proposal.
So in other words, the replicas can catch the leader if it tries sending a lower QC. Unfortunately, this view change lacks optimistic responsiveness, since the leader has to wait out this delta bound and therefore does not progress at the speed of the actual network. And finally, we have the HotStuff view change. Here, each replica sends the QC it has, for a value it thinks could have been committed, to the new leader. Like PBFT, we're only waiting for a quorum of responses from replicas; we're not waiting the additional delta. But, as in the Casper and Tendermint case, replicas catch a potentially lying leader by rejecting a QC if it's for a different value in a lower view. Because the leader does not wait delta, though, it's not guaranteed to hear from all the honest replicas. So a replica will be able to catch a malicious leader, but sometimes it will be too conservative and reject a proposal thinking the leader was malicious when in fact it was not; it could just be that the leader didn't receive all the QCs from the honest replicas. So this is great: HotStuff achieves optimistic responsiveness and linearity. But are we done now? Well, unfortunately for you and luckily for me, the answer is no. The problem with the HotStuff view change is that we have to add an additional phase for the protocol to be live, because of these potential false positives when guarding against malicious leaders. So can we actually do better? HotStuff came ever so close, but the false positives meant that an extra phase was necessary. Let's take a step back and summarize the background. We pretty much have two main approaches to work with. The first approach is that all QCs are sent, which allows for a latency of two phases but incurs quadratic cost.
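The false-positive problem described above can be made concrete with a tiny sketch of the QC-comparison rule. The names here are illustrative, not taken from any real HotStuff implementation:

```python
# Illustrative sketch of the view-change check described above: a replica
# rejects the new leader's proposal if it locally holds a QC from a
# higher view than the QC the leader is extending.
from dataclasses import dataclass

@dataclass
class QC:
    view: int
    value: str

def accept_proposal(local_qc: QC, leader_qc: QC) -> bool:
    # Safe case: the leader's QC is at least as recent as ours.
    if leader_qc.view >= local_qc.view:
        return True
    # Otherwise reject. Without waiting delta, this can be a false
    # positive: an honest leader that simply never received our QC
    # looks exactly like a malicious one hiding it.
    return False

assert accept_proposal(QC(3, "w"), QC(3, "w"))            # accepted
assert not accept_proposal(QC(3, "w"), QC(2, "v"))        # rejected, possibly wrongly
```

The second assertion is the livelock risk: if such rejections keep happening, no proposal ever commits, which is why HotStuff pays for an extra phase instead.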
The other approach is to send just the highest QC, which is linear but requires three phases. So the key question we ask is whether we can figure out a way to encode all the QCs using only one authenticator, thereby getting linearity. That is precisely the question the no-commit proof answers. Now I'm going to show this no-commit proof construct, which is able to summarize the information from a PBFT-style view change using just one authenticator. This information allows a replica to know whether the leader was being malicious when sending its proposal. They're called no-commit proofs because each one is a proof that a particular QC's value could not have been committed. To do this, we're going to rely on two main observations. Our first observation is that because the prepare phase guards against equivocation, we don't actually care about a QC's particular value, in this case u or v or w. Since at most one QC can form in a given view, the view number itself uniquely identifies the QC. For instance, QCs for both u and u-prime cannot form in view one. So we can just focus on the view of the QC rather than the particular value. Our second observation is that the replicas sending view-change messages are all entering a new view, and this new view is common among them. In this example, all the view changes are for going to view five. Before I show you the actual no-commit proof construction, I'm going to show some strawman approaches. Let's look at a naive way to try to construct these no-commit proofs. In this example, we denote each QC with a different color for clarity. For this strawman, each replica signs its QC's view number, getting signatures sigma one through sigma four.
And we're going to try to aggregate these signatures into one signature. The question here is whether we can aggregate sigma one through sigma four into one signature easily, maybe using something like an aggregate signature scheme. You're probably thinking: let's use multi-signatures, or better yet, aggregate signatures. Unfortunately, there are some obstacles we run into. You see, multi-signatures require the same message to be signed, which we don't have in this case. Aggregate signatures work, but they're expensive, since they require a linear number of pairings for verification. So can we do better? The answer is yes. To do so, we're going to simulate the functionality of an aggregate signature scheme, which works on signatures over different messages, but use multi-signatures under the hood for efficiency. Let's see how we do this with strawman two. Now I'm going to show you the strawman for getting around this problem of having signatures on distinct messages. Namely, each replica is going to generate a secret key for every possible view number the QC could have, and we're going to use observation two to our advantage: we will sign the common view we are entering, in this case five, using the corresponding secret key we generated before. So to encode a QC for view one, for example, we'll use the blue secret key; for a QC for view two, we'll use the green secret key; and so on and so forth. Because we're signing the same message, this common view five, we can actually use BLS multi-signature aggregation. This results in a single signature sigma that's an aggregation of all these signatures, and we're still able to encode the view number of the QC through the particular choice of secret key. So this is all good, right? What's the issue? Well, the problem here is that the QC view numbers can be indefinitely large.
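Strawman two can be sketched as a toy model with no real cryptography: the point is only that every replica signs the same message (the common new view), while the choice of key encodes its QC's view. All names here are illustrative:

```python
# Toy model (no real crypto) of strawman two: each replica holds one
# secret key per possible QC view, and encodes its QC's view by WHICH
# key it uses to sign the common new view. Since everyone signs the
# same message, the signatures would be combinable with a BLS
# multi-signature; here we just record (key_index, message) pairs.
def sign_with_view_key(replica_id: int, qc_view: int, new_view: int) -> dict:
    return {"replica": replica_id, "key_index": qc_view, "message": new_view}

# Four replicas entering view 5, holding QCs from views 1..4:
sigs = [sign_with_view_key(r, qc_view, new_view=5)
        for r, qc_view in enumerate([1, 2, 3, 4])]

# All signatures are over the same message, so they are aggregatable...
assert all(s["message"] == 5 for s in sigs)
# ...and the key index still reveals each QC's view.
assert [s["key_index"] for s in sigs] == [1, 2, 3, 4]
# The catch: key_index is unbounded. A QC from view 1999 needs key 1999,
# and all keys must exist (and be exchanged) at setup time.
```

The final comment is exactly the problem the real construction fixes next.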
And we cannot generate these keys on the fly, because we assume some sort of bootstrapping setup phase at the beginning in which public keys are exchanged and verified. So the scheme is not practical, because we would need to generate and exchange an infinite number of keys in the setup phase. Now let's get to the no-commit proof construction, where we show how to bound the number of keys we generate. The idea is that instead of encoding the view of the QC itself, we encode the difference between v, the common view the replicas are entering, and v_i, the view of the QC itself. For example, suppose the replicas are entering view 2000, so v equals 2000, and we're trying to encode a QC with view 1999, so v_i equals 1999. We take the difference, and the blue key now encodes that difference, which is one. If we had used the strawman from before, we'd have had to encode 1999, which would require generating and exchanging a whole bunch more public keys at the beginning. Going back to our initial example, using the view-difference encoding results in the differences four, three, two, and one, respectively. And that's the crux of the no-commit proof construction. There are more details in the paper, such as how we further optimize the number of keys we have to generate by converting the difference to binary, how we guard against stale QCs, and additional security proofs. Now I'm going to briefly talk about Wendy, our new consensus protocol. Using these no-commit proofs, we construct a new BFT protocol called Wendy, which achieves all three magic properties. The cool thing about these no-commit proofs is that they're only needed when we have the false-positive situation we talked about earlier. Again, more details can be found in the paper.
So now I'm going to briefly talk about the evaluation. We compare our no-commit proof scheme with the state-of-the-art aggregate signature scheme BGLS, which can combine signatures on different messages. From our graph here, we show that our scheme performs much better on verification time: because we're using multi-signatures under the hood on the same message, verification takes only two pairings, which is typically the bottleneck. For higher values of f, this difference in constant versus linear pairings is much more pronounced. So I wasn't lying when I told you that these aggregate signature schemes are very costly: for f equals 64, you can see our scheme takes three milliseconds versus 127. As a brief discussion of our Wendy protocol, here's a latency-throughput graph in the wide area showing that by leveraging these no-commit proofs, we can achieve about 33% lower latency with comparable throughput. Again, more details are in the paper. This is in a wide-area setting with a batch size of 400, as a quick example. So to wrap up, I just want to conclude that we came up with this no-commit proof construct, which summarizes the information in a PBFT view change using just one authenticator. The scheme is much cheaper than the BGLS aggregate signature scheme we compare against, since we leverage multi-signatures. We then use this gadget to construct a new BFT consensus protocol, very similar to HotStuff, that achieves the three magic properties we care about. And yeah, you can check out the paper for more details via this QR code.