My name is Jun Wan, and today I'm going to introduce an expected constant-round Byzantine broadcast protocol that works even under dishonest majority. This is joint work with Hanshen, Elaine, and Srini.

Byzantine broadcast was proposed by Lamport et al. in 1982. We have a set of users who want to reach consensus. One of them is the sender, who has an input bit. We say that consistency is satisfied if all honest users output the same bit. Similarly, for validity, we want to make sure that when the sender is honest, all honest users output the sender's input bit.

Under a synchronous setting, Dolev and Strong have shown that no deterministic protocol can achieve Byzantine broadcast in fewer than f + 1 rounds, where f is the number of corrupted users. But if we consider randomized protocols and expected round complexity, then there are some beautiful results. Under honest majority, expected constant-round Byzantine broadcast protocols exist that work even under a strongly adaptive adversary. For dishonest majority, however, progress has been rather slow. Garay et al. first proposed a solution with O((2f - n)^2) rounds. This was later improved by Fitzi et al. to O(2f - n). However, both of these solutions are interesting only when f is very close to n/2. Even when f is just 51% of the population, their round complexity is linear. Whether a sublinear-round protocol exists remained a huge open problem until recently, when Chan et al. proposed a polylogarithmic solution. This was a major breakthrough, but it still does not match the expected constant-round protocols we have seen under honest majority.

So in this paper, we propose a Byzantine broadcast protocol with expected O((n/h)^2) rounds, where h is the number of honest users. Our protocol works under a synchronous setting, assuming a trusted cryptographic setup. We consider a weakly adaptive adversary. This means that the adversary can corrupt any user at any time; however, it cannot erase messages already sent in the round of corruption.

To achieve this result, a lot of previous tricks, like electing a committee, which is used in Chan et al.'s work, or threshold voting, which is commonly used under honest majority, no longer work. So we had to build new techniques from the ground up. One of the techniques we use is called the trust graph. As the name implies, we use the trust graph to record the trust relationship between different users. In the trust graph, the vertices are the users, and an edge (u, v) exists if and only if u and v mutually trust each other; in other words, they think of each other as honest users. Of course, users will have inconsistent views of who trusts whom, so we let each user maintain its own trust graph: an edge (u, v) belongs to user w's trust graph if and only if w thinks that u and v trust each other.

There are some challenges here. The first is that not all misbehavior leaves behind cryptographic evidence. If a user equivocates by signing two contradictory messages, then the two signatures are proof that this user is corrupt. But suppose a user u accuses another user v of simply not sending any message. There are two possibilities here. The first is that u is corrupt, intentionally dropped the message, and is trying to frame v. The second is that u is honest and right: v is corrupt and did not send anything. In this situation we cannot tell which of the two users is corrupt just from the accusation; what we can observe, however, is that at least one of u and v must be corrupt.
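To make the two cases concrete, here is a minimal sketch, in Python, of the difference between transferable evidence and a bare accusation. This is my illustration, not code from the paper; verify_sig is a hypothetical signature-checking helper.

```python
# Minimal sketch (not from the paper). `verify_sig(pk, msg, sig)` is a
# hypothetical helper that checks a signature against a public key.

def is_equivocation_proof(pk_v, msg1, sig1, msg2, sig2):
    """Two validly signed, contradictory messages from v form
    transferable cryptographic proof that v is corrupt."""
    return (msg1 != msg2
            and verify_sig(pk_v, msg1, sig1)
            and verify_sig(pk_v, msg2, sig2))

def learn_from_silence_accusation(u, v):
    """u claims v sent nothing. There is no proof either way: u may be
    framing v, or v may really have stayed silent. All we can conclude
    is that at least one of {u, v} is corrupt."""
    return {u, v}  # a set guaranteed to contain a corrupt user
```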
In some sense, our trust graph stores exactly this type of accusation, and when enough accusations are gathered, we can hopefully tell whether a user is corrupt or not. We allow a user to complain about another user without providing evidence; the complaint itself is evidence that at least one of the two is corrupt. One important property we want the trust graph to maintain is that honest users never complain about each other, so that they always remain fully connected in anyone's trust graph. To achieve this, we do not allow a user to express distrust about an edge (v, w) that does not involve itself, and we make sure that when we actually implement the protocol on top of the trust graph, honest users never complain about each other.

The second challenge we face is that users do not have a consistent view of the trust graph. In fact, making all users share the same trust graph is as hard as solving Byzantine broadcast itself. Our solution is to let honest users always share their knowledge with others. Specifically, if an honest user receives a distrust message or a piece of distrust evidence in round r, it will always relay it to all other users in the next round. This guarantees that for any honest nodes u and v, u's trust graph in round r + 1 is always a subgraph of v's trust graph in round r.

Given this construction of the trust graph, let's see how we can use it to actually detect and constrain corrupt users. First of all, if a user is complained about by at least f + 1 other users, then at least one of the complaints must come from an honest user. Therefore this user must be corrupt, and we can simply remove it from the trust graph. In fact, we can use an even stronger check. We know that honest users always remain fully connected; in other words, they form a clique of size n - f = h. So if a node is not in any clique of size h, it must be corrupt. Unfortunately, checking whether a node is in a clique of a certain size is NP-hard, so we have to relax this condition a little. Luckily, we found something that is comparably strong but easier to check. Since the honest users always form a clique of size h, for any two honest nodes the intersection of their neighbor sets must contain at least h elements. Therefore, for any two nodes u and v, we can check whether the intersection of u's and v's neighbor sets has fewer than h elements; if so, at least one of u and v must be corrupt.

So throughout the protocol, we maintain the trust graph with three checks. We remove an edge if one of its endpoints complains about the other. We remove an edge if its endpoints fail the neighbor-set check. Finally, we remove a node if its degree is less than h - 1. The important result is that if we maintain the last two checks in particular, then the diameter of any honest user's trust graph is upper bounded by roughly 2n/h, which is a constant when h is linear in n. This matters because, in the Byzantine broadcast protocol later, we relate the round complexity to the diameter of the trust graph.

In conclusion, our trust graph maintenance scheme satisfies three important properties. First, all honest users always remain fully connected. Second, an honest user's trust graph always has a small diameter. And third, any honest user's trust graph in round r + 1 must be a subgraph of any honest user's trust graph in round r.
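Putting the three checks together, here is a hedged sketch of how one user's local trust graph might be maintained. The adjacency-set representation and the use of closed neighborhoods (a node counts among its own neighbors, so two honest nodes share at least h common neighbors) are assumptions of this sketch, not the paper's exact formulation.

```python
# A minimal sketch of the three trust-graph checks. The local trust
# graph G is a dict mapping each node to its set of neighbors; h is
# the number of honest users.

def remove_edge(G, u, v):
    G[u].discard(v)
    G[v].discard(u)

def remove_node(G, u):
    for v in G.pop(u):
        G[v].discard(u)

def maintain(G, complaints, h):
    # Check 1: a complaint about the edge (u, v) removes that edge.
    for (u, v) in complaints:
        if u in G and v in G:
            remove_edge(G, u, v)
    changed = True
    while changed:  # the checks can cascade, so iterate to a fixpoint
        changed = False
        # Check 2: honest users form a clique of size h, so any two
        # honest nodes share at least h common (closed) neighbors.
        # If |N(u) & N(v)| < h, one of u, v is corrupt: drop the edge.
        for u in list(G):
            for v in list(G.get(u, ())):
                if v in G.get(u, ()) and \
                        len((G[u] | {u}) & (G[v] | {v})) < h:
                    remove_edge(G, u, v)
                    changed = True
        # Check 3: every honest node keeps its h - 1 edges to the
        # other honest users, so degree < h - 1 implies corruption.
        for u in list(G):
            if u in G and len(G[u]) < h - 1:
                remove_node(G, u)
                changed = True
```

Note that removing one edge can shrink other neighbor-set intersections and degrees, which is why the checks are reapplied until nothing changes.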
Now let's see how we can use this trust graph, which brings us to our second new technique: the TrustCast protocol. TrustCast is a weaker primitive than consensus. It is similar to reliable broadcast or gradecast in that it does not achieve consensus by itself, but we can bootstrap consensus from such weaker primitives. On the other hand, TrustCast is quite different from existing primitives like gradecast or reliable broadcast, because it is tightly coupled with the trust graph notion we just discussed.

In TrustCast, we have a sender who wants to send a message to all other users, and we guarantee that if an honest user u has not received a message from the sender by the end of the TrustCast protocol, then the sender must have been removed from u's trust graph. We would like to stress the difference between distrusting a node and removing a node from the trust graph. In the first case, if a user u distrusts or complains about another node v, this means that u knows v is corrupt, but u may not have proof to convince others. The only thing u can do is complain about v, so that other users learn that at least one of u and v must be corrupt. In the second case, if a user u removes v from its trust graph, this means that u has solid proof that v is corrupt, for example, a set of f + 1 complaints about v. This is why our TrustCast protocol is so strong: if an honest user removes the sender from its trust graph, it has solid evidence of the sender being corrupt, and it can forward this evidence to convince all other users as well.

To achieve this, we will show that if an honest user u has not received from the sender by the i-th round, then the distance between u and the sender on u's trust graph must be larger than i. Combined with the fact that the diameter of any honest user's trust graph is at most d, which is roughly 2n/h, this implies that at the end of the d-th round, if an honest user still has not received anything from the sender, then the distance between u and the sender would have to exceed the diameter, which is impossible. So the sender must have been removed from u's trust graph.

We have covered the properties we want from TrustCast; now let's see how to make them happen. The core idea is actually very simple: if an honest user u does not receive from the sender by the i-th round, then u distrusts every user v whose distance from the sender on u's trust graph is less than i. After this step, all of u's remaining neighbors are at distance at least i from the sender, so the distance between u and the sender must be larger than i. The sketch below captures this rule, and the example that follows illustrates why it works.
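In code, the receiver side of this rule might look like the following. This is a hedged sketch: complain(v) is a placeholder of mine for broadcasting a distrust message about v, and distances are ordinary BFS distances on the local trust graph.

```python
# A hedged sketch of the receiver-side TrustCast rule; `complain` is
# a placeholder for broadcasting a distrust message.

from collections import deque

def dist(G, a, b):
    """BFS distance from a to b on the trust graph G."""
    depth = {a: 0}
    frontier = deque([a])
    while frontier:
        x = frontier.popleft()
        if x == b:
            return depth[x]
        for y in G.get(x, set()):
            if y not in depth:
                depth[y] = depth[x] + 1
                frontier.append(y)
    return float("inf")  # unreachable, e.g. the node was removed

def end_of_round(G, me, sender, i, received):
    """If round i ends without the sender's message, distrust every
    neighbor at distance < i from the sender. Every remaining neighbor
    then sits at distance >= i, so dist(me, sender) becomes > i."""
    if received:
        return
    for v in list(G[me]):
        if dist(G, sender, v) < i:
            complain(v)  # announce that me and v are now at odds
            G[me].discard(v)
            G[v].discard(me)
```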
In our example, we have a sender S who wants to share a message with four users: A, B, C, and D. We consider the perspective of an honest user A. At the beginning, A assumes everyone trusts each other, so A's trust graph is the complete graph. In round one, A does not receive any message from the sender. From A's perspective, the sender must be corrupt, because otherwise it would have sent the message M to A. This is exactly what our protocol does: A distrusts every user v whose distance from S equals the round number minus one, in this case zero. So A simply distrusts the sender S. Now look at the distance between A and S on A's trust graph: it is equal to 2. This matches the property we want: if A does not receive from the sender by the i-th round, then the distance between A and the sender is at least i + 1.

In the second round, A receives nothing from B or C, but it does receive a complaint from D saying that the sender did not send anything in the first round. Our protocol deals with this complaint first: A removes the edge (S, D), because apparently D no longer trusts the sender S. Notice that A still has not received any message from the sender at this point, so by our protocol, A distrusts every user v whose distance from the sender equals 1. Looking at the graph, this means A distrusts nodes B and C. After removing those two edges, A is connected only to D in its own trust graph.

This part is quite counterintuitive, because as we mentioned before, an honest user only ever wants to distrust corrupt users; we don't want honest users to distrust each other. But here, B and C did nothing overtly wrong, such as equivocating; they simply did not send any message. How can A be certain that B and C are corrupt? Think about what happens in round one. If B or C had received the message from the sender, then, following the protocol, they would have relayed it to A. If B and C had not received it, they would have complained that the sender sent nothing, just like D did. In either case, they should have sent something to A. Therefore, if B and C sent nothing in round two, they must be corrupt.

This is the full TrustCast protocol, and as you can see, it is very short. The most important intuition is exactly what we described, but there are a few more details. For example, the TrustCast protocol also takes as input a verify function, which checks whether the sender's message is valid. When we actually run the Byzantine broadcast protocol, we invoke TrustCast with different verify functions, but I don't want to get into the details here. If you are interested in the details of the TrustCast protocol and why it works, please check out our paper online.

In conclusion, the TrustCast protocol guarantees that by its end, an honest user has either received a message from the sender or removed the sender from its trust graph. The round complexity of TrustCast equals the upper bound on the diameter of the trust graph, which is roughly 2n/h. This is constant as long as the number of honest users h is linear in n.
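Tracing the example above with the earlier sketch shows the mechanics (purely illustrative; the distrust broadcast is stubbed out):

```python
# Tracing A's trust graph through the example (illustration only).
complain = lambda v: None  # stub: no network in this local trace

nodes = ["S", "A", "B", "C", "D"]
G = {u: {v for v in nodes if v != u} for u in nodes}  # complete graph

# Round 1: A hears nothing from S, so it distrusts every node at
# distance 0 from S -- that is, S itself.
end_of_round(G, "A", "S", 1, received=False)
assert dist(G, "A", "S") == 2   # A now reaches S only via B, C, or D

# Round 2: D's complaint first removes the edge (S, D)...
G["S"].discard("D"); G["D"].discard("S")
# ...then A, still without the message, distrusts B and C, the two
# nodes at distance 1 from S.
end_of_round(G, "A", "S", 2, received=False)
assert dist(G, "A", "S") == 3   # only A - D - {B, C} - S remains
```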
Finally, let's see how we get Byzantine broadcast from TrustCast. At a high level, we use the classic propose-vote-commit structure, which has appeared in a lot of previous work, for example, Abraham et al.'s. In every epoch, we elect a new leader. In the propose step, the leader uses the TrustCast protocol to share its proposal, and then the users TrustCast their votes, which basically just relay the leader's proposal. Finally, if a user gathers enough votes on a particular message, it constructs what we call commit evidence and TrustCasts that evidence. There are many details here, as we have to design a different verify function for each step of the epoch. Without getting into those details, let me briefly explain why the trust graph and the TrustCast protocol matter in this protocol. First, the Byzantine broadcast protocol is designed so that each user only cares about reaching agreement with the nodes in its own trust graph. Consequently, for the corrupt users to have any impact on honest users, they must remain in some honest user's trust graph, and to do so they must keep sending messages, which is exactly what the TrustCast protocol guarantees. Second, the trust graph monotonicity property really helps. We tie the proposals, the votes, and the commit evidence to the trust graph, and the monotonicity property lets us show that if any proposal, vote, or commit evidence is valid to some honest user in some round r, then it remains valid to all other honest users in all later rounds. Working with this property, we can show that once an honest user commits, future leaders cannot propose a different bit. This means that all other honest users will eventually receive the commit evidence and follow the committed honest user's decision.

In conclusion, our Byzantine broadcast protocol runs three TrustCast instances in each epoch, which is Theta(n/h) rounds per epoch; again, recall that n is the total number of users and h is the number of honest users. We terminate whenever we encounter an honest leader, which happens within an expected n/h epochs. So the final round complexity is Theta(n^2/h^2) in expectation, which is constant as long as the number of honest users h is linear in n.
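A heavily simplified sketch of one epoch in this structure is below. All of the helper names (the TrustCast wrapper, the leader election, the verify functions, and the evidence assembly) are stand-ins of mine for this static-leader version, not the paper's exact interface; the adaptive-adversary modification discussed next changes the propose step.

```python
# A heavily simplified epoch of the propose-vote-commit structure
# (stand-in names, not the paper's exact interface). Each epoch runs
# three TrustCasts, i.e. Theta(n/h) rounds; a leader is honest with
# probability h/n, so termination takes n/h epochs in expectation.

def run_epoch(e, me, trustcast, elect_leader, assemble_commit_evidence,
              valid_proposal, valid_vote, valid_commit):
    leader = elect_leader(e)

    # Propose: the leader TrustCasts its proposal. Every honest user
    # either receives it or removes the leader from its trust graph.
    proposal = trustcast(sender=leader, verify=valid_proposal)

    # Vote: each user TrustCasts a vote, essentially relaying the
    # leader's proposal.
    votes = trustcast(sender=me, payload=proposal, verify=valid_vote)

    # Commit: with enough votes on one message, assemble commit
    # evidence and TrustCast it. Once an honest user commits,
    # monotonicity prevents later leaders from proposing another bit.
    evidence = assemble_commit_evidence(votes)
    if evidence is not None:
        return trustcast(sender=me, payload=evidence, verify=valid_commit)
    return None  # no commit this epoch; move on to the next leader
```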
So finally, let me discuss why our Byzantine broadcast protocol also works for an adaptive adversary, specifically a weakly adaptive adversary. It is worth pointing out that everything we have discussed so far works only for a static adversary, because we need to elect a leader, and an adaptive adversary can simply corrupt the leader at the beginning of each epoch. In that case we would need f + 1 epochs before encountering an honest leader, which is basically a disaster. Our solution has two parts. First, we postpone the leader election: we let everyone pretend to be the leader and share its own proposal at the beginning of each epoch. This way, the adversary cannot figure out who the leader is until all the proposals have been delivered. The second part is to make proposals unforgeable even after the leader has been corrupted. The details are complicated and I don't want to go into them here, but roughly, we modify our TrustCast protocol so that users also acknowledge the message back. A proposal then needs to contain the users' acknowledgment signatures as well, which makes it unforgeable even after the leader has been corrupted.

In conclusion, we construct a Byzantine broadcast protocol that works under dishonest majority and a weakly adaptive adversary. The round complexity is expected constant, and the communication complexity for the entire system is O(n^4). In fact, a parallel work has also shown that it is possible to achieve polylogarithmic round complexity under a strongly adaptive adversary. However, the question remains whether it is possible to achieve expected constant round complexity under a strongly adaptive adversary. This problem appears really challenging, and we hope to see it solved in the future.

Finally, we would like to thank Zach, Lin, and the anonymous TCC reviewers for providing so many helpful suggestions. We would also like to thank our awesome shepherd, Ren, for making this paper so much better. And thank you for being interested in our work, and thank you for listening.