I'm going to shift us into Q&A. The first question is: how sensitive is the scheme to VDF ASICs, and as a consequence, how practical is it today? The follow-up comment is: intuitively, costless simulation from genesis still won't be instantaneous or costless, so how slow is slow enough? So in our analysis we assume that all the VDFs have the same speed; that is, we ignore the variability in the speed of the VDF computation. But in practice different VDFs have different speeds, and that can degrade security. We can account for that factor in our analysis, and it will show up as the amplification factor of the adversary being much higher in that case. So right now, as the analysis stands, it is not refined enough to take this variability in VDF speeds into consideration.

I wonder how the multi-leader protocols you mentioned fare in the order-fairness context, especially if one uses hash-based partitions of clients' requests across leaders. Any insights? So in general, even if you have a completely leaderless protocol, or you choose some unpredictable random ordering of transactions, there's still the potential that the ordering is unfair. To give a very brief data point: if you are trying to get a transaction to the top of the queue, you can just flood the network with a lot of copies of it, and even if you then choose a random ordering, from multiple leaders or even from everyone, with high probability one of the flooded transactions will occur early in the block, which can break the fairness guarantees you are trying to provide. So even if you have multiple leaders proposing orderings and you combine them very nicely, you still don't get any nice fairness properties.
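To make the flooding point concrete, here is a small illustrative calculation (not from the talk; the parameters are made up): if an attacker injects k copies of its transaction among T honest ones and the block order is a uniformly random permutation, the chance that at least one copy lands in the first m slots is already close to 1 for modest k.

```python
# Illustrative sketch: random ordering alone does not resist flooding.
# Assumptions: T honest transactions, k attacker copies, uniformly random block order.
# P(at least one attacker copy in the first m slots) = 1 - C(T, m) / C(T + k, m).
from math import comb

def p_attacker_in_top(T: int, k: int, m: int) -> float:
    """Probability that some flooded transaction appears among the first m slots."""
    return 1 - comb(T, m) / comb(T + k, m)

if __name__ == "__main__":
    # 100 honest transactions, 50 flooded copies, "top of the block" = first 10 slots
    print(f"{p_attacker_in_top(100, 50, 10):.3f}")  # roughly 0.99
```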
A follow-up question in the same thread: when the number of nodes N increases from 5 to larger values, how is the latency expected to scale? I'm especially curious because of the use of SNARKs to generate the fair-ordering proofs; SNARKs can have large proving times if the underlying circuit increases in complexity. So that's a good point. We actually did not implement the SNARKs; as far as I'm aware, there isn't a nice library for doing graph-based computation inside SNARKs, and this is a fairly intense graph computation. So what we do instead is just send all of the orderings from all of the nodes, and the leader submits them; not in an N-squared way, but in an N-times-T way, where T is the number of transactions. It's similar to how a variant of HotStuff, instead of sending aggregated signatures, concatenates the signatures together and sends them. So we basically compare against the N-squared version of HotStuff rather than the linear version, and those are the numbers we got for five nodes; they should scale pretty similarly to HotStuff for more nodes as well. If and when we have much nicer general-purpose SNARKs for the kind of computation we want, we can take a look at how much more efficient that can be.

How hard would it be to extend Wendy to non-PBFT protocols to reduce communication and validation overhead? Yeah, I think it really depends on the protocol. Our scheme relies on the two observations we made: that there is some common information we can use, and that what we're trying to encode is not too far off; it's a bounded value. But I can imagine that, depending on the protocol, if there is some common information that's easy to encode, we could use the no-commit proofs.

What is the cost for the leader to generate a no-commit proof? So basically the leader receives n minus f, i.e., 2f plus 1, of these signatures; it has to verify those signatures and then aggregate them together. So the cost for the leader is verifying these multi-signatures and then aggregating them before sending the result.

Is there a way to reduce the aggregation cost for the leader, who will inevitably be the bottleneck? Yeah, I think some of the other papers we talked about today address this; there are multiple different types of collectors you could use, or tree-based approaches. Each one has different trade-offs versus having a single leader collect and aggregate versus having multiple nodes do it.

Super tiny question, but why was n equal to 5 in the experiments? Is it 4f plus 1 or 3f plus 1? Yeah, it is n equal to 4f plus 1. I think that's what you need for these two things. There is a conjecture that you can do 3f plus 1, even with the strong fairness property that we have. Yeah, I've seen something similar, not related to fairness but to skipping timestamps, and you usually also need 4f plus 1 there, so it makes sense.

Based on the experiments I did on the HotStuff-style protocols we were implementing, I found that with BLS signatures it's fast to aggregate signatures: the verification cost at the nodes is high, but the cost of aggregation is quite low. So what we did is reduce the verification cost at the receivers. During the view change, instead of each node verifying n QCs, or n aggregated signatures, they just have to verify two aggregated signatures. And you don't have to do this with BLS; you can do it with any other signature scheme, but then the cost of verification will be linear, and the problem will be the message complexity, which you cannot reduce. What we observe is that during the view change, most of the cost we incur is due to the verification of messages rather than the sending of messages, because the quadratic message cost during the view change is just below 2% of the overall block proposal that you send in the case of a blockchain. So until you have thousands of nodes, we should not worry about the message complexity during the view change; we should be more worried about the computation complexity. That's what I learned from the experiments.
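A rough cost model of that view-change trade-off, with made-up timings (the per-operation costs and function names below are illustrative assumptions, not the speaker's measurements): per replica, verifying n separate QCs costs a linear number of pairings, while verifying two aggregated signatures stays constant.

```python
# Illustrative cost model (assumed numbers, not measurements from the talk):
# per-replica verification work during a view change when new-view messages carry
# n separate QCs versus two aggregated BLS signatures.

PAIRING_MS = 1.5  # assumed cost of one pairing operation; depends on curve and library

def verify_n_qcs(n: int) -> float:
    """Each replica checks n QCs; one BLS (multi-)signature check is ~2 pairings."""
    return n * 2 * PAIRING_MS

def verify_two_aggregates() -> float:
    """Each replica checks only two aggregated signatures, ~2 pairings each."""
    return 2 * 2 * PAIRING_MS

if __name__ == "__main__":
    for n in (4, 16, 64, 256):
        print(f"n={n:>3}: per-QC {verify_n_qcs(n):7.1f} ms vs aggregated {verify_two_aggregates():.1f} ms")
```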
And the second point I wanted to mention: I'm not sure if you have taken a look at Hermes BFT for blockchains. They also use a technique like the no-commit proof. It's not exactly a no-commit proof, but during the view change the next primary has to collect some evidence that a specific request has not been committed before proposing the next proposal; they just don't use the encoding that you have. So those are my comments; I'm not sure if you have looked at that paper and can point out in more detail how your work differs from the no-commit proofs that Hermes BFT uses.

Yeah, for sure, thanks for your comments. Going to your first point: I actually agree; from our experience we didn't really worry much about the aggregation cost either, because that was generally cheaper for us compared to verification. But to your point, if you try to use an aggregate signature scheme naively, just by aggregating the QCs together, the resulting aggregated signature, in the experiment I was showing, is much slower to verify, because you have to do a linear number of pairing operations, which are generally the bottleneck. In our scheme it's still just one signature, and the actual verification cost of two pairings was much cheaper. And we were really focused on the HotStuff definition of linearity, so we really wanted to avoid sending a quadratic number of messages. On your second point, about the other BFT protocol, Hermes, I haven't looked at it personally, but I'd say the main difference I envision here is the aggregate signature scheme itself: having this very special aggregate signature scheme that is built specifically for the consensus protocol and has different notions of unforgeability. That is where I'd say the big differences are, but I'd have to look at the paper in more depth.

Vincent, a couple of questions for you. The first: how portable is your formal verification method to other BFT protocols? And then, a couple of questions later: how practical are liveness verifications? So, on Marco's point about the portability of this approach, it's true that we cannot apply it to all Byzantine consensus algorithms, unfortunately. Part of the reason is that, as far as I know, the model checker we're using requires binary inputs, which means that you can only apply it to binary consensus algorithms; you need a reduction of your multi-value consensus algorithm to a binary one that you can then model check. This is the first limitation. Another limitation is that we make some assumptions in order to show liveness, and these only apply to partially synchronous algorithms; we don't have a solution for all types of algorithms. To answer the second question, about why liveness is hard: when you are in a situation where you don't assume synchrony, the interleavings of the steps taken by all the processes pretty much explode the number of states you have to explore in order to make sure that every path in the execution ends up terminating, which makes the problem a bit subtle, and that's probably why liveness is hard to prove. But for other types of algorithms that are synchronous, perhaps there is a simpler way to proceed.
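To give a sense of that state explosion, here is a tiny back-of-the-envelope calculation (illustrative, not from the talk): the number of distinct interleavings of p asynchronous processes, each taking s steps, is the multinomial coefficient (p*s)! / (s!)^p, which blows up almost immediately.

```python
# Illustrative: why enumerating all interleavings for liveness is infeasible without
# the abstractions a model checker provides.
from math import factorial

def interleavings(p: int, s: int) -> int:
    """Number of distinct interleavings of p processes, each taking s sequential steps."""
    return factorial(p * s) // factorial(s) ** p

if __name__ == "__main__":
    for p, s in [(2, 3), (4, 3), (4, 5), (7, 5)]:
        print(f"p={p}, s={s}: {interleavings(p, s):,} interleavings")
```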
Accepting your conclusion that it is now feasible to formally verify blockchain consensus protocols, how hopeful are you that this practice will become widespread in the next one to three years? For this kind of critical application, if you really want massive adoption of blockchain systems in crucial applications where security matters a lot, I think more and more work will have to be done. And knowing that there are tools as advanced as the Byzantine Model Checker that are usable even by non-experts in formal verification, I think it opens up a lot of possibilities for critical applications.

Thank you again to all of our session speakers, and I'm going to pass it on to our host for the next session.