We have 15 minutes for questions and answers. The first question is for Christian, from Marco: you mentioned that Stellar is similar, but not quite. How would you capture the qualitative difference? Which properties are guaranteed by asymmetric trust but not by Stellar, and, if applicable, vice versa?

The simple answer is the following. With asymmetric quorum systems as we formalized them, when every node trusts in the same way, we get back the corresponding standard Byzantine quorum system. That means we can run exactly the same protocols as the standard quorum-system protocols, as long as they don't involve too much cryptography. That is what I showed in the example, where just the blue condition was replaced by the red condition. Stellar seems not to have this quality, because the Stellar protocol is apparently different, and I haven't seen a formalization of how Stellar protocols would reduce to the normal protocols, or of how the Stellar model would capture the normal protocols, the ones for symmetric trust that we have in the textbooks.

What if some of the trust assumptions made by some nodes don't hold?

Yeah, that's an important point, because I skipped over that slide. Traditionally we assume that nodes are either correct or faulty: if you are not correct, then you are faulty. But in this world there is a new shade of being correct, namely being correct but naive. Among the correct nodes we can distinguish the wise ones, who made the right trust assumptions, who chose the right friends, and the naive ones, who chose the wrong friends. Unfortunately, protocols cannot guarantee much for naive processes, because they are too easily fooled by the actually faulty processes. This can be qualified very simply.
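The wise/naive distinction described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the talk; the function and node names are invented. The idea is that a correct node is wise when the set of actually faulty nodes is contained in one of the fail-prone sets of its own trust assumption, and naive otherwise.

```python
# Illustrative sketch (hypothetical names, not from the talk).
# A node's trust assumption is a fail-prone system: a list of sets of nodes,
# each set being one possible "who might be faulty" scenario the node accepts.

def is_wise(fail_prone_system: list[set[str]], actual_faulty: set[str]) -> bool:
    """A node is wise iff the real faulty set fits inside some assumed fail-prone set."""
    return any(actual_faulty <= F for F in fail_prone_system)

# Node p assumes at most one of {b, c} may fail; node q naively trusts everyone.
p_assumption = [{"b"}, {"c"}, set()]
q_assumption = [set()]  # q assumes nobody ever fails

faulty = {"b"}  # in this run, b is actually faulty
print(is_wise(p_assumption, faulty))  # True:  p's assumption holds, p is wise
print(is_wise(q_assumption, faulty))  # False: q is correct but naive
```

When all nodes share the same fail-prone system, every correct node is wise and the standard Byzantine quorum-system setting is recovered, which matches the reduction described in the answer above.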
It can be expressed very simply as a mathematical condition: whether or not the actual set of failures is contained in the fail-prone system that a node assumes.

What is the simplest to implement, if not the most optimal, of all the Dumbo versions?

One interesting point I would like to make is that essentially most asynchronous protocols are easy to implement, in the sense that there are no manual timeouts. Well, I accept that BDT has some timeouts. But if we really have to compare, I think the simplest one is the original, maybe call it Dumbo classic, from the CCS paper last year, because structure-wise it is really just reliable broadcast (RBC) composed with agreement instances that we use directly as black boxes. So that should be very simple.

How different is Jolteon from Fast-HotStuff? And a follow-up question: do you think the bit complexity of Ditto is high during view change? You might need clarification on that, but reducing the block size during view change might help for networks with low bandwidth.

For Fast-HotStuff, I would say that in general Jolteon, and Fast-HotStuff for that matter, is not a very amazing idea: it is basically taking PBFT and adapting it to HotStuff. So I'm not sure how different it is. I guess there is some difference in how we show the no-commit proofs, but I would say that the formalization Neil showed yesterday basically encompasses all three of these protocols. Now for Ditto: if we talk about bit complexity, it is n squared, so the same as Jolteon's. But I want to stress again the message Alberto made yesterday: we should not be running consensus on the payload. We shouldn't have blocks on which we run consensus. The mempool can send the blocks asynchronously without any timeouts; that's not a hard problem and it doesn't need consensus. We should be running consensus on the metadata. Then the bit complexity might be n squared, but it's metadata, it's kilobytes.
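The "consensus on metadata, not payload" point above can be sketched as follows. This is a hypothetical toy, not any real system's API: payloads are disseminated by a mempool with no timeouts, and only small digests ever enter the consensus path, so even an n-squared bit cost applies only to kilobytes of metadata.

```python
# Hypothetical sketch: separate payload dissemination from consensus.
import hashlib

mempool: dict[str, bytes] = {}  # digest -> payload, filled by async dissemination


def disseminate(payload: bytes) -> str:
    """Mempool side: spread the (large) payload asynchronously; no consensus needed."""
    digest = hashlib.sha256(payload).hexdigest()
    mempool[digest] = payload  # in a real system: broadcast and store on receipt
    return digest


def propose_for_consensus(digests: list[str]) -> bytes:
    """Consensus side: only small digests (metadata) are ordered."""
    return ",".join(digests).encode()


def deliver(ordered_metadata: bytes) -> list[bytes]:
    """After consensus decides on metadata, payloads are looked up (or fetched) by digest."""
    return [mempool[d] for d in ordered_metadata.decode().split(",")]


d = disseminate(b"a large block of transactions")
out = deliver(propose_for_consensus([d]))
print(out[0] == b"a large block of transactions")  # True
```

The design point is the split itself: the expensive quadratic communication pattern touches only the short digest strings, while the bulky payload travels once over the cheap, timeout-free dissemination path.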
Is there work where the trust assumptions for a node or quorum evaluate a predicate over some properties of the node, for example quorum power greater than some expression, or quorum age greater than some expression?

The question is whether asymmetric trust, or even generalized trust, the general quorum systems, can be mapped down to a single number, say between zero and one, or between zero and a hundred percent. I don't think this is easily possible. The simple reason is that if everything we can express with generalized trust could be expressed so directly, it would mean we are back in the threshold world, where 33% is strictly less than one third and 34% would already be f + 1, because it corresponds to more than one third of the power. The normal protocols are expressed in terms of single units, or nodes, but they can easily and equivalently be changed so that nodes have different numbers, or weights, in expressing their votes, and those weights could change; this is the form that stake-based protocols take. So I think it's an interesting question how these generalized trust assumptions can be made more realistic, and this is something we are going to look at as well.

Can trust assumptions be composed by other nodes, maybe on different aspects such as reputation, uptime, etc.? The response from Christian: so far there is only one trust assumption; all aspects flow into it.

Then: how can we recognize that we have spoken the last word on the current BFT consensus developments? How do we recognize when we have achieved this status? It cannot be that somebody writes a paper and claims they have it; for me it has to be some external factor. How do you say when you have reached this state?

Yeah, that's a great question. Actually, maybe first let me ask you one question: is there a concrete lower bound for all the asynchronous protocols, say in rounds, like four rounds or three rounds?
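The "back in the threshold world" remark above can be made concrete with a small sketch. This is an invented illustration, not from the talk: a weighted, stake-style threshold assumption is still just a single-number threshold quorum system, which is exactly the special case the answer says generalized trust goes beyond.

```python
# Hypothetical illustration: a weighted Byzantine-style threshold quorum.
# Every structure expressible this way collapses to "more than 2/3 of total
# weight", so it cannot capture the richer set-based generalized assumptions.

def is_quorum(weights: dict[str, float], group: set[str]) -> bool:
    """Quorum iff the group holds strictly more than 2/3 of the total weight."""
    total = sum(weights.values())
    return sum(weights[p] for p in group) > 2 * total / 3

weights = {"a": 34, "b": 33, "c": 33}  # stake-like voting power, total 100
print(is_quorum(weights, {"a", "b"}))  # True:  67 > 66.66...
print(is_quorum(weights, {"b", "c"}))  # False: 66 > 66.66... fails
```

With unit weights this is the textbook n > 3f setting; changing the weights changes who forms a quorum, but the assumption remains a single scalar threshold, which is the reduction the answer describes.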
OK, you're asking a specific technical question. It depends on the model; it's more complex than that. I can try; I mean, I did something on this with my team.

Yeah, I guess at least the first thing is that we want to match the lower bound, right? And then, as a final say: yeah, that's a great question. To be honest, the lower bounds usually didn't take into account bit complexity; they counted the number of messages. I'm pretty sure Marco's PhD was counting numbers of messages, not bits. So my answer is: yes, on the one hand, we want to find the lower bound and reach it. On the other hand, the protocol should be simple enough that I can give it to a master's student and they implement it correctly. Yeah, that's good. If it's not simple, even if it's super fast in theory, it's going to be full of bugs. And even if it's complicated, maybe there are just one or two implementations, and then we have to make sure that somebody maintains them, like OpenSSL.

I can say where the end is from my perspective: when we have all these protocols implemented in hardware, in some FPGAs. If you need to fix it, you can fix it, but then it works; you don't think about it. You just have total-order broadcast when you need it, without actually caring about it. I think this is where we can stop.

In the meantime, we keep getting questions about the jobs and grants from Protocol Labs, so I will repost the links in Slack. Have a look, and if you have questions, reach out to us; we're more than happy to discuss.