So now we're in the Q&A session. The first question is going to be for Kadir about Dandelion. Sarah asks: how does Dandelion compare to other DAG-based protocols, for example SPECTRE, PHANTOM, or GHOST?

Dandelion is not DAG-based, first of all. Maybe I couldn't show it clearly, but our blockchain structure is not really a DAG; we keep the chain structure. As you see, we have macroblocks appended to the chain. Because of this, we didn't compare Dandelion with DAG-based systems: we try to keep the blockchain structure because of the kind of consensus guarantees it provides, so we didn't compare our proposal with DAGs.

Yeah, I was just curious, because I saw this kind of bucket style and multiple leaders, and that looked very similar to what I read in Marco's work about Mir-BFT. Can you explain what the difference is between those two ways of doing it?

Basically, you are right. It is available in Mir-BFT; actually, I cited it here, the third one is Mir-BFT. This kind of design is also available in Mencius and BFT-Mencius. So basically they have similar approaches when we look at what they are doing, but the end result is a bit different, because in the case of Mir-BFT it is applied to improve the wide-area network performance of PBFT, as far as I remember, if I am not mistaken.

I think the reason there is that you want to be able to scale past the network bottleneck of having a single leader, so you have multiple leaders, and you do so without incurring request duplication.

Yes, exactly. What is different here? First of all, in our case there is a single instance of consensus: we decide once on all microblocks. So in this exact figure, we have three microblocks, and all of them are decided in a single instance of consensus. In Mir-BFT, as far as I remember, each leader proposes a value and there is a consensus instance running for each proposal. Am I correct?
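The distinction drawn here, one consensus decision covering all leaders' microblocks versus one decision per leader's proposal, can be sketched as a toy model. All names below are hypothetical and come from neither Dandelion nor Mir-BFT:

```python
# Toy model: one consensus instance deciding a whole round's microblocks
# (as described for Dandelion) versus one instance per leader (Mir-BFT-style).

def single_instance_round(microblocks):
    """All leaders' microblocks are bundled into one proposal, and the
    consensus protocol runs exactly once to decide the whole bundle."""
    proposal = tuple(microblocks)   # e.g. 3 microblocks from 3 leaders
    decisions = 1                   # a single instance of consensus
    return proposal, decisions

def per_leader_round(microblocks):
    """Each leader proposes its own microblock, and a separate consensus
    instance decides each proposal independently."""
    decided = [(mb,) for mb in microblocks]
    decisions = len(microblocks)    # one instance per leader
    return decided, decisions

mbs = ["mb_leader_0", "mb_leader_1", "mb_leader_2"]
_, d_single = single_instance_round(mbs)
_, d_per_leader = per_leader_round(mbs)
print(d_single, d_per_leader)  # 1 vs 3 decisions for the same 3 microblocks
```

The same three microblocks get committed either way; what differs is how many separate agreement decisions the replicas must run to get there.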
Yes, there is one PBFT instance per leader. You start from all leaders, but it doesn't have to be all leaders in the configuration; it depends. So it has similarities, but it has differences too. The biggest difference is that we applied this technique in the context of blockchain.

The next question is from Marco, for Florian: can you please discuss similarities and differences with regard to OmniLedger, which also relies considerably on clients?

Yeah, let me try, and if I'm wrong my co-authors can rip my head off. I think the main similarity Marco is pointing at is the idea that in both cases you have a two-phase commit where the client is your coordinator. Beyond that, I think there are not many similarities. OmniLedger uses a UTXO model, and what clients are locking are their own coins: these are their own assets that they are responsible for, and nobody else cares about them, so there is no concept of concurrency between those. If a client in OmniLedger dies, or loses its private key, those assets are just frozen forever; it loses access to them, and no other client cares about them. So you don't need the kind of client-to-client recovery mechanism where you finish blocked transactions.

The other difference between Basil and OmniLedger, of course, is that OmniLedger is permissionless, or it tries to be: it subsamples these shards and dynamically re-forms them. That's something we don't do. Also, we don't need a BFT totally-ordered state machine replication protocol like ByzCoinX, which they use in OmniLedger to order all transactions. There is no such thing as an order: each client does its own transactions, asks for its concurrency-control result, and the replicas reply back, so this is all out of order.
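The shared idea named above, a two-phase commit with the client acting as coordinator across shards, can be sketched as follows. This is a minimal model of client-driven 2PC in general; the names are made up and this is neither Basil's nor OmniLedger's actual API:

```python
# Sketch: the client coordinates a two-phase commit across the shards
# touched by its transaction. Hypothetical names throughout.

class Shard:
    def __init__(self, will_prepare=True):
        self.will_prepare = will_prepare  # stand-in for lock/validation outcome
        self.log = []
    def prepare(self, txn):
        # Phase 1: shard votes yes/no (e.g. locks inputs, runs checks).
        return self.will_prepare
    def finish(self, txn, decision):
        # Phase 2: shard applies the client's commit/abort decision.
        self.log.append((txn, decision))

def client_2pc(shards, txn):
    votes = [s.prepare(txn) for s in shards]          # phase 1
    decision = "commit" if all(votes) else "abort"    # unanimity required
    for s in shards:                                  # phase 2
        s.finish(txn, decision)
    return decision

print(client_2pc([Shard(), Shard()], "tx1"))         # all prepare -> commit
print(client_2pc([Shard(), Shard(False)], "tx2"))    # one refuses -> abort
```

The difference discussed in the answer is what happens if the client dies between the two phases: in OmniLedger's UTXO model only that client's own coins stay locked, whereas a system where clients contend on shared state needs other clients to be able to finish the stalled commit.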
That's safe because of quorum intersection and local serializability checks, but of course it can mean that you have to abort, because this is optimistic execution. There's also no concept of an identity blockchain in Basil, because it's a permissioned system.

I'm going to go back to a question for Kadir here, from Alfonso: do you have numbers on the computational overhead for individual devices running Dandelion, compared to the baseline algorithm?

Basically, we didn't do this kind of analysis. First of all, the computational cost is mainly that before submitting a block, nodes validate transactions and so on. In Dandelion, nodes submit smaller blocks, so that cost is smaller compared with Algorand: in Algorand's case a single leader submits one big block, while in Dandelion many leaders submit smaller blocks. But we didn't measure how much computational cost Dandelion really adds; what we observed overall is that it does a lot better than Algorand.

Marco, I wanted to let you expand a little bit on a comment you made about Ray's presentation, specifically: did you consider combining Kauri with multi-leader approaches? Especially since that was touched on in the first question. So I wanted to let you expand on that, and then hear Ray's response.

So from my experience, what makes Kauri scale better than PBFT and HotStuff is not message complexity; that metric is irrelevant. What's important is fan-out, because both HotStuff and PBFT have a bottleneck replica with a fan-out of O(n), and Kauri avoids that. So that's fine, and it's normal that you get advantages in single-leader deployments. Now the question is, as you move to multi-leader deployments (Mir, and we had Vincent mentioning Red Belly, and others), it's questionable whether this will stay. So did you deploy Kauri?
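The fan-out argument here can be made concrete with a small back-of-the-envelope model: a single-leader star topology puts n-1 outgoing messages on one replica, while a balanced dissemination tree (the Kauri approach) caps every node's fan-out at the branching factor, trading it for extra hops of latency. The numbers below are illustrative only:

```python
# Toy model: per-node dissemination fan-out in a star (single leader)
# versus a balanced m-ary tree (Kauri-style tree dissemination).

import math

def star_fanout(n):
    """Single leader sends directly to every other replica."""
    return n - 1

def tree_fanout(n, m):
    """Balanced m-ary tree: no node forwards to more than m children,
    at the cost of roughly log_m(n) hops of dissemination latency."""
    depth = math.ceil(math.log(n, m)) if n > 1 else 0
    return m, depth

n = 100
print(star_fanout(n))      # 99 messages leaving one bottleneck replica
print(tree_fanout(n, 10))  # fan-out capped at 10, about 2 hops
```

This is why message complexity alone is a poor predictor: both topologies deliver O(n) messages in total, but only the star concentrates them on one node's uplink.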
I'm not saying it won't, by the way; it's just interesting to experiment with. Did you deploy Kauri in the multi-leader setting? That is actually the question.

No, we didn't deploy it in the multi-leader setting, but that was one of the things we wanted to do in the future, to expand on that. And yes, I agree that the main problem is that the leader has to send the n blocks out, which is a huge cost in bandwidth, and then each process has to verify all n signatures, which is a cost in both bandwidth and computation.

I was curious why you take this hard stance that message complexity doesn't matter. I guess you're assuming that you are always using MACs, but I think that's not necessarily practical in many blockchain systems now. I know Microsoft has a lot of work on this, called PAC, where you try to have auditability, and with MACs you lose this non-repudiation property. So in practice you do want to have signatures, and that is n squared, which is not really desirable. And you can turn PBFT into a linear protocol quite easily; that's what SBFT does, basically, and some other systems. So is that what you're getting at, or why are you happy with the n squared?

So I agree, message-complexity-wise it doesn't matter; I'm worried about the signatures. It's interesting to decouple, for example, block signatures from message signatures. If you keep auditability and signatures for equivocation detection and whatnot, it still doesn't mean that you need to sign each and every message, as the original PBFT with message signatures would do. So if you do want to sign, then the question is how you would schedule it.

That's not quite true. I mean, in PBFT, the original one, you don't sign because you defer that cost to the view change, where you then pay the full price of signing when it becomes necessary, even when you're using MACs, right? PBFT doesn't work without signatures entirely, right?
That's— It actually does. There is a PBFT version which does completely without signatures; there's the Castro-Liskov version.

Yeah, but it still uses signatures. I just read it two weeks ago. Maybe I'm wrong.

Yeah, I mean, it doesn't, but that's a different question; you're posing a different question. If you use signatures on blocks, then again, you don't need to sign each and every message. So it depends on how we define the goals, right? If we just talk about total order, in the context we discussed here, then basically you wouldn't need to sign. And if you don't need to sign, then it always comes down to the load on the bottleneck replica. And the thing is, it's actually the same, or it might even be somewhat worse, for HotStuff: by channeling the O(n) communication pattern through the leader, you actually need to sign, because otherwise you cannot do what PBFT is doing. So then you have O(n) load on the leader, plus you have signatures, and that's actually problematic for HotStuff at large scale. Of course, if you move to a smaller fan-out, as Kauri does, then you get the advantage. The question is what happens when you have multiple leaders. But I agree: if we change the rules of the game and say, okay, we want to have some signatures on the block for whatever reason, be it equivocation detection or slashing mechanisms or whatnot, then we need to see again what we are talking about. But certainly you don't need to sign each and every message; that you don't need to do.

That wraps up our session today on scaling and performance. Thank you again to all of our presenters, I really enjoyed it, and thank you to everyone presenting and participating in the questions.
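The MAC-versus-signature trade-off debated above can be illustrated with a small sketch: in the MAC-based variant of PBFT (Castro-Liskov), a sender appends an authenticator, a vector with one MAC per replica computed under pairwise shared keys, instead of one digital signature. Each MAC is cheap, but it proves nothing to a third party, which is exactly the non-repudiation property that auditability and slashing designs need signatures for. The keys and message below are made up:

```python
# Sketch: a MAC-based PBFT authenticator (vector of per-replica MACs),
# as in Castro-Liskov PBFT. All keys/messages here are hypothetical.

import hashlib
import hmac

replicas = [f"replica_{i}" for i in range(4)]
# Pairwise session keys: the sender shares one secret key with each replica.
# (Derived from the name here only to keep the sketch self-contained.)
keys = {r: hashlib.sha256(r.encode()).digest() for r in replicas}

def authenticator(message):
    """One MAC per replica instead of one signature. Cheap to compute
    and verify, but any key holder could have produced its entry, so a
    MAC cannot prove misbehavior to a third party (no non-repudiation)."""
    return {r: hmac.new(k, message, hashlib.sha256).hexdigest()
            for r, k in keys.items()}

msg = b"PRE-PREPARE view=0 seq=1 digest=..."
auth = authenticator(msg)
# Each replica verifies only its own entry of the vector:
mine = hmac.new(keys["replica_2"], msg, hashlib.sha256).hexdigest()
print(mine == auth["replica_2"])  # True
```

This is also why the cost can be deferred: messages carry cheap MAC vectors in the common case, and signatures are only paid for when a view change forces replicas to convince others of what they received.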