Our final speaker for this session is Lefteris Kokoris-Kogias from Novi Research and IST Austria, who will be discussing Jolteon and Ditto: network-adaptive efficient consensus with asynchronous fallback. So in this talk we basically looked into existing consensus protocols like HotStuff and Tendermint and tried to see what we can get out of them once we've kind of opened them up. The main motivation was that if we look at them, they really focus on getting linear steady-state communication, and that is pretty important and we agree on that. But the truth is that once the system crashes and we need to stabilize, you need a quadratic pacemaker. Of course there are probabilistic solutions to make it linear, but for deterministic protocols everyone needs to get everything, so it's a reliable broadcast and eventually it's quadratic. So we came to the question: in the end, what can we gain from this quadratic cost in the crash case, when something goes bad? And the answer is two things. One is a system we call Jolteon, which basically manages to get the lower latency of Tendermint with the responsiveness of the view change of HotStuff. Of course the cost is quadratic on the view change, but that happens only when we actually have a view change; the steady-state communication is still linear. The second answer is Ditto, which is basically an extension of Jolteon where we remove the partially synchronous view change and instead have an asynchronous path, which gives us liveness even under, for example, DDoS attacks on the leaders. So, let's look a bit into how we achieve this. To start we can just look into HotStuff. The basic idea of HotStuff is that we have blocks, every block gets a quorum certificate, and then we build a chain, and once we get a nice three-chain, the third block basically commits the first one.
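As a rough sketch of the three-chain commit rule just described (the `Block` structure and round-number check are my simplified stand-ins, not the paper's exact types):

```python
# Minimal sketch of HotStuff's three-chain commit rule.
# A block's `parent` is the block certified by the QC it carries.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    round: int
    parent: Optional["Block"] = None  # block certified by this block's QC

def committed_block(tip: Block) -> Optional[Block]:
    """When tip extends two consecutively certified ancestors
    (a three-chain), the first block of the three is committed."""
    b2 = tip.parent
    b1 = b2.parent if b2 else None
    if b1 is None:
        return None
    if tip.round == b2.round + 1 and b2.round == b1.round + 1:
        return b1  # third block commits the first
    return None
```

For example, a chain of blocks at rounds 1, 2, 3 commits the round-1 block, while a chain with a round gap commits nothing yet.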
And if we look into the HotStuff rules, we can very quickly be convinced that any fork from certificate k is going to be both safe and live when the leader crashes, for example. On the other hand, we can look into something like Tendermint. Tendermint can easily be constructed as a two-chain version of HotStuff: we remove one chain, and we have to lock a bit more strictly. However, if we then look into how the view change works, we realize that a responsive fork from k is not live, and by responsive we mean that once the leader gets 2f+1 messages, they actually try to do something. For Tendermint to actually get a live view change, the leader has to wait for Delta and make sure that the leader gets messages from every honest party. Now let's look into why this happens. So let's try to run a view change in Tendermint. We have some leader that managed to create block B_{k+2}, so there was a certificate for k+1, and the leader sends this to a single honest party. Everyone else doesn't manage to get it, and then the leader crashes. Then the new leader shows up, asks around what is the best certificate you've seen, and gets certificate k from the 2f honest parties that didn't get the last block, and some Byzantine party can also send k, why not, whatever it wants to send. And the leader says: OK, that looks fine, I'm going to propose a fork from k. But the thing is that once that fork is proposed, the Byzantine party does not accept the proposal, and the honest party that got the certificate for k+1 also cannot accept the proposal. As a result, this is not live. So the leader will be timed out and we're going to have to do the same again. So we might ask ourselves: can we, for example, relax the locking rule a bit? Maybe this way we can help the leader. And if we try to do that, we're going to quickly see that this is not safe. For example, here is a very similar scenario.
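The stuck view change can be seen from a Tendermint-style strict locking rule; here is a minimal sketch (the function name and round-number encoding are my own simplification of the actual message format):

```python
# Sketch of a strict locking rule: a node votes for a proposal only
# if the proposal extends a certificate at least as recent as the
# node's lock.
def accepts(proposal_parent_round: int, locked_round: int) -> bool:
    return proposal_parent_round >= locked_round

# The new leader collected 2f+1 highest certificates, all for round k,
# and responsively proposes a fork extending round k.
k = 10
# The one honest node holding the certificate for k+1 is locked there,
# so it rejects the proposal:
assert accepts(proposal_parent_round=k, locked_round=k + 1) is False
# With at most 2f honest votes available, no 2f+1 quorum forms and
# the view is not live.
```

This is why, in the example above, the responsive leader times out and the view change has to be retried.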
So the previous leader has sent block B_{k+2} to all these parties, they vote for it, and they create a certificate. As a result, the next leader gets a nice certificate for k+2 and says: OK, B_{k+1} is committed, we're in a two-chain rule. And then he crashes. Since no other party saw the k+2 certificate, they're all still locked on k. So if a Byzantine leader shows up now, he can easily propose a fork on k, so B_{k+1} is forked out. As a result, the commit of the previous leader that crashed is no longer safe. So this is unsafe; we need to do something better. The basic solution here is that the leader actually has to prove that no one saw the k+1 certificate, and as a result the proposal is actually safe, because block k+1 cannot have been committed. And that's basically how we do it. This is actually very similar to yesterday's talk by Neil with the no-commit proofs, because what we generate can also be seen as a no-commit proof for block k+1. How this works is that every party is going to send to the leader the highest QC they have locked. The leader is going to collect 2f+1 of these certificates and put them inside the first block after the view change. As you can see, this block is actually linear in size, but these are only the blocks after a view change; normal blocks are still constant size. This is actually enough to convince every party that there is no one that has committed something, and they can actually unlock from what they thought was the highest certificate. A party that had committed k+1 couldn't have let them unlock. So maybe let's look into an example of why this is safe. Here we have honest parties whose reported highest QC is k+1, so k+1 is the highest certificate anyone reports.
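The check a party runs on the first block after the view change can be sketched like this (field names and the flat round-number encoding are my assumptions, not the paper's exact format):

```python
# Sketch of Jolteon's view-change proposal check: the new leader packs
# 2f+1 "highest QC" reports into its first block; a party unlocks and
# votes only if the proposal extends the highest reported certificate.
def valid_view_change(proposal_parent_round: int,
                      reported_qc_rounds: list,
                      f: int) -> bool:
    if len(reported_qc_rounds) < 2 * f + 1:
        return False  # not enough reports to rule out a hidden commit
    # the proposal must extend the best certificate anyone reported
    return proposal_parent_round >= max(reported_qc_rounds)

k, f = 10, 1
# All 2f+1 reports say k: forking from k is provably safe.
assert valid_view_change(k, [k, k, k], f) is True
# One report reveals a certificate for k+1: a fork from k is rejected.
assert valid_view_change(k, [k, k, k + 1], f) is False
```

Note that only this one post-view-change block carries the linear-size payload; steady-state blocks stay constant size, as the talk says.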
But if the leader forks over k+1, that should be fine, because then there cannot be a k+2 certificate or a k+3 certificate, and as a result block k+2 has not been committed. Even if someone saw a k+2 certificate, it would only be a minority of nodes, so not enough votes were collected for a k+3 certificate to be created. As a result, we can safely fork over k+1. If, on the other hand, someone did commit block k+1, this means that the certificate for k+2 has been formed. For it to be formed, at least f+1 honest nodes must have actually voted for block k+2 and witnessed certificate k+1. As a result, when the leader is collecting the highest certificates during the view change, at least one of them will reveal the k+1 certificate to the leader, and he will not be able to fork before it. I hope that is kind of clear, but you can also look into the paper for more information. We actually implemented it and compared to HotStuff, and as expected we see a latency reduction of around 33%, basically because we save one round trip when we commit a block. Next, let's move to Ditto. The basic idea here is: since we are paying quadratic for the view change, why aren't we doing something better? Why are we building a protocol that can only withstand a bad network by getting hit again and again, when your network is bad and your leaders are under attack most of the time? If this is the case, instead of waiting until synchrony shows up again, we should be able to make progress even with a worse network, which I guess was also the message of the previous talk. So, how do we do it? The first question to ask is: look at the example I gave before on Tendermint and why it is unsafe.
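The quorum-intersection argument behind this safety proof is easy to check numerically (this is a standard BFT counting exercise under n = 3f+1, not code from the paper):

```python
# Numeric check of the quorum-intersection argument: if block k+1 was
# committed, a QC for k+2 formed, so at least f+1 honest nodes voted
# for block k+2 and witnessed certificate k+1.  Any 2f+1 highest-QC
# reports the new leader collects must include one of them.
def report_overlap(f: int) -> int:
    n = 3 * f + 1
    honest_qc_voters = (2 * f + 1) - f  # >= f+1 honest voters saw cert k+1
    reports = 2 * f + 1                 # reports the new leader collects
    # pigeonhole bound on the guaranteed overlap of the two sets
    return honest_qc_voters + reports - n

for f in (1, 2, 10):
    assert report_overlap(f) >= 1  # some report reveals certificate k+1
```

Since the overlapping node comes from the all-honest voter set, it reports truthfully, and the leader can never justify forking below k+1.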
If you ask yourself whether there is a specific case where Tendermint's leader could actually be live without waiting for Delta, then in this example there is a single leader that has this ability, and it is that one honest node that got the highest certificate: that honest node doesn't need to wait for anything, because it has the highest certificate and will not propose a bad block. As a result, if we create a leaderless view change, so that everyone can be the leader, that one node is going to give us the liveness of the system. So, looking at how the view change of Ditto works: we still have this nice two-chain. We have the strict locking rule that Tendermint has, not the more relaxed one that Jolteon has. What happens is that when a node realizes that the leader has crashed, the node basically says: OK, in my point of view, this should be the next block; I fork the chain with my best certificate. And every node does the same. Some nodes, as in my previous example, are going to fork the chain at a place where their fork is not live. But there has to be at least one node that forks it correctly. And that one node is going to get liveness: it is going to get a certificate from everyone else, and the chain of that honest node continues. Once everyone else realizes this, they're going to basically update their prediction and extend the best chain they know. We do this a third time, and that is because we want liveness under asynchrony. Once we have 2f+1 finished chains inside the view change, we can just flip the coin, and with constant probability one of the leaders that actually finished their chain is going to be elected. Then we basically act as if nothing else has happened: we look into the chain of that leader, we say that this is the only thing that has happened, we forget about all the other forks in the view change, and we look at the chain as if it is a classic two-chain HotStuff chain.
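The coin-flip step of this leaderless fallback can be sketched as follows; note the "common coin" here is just a seeded PRNG stand-in for a real threshold-signature coin, and all names are mine, as I understood the talk:

```python
# High-level sketch of the asynchronous fallback's election step:
# once 2f+1 nodes finish their leaderless three-chains, a shared coin
# elects one chain; everyone adopts it and discards the other forks.
import random

def elect_fallback_leader(finished, n: int, f: int, coin_seed: int):
    """finished = ids of nodes whose fallback three-chain completed.
    Returns (elected id, whether the elected chain commits a block)."""
    assert len(finished) >= 2 * f + 1, "keep waiting for more chains"
    # every node derives the same winner from the shared coin value
    elected = random.Random(coin_seed).randrange(n)
    # with probability >= (2f+1)/n >= 2/3 the winner finished its
    # three-chain and a block commits; otherwise run another round
    return elected, elected in finished
```

Because the coin is shared, every honest node computes the same winner; if the winner's chain did not finish, the nodes simply repeat the fallback round.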
And this, as you can see here, makes sure that we commit a block every view change, even if we are under attack, even if f nodes are crashed or DDoSed. This is an asynchronous way to always get liveness during unstable networks. And then, once we have a leader again, the leader is going to continue doing the job of the fast path of partial synchrony: propose a block, vote, commit, etc. So we implemented and evaluated all these protocols, and we also evaluated a version of VABA, which is an asynchronous protocol that is basically very similar to the view change of Ditto. As we can see here, already with 10 to 15 nodes, and this is the execution with no faults, so the very nice synchronous network execution, the asynchronous solution is of course the slowest, both in latency, and it doesn't achieve as high a throughput. That's expected because of the quadratic communication complexity. Jolteon, HotStuff, and Ditto are more or less similar; the differences are basically a bit lower latency for Jolteon and Ditto because of the one less chain, and we also get a bit more capacity there, but these are very minor changes in the common case. The big difference shows up when we start having faults. In this experiment we crash one node, and then three nodes, in a 20-node experiment. With no faults everything is beautiful, and you see here how much faster the partially synchronous protocols are, including Ditto, which pays a small price here because sometimes a view change is going to be triggered because of instability, while VABA suffers significantly. The moment we get even a single fault, VABA starts to win by much, Ditto manages to go somewhere in between, and Jolteon and HotStuff are actually getting slower, and that is because of the rotating leaders and the two-chain rule.
And with three faults it's even more prevalent: the protocols are actually getting view changes all the time, and this is with a five-second view-change timeout, so it really pays off; we did try to find the best parameter, for example a one-second view change, and you can see that there is still a high cost here. Finally, we also basically simulated an attack on the leaders by crashing the leader every time they're elected. As expected, HotStuff and Jolteon are basically not live; there's nothing happening here, they're at zero. Ditto and VABA behave very similarly. VABA is slightly faster, and that happens because Ditto, due to its adaptivity, sometimes has to wait for a timeout before going to the asynchronous view-change path. And we can basically see that here, where the latency is higher, and that's basically the timeout. We have also evaluated here the case where we persist in the fallback: basically the idea is that in Ditto, if you get a timeout and then a second timeout, you're not going to wait for a timeout again; instead you run five asynchronous rounds, then you try again, and you try to adapt that way. But I would say it's an open question how best to adapt between how often we invoke the fast path and how often we invoke the slow path. So I'd say that's it, and I'm open to questions. Thank you.