The first speaker of this session is Kadir Korkmaz, my apologies if I'm saying the name incorrectly, from the University of Bordeaux, presenting their work titled Dandelion: Multiplexing Byzantine Agreements to Unlock Blockchain Performance. So Kadir, I'll let you take it from here.

Hello, everyone. I am Kadir Korkmaz. Today I will present our work, Dandelion: Multiplexing Byzantine Agreements to Unlock Blockchain Performance. This work is supervised by Joachim Bruneau-Queyreix, Sonia Ben Mokhtar, and Laurent Réveillère.

Blockchains are highly replicated, append-only distributed databases. In blockchains, data is appended in the form of blocks, as you know, and each block contains the hash of the previous block, except the genesis block. In the figure, you see a chain of blocks: the first block is the so-called genesis block, and every other block points to its predecessor. As you know, every blockchain uses a consensus algorithm to decide which blocks to append to the database.

Bitcoin and proof of work. Bitcoin is the first blockchain system, and it consists of a set of nodes connected by an overlay network. Proof of work is the consensus algorithm of Bitcoin, and it relies on crypto puzzles to elect block proposers. As you know, crypto puzzles are hard to solve but easy to validate. There is no efficient algorithm to solve a crypto puzzle, so nodes essentially try solutions at random. Solving the puzzle means finding a block whose hash is smaller than a target value. The target value is dynamically adjusted so that, on average, there is only a single block proposer every 10 minutes. Nodes use the longest-chain rule: if two blocks are proposed within a short time interval, this causes a fork in the blockchain, and nodes always choose the longest chain to resolve forks.
Proof of work is, by design, Sybil resilient. That means an attacker cannot gain any advantage by creating fake identities to change the decisions of the system. It is also tamper resilient, which is a very good feature: when you try to change the content of any block, the hash of that block changes and the chain is broken, so you have to pay the cost of redoing the work; otherwise you cannot change any block contents. And it is scalable: the Bitcoin network currently has more than 10,000 nodes, and the system works.

But proof of work has many cons. First of all, it uses an excessive amount of energy because of the crypto puzzles. It is also inherently slow: as I said, there is only a single block proposer every 10 minutes on average, so the system provides low throughput and high latency. Also, because of forks, the decision of the consensus is not deterministic. In the figure, you see a blockchain: at the bottom we have the genesis block, and the red blocks are on the main chain because they are on the longest fork. The purple blocks are not considered part of the blockchain because they are not on the main chain.

As I described, proof of work has many problems, and researchers proposed proof of stake to handle them. Basically, proof of stake replaces ownership of computational resources with ownership of stake. This design is also Sybil resilient, because an attacker cannot gain any advantage by creating fake identities: it would need to hold stake in the system. This design is also energy efficient, because it doesn't rely on crypto puzzles to elect leaders. Algorand is a proof-of-stake-based blockchain consensus algorithm. It is one of the most efficient proof-of-stake-based consensus algorithms because of cryptographic sortition: the consensus contributors are selected in a non-interactive way.
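The tamper-resilience argument above, that altering one block breaks every later hash link, can be demonstrated with a small Go sketch. The `Block` type and helper names here are illustrative, not taken from any real client.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Block is a toy block: each one records the hash of its predecessor.
type Block struct {
	PrevHash []byte
	Data     []byte
}

func hash(b Block) []byte {
	h := sha256.New()
	h.Write(b.PrevHash)
	h.Write(b.Data)
	return h.Sum(nil)
}

// verifyChain checks every hash link. Changing any block's data
// invalidates the link stored in the next block, so tampering with
// history requires recomputing (and, under proof of work, re-mining)
// every block that follows.
func verifyChain(chain []Block) bool {
	for i := 1; i < len(chain); i++ {
		if !bytes.Equal(chain[i].PrevHash, hash(chain[i-1])) {
			return false
		}
	}
	return true
}

func main() {
	genesis := Block{Data: []byte("genesis")}
	b1 := Block{PrevHash: hash(genesis), Data: []byte("tx batch 1")}
	chain := []Block{genesis, b1}
	fmt.Println(verifyChain(chain)) // true: links intact

	chain[0].Data = []byte("tampered")
	fmt.Println(verifyChain(chain)) // false: link to genesis broken
}
```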
And it scales very well: it has been tested to scale up to 500,000 nodes. But there is an issue with Algorand: the round latency is mostly dominated by the block propagation time. This is stated in the Algorand paper as well, which says the agreement steps take around 12 seconds, while block propagation dominates the rest of the round latency. We implemented Algorand, and you see the results from our implementation. When we try to change the throughput of the system by changing the block size, it doesn't provide the expected results. On the x-axis we have block size, and on the y-axis we have throughput and latency: throughput increases up to some point and then doesn't increase anymore, while latency increases linearly with the block size. So Algorand has this issue.

Dandelion improves Algorand by electing multiple leaders. Algorand also elects multiple leaders, but it uses only the block proposed by a single leader. In our case, we elect multiple leaders, and these leaders share the cost of leadership, both in terms of computation and in terms of communication: the cost is distributed among many leaders. Each elected leader submits a small block, and at the end of the consensus round we merge them to create a big block. Smaller blocks disseminate faster in the network, and this addresses Algorand's issue: as I said, what dominates the round latency is the block propagation time, and in our case we disseminate smaller blocks. As I described, Algorand is a proof-of-stake-based blockchain consensus algorithm and it scales very well.
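A back-of-envelope calculation shows why smaller blocks help. Using the experimental numbers from later in the talk (a 20-megabyte macro block and 20 Mbit/s of bandwidth per process) and ignoring latency and gossip fan-out, the per-hop transmission time shrinks proportionally with the concurrency level, since each leader only pushes its own micro block:

```go
package main

import "fmt"

func main() {
	const bandwidthMbps = 20.0 // per-process bandwidth used in the experiments
	const macroBlockMB = 20.0  // target macro block size

	// With concurrency level CL, each of the CL leaders disseminates a
	// micro block of size macroBlockMB/CL, in parallel with the others.
	for _, cl := range []int{1, 2, 4} {
		microMB := macroBlockMB / float64(cl)
		seconds := microMB * 8 / bandwidthMbps // one-hop transmission time
		fmt.Printf("CL=%d: %.0f MB per leader, %.1f s per hop\n", cl, microMB, seconds)
	}
}
```

This is only a simplified model of one gossip hop, but it conveys the intuition: the serial transmission time that dominates Algorand's round latency is divided across the CL leaders.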
Algorand uses cryptographic sortition to elect committee members and leaders. With cryptographic sortition, which relies on verifiable random functions, nodes locally run the sortition to learn whether they are elected to contribute to a specific round as a leader or committee member. If they are elected, they share their vote or submit a block together with a proof, so anybody in the system can verify the result of the election. This design is energy efficient because it doesn't rely on crypto puzzles. In Algorand, nodes are represented according to their stake: if a node has more stake, it has a higher chance to contribute to the consensus. Another good feature of Algorand is that it uses a special BFT consensus algorithm which is deterministic: if a decision is final, it will not change in the future.

I want to briefly describe the round structure of Algorand. In each round, Algorand uses the previous round's result to elect leaders and committee members via cryptographic sortition. Algorand elects up to 70 leaders, but uses only a single block. Leaders submit two kinds of messages: block messages and proposal messages. Block messages contain the full block, so by design they are big messages. Proposal messages contain the metadata of the block plus a priority; Algorand has a deterministic function which assigns a priority to each submitted block. The purpose of the proposal message is to decrease unnecessary block propagation, because nodes will eventually reach consensus on one of the blocks, the one with the highest priority. After the leaders submit blocks, all nodes in the system wait for a specific amount of time to receive proposals, then committee members start voting on the proposals, trying to decide on the highest-priority proposal.
Algorand's consensus algorithm has two phases: reduction and BinaryBA*. In the reduction phase, the many possible decisions are reduced to either an empty block or the block with the highest priority. At the end of a round, Algorand can produce two kinds of results: final or tentative. If the result is final, it won't change in the future: it's done. If it is tentative, it means we decided on a block, but the decision might still change. Mainly, Algorand can produce a tentative result if the network behaves asynchronously.

Dandelion improves Algorand with the following features. First of all, Dandelion elects multiple leaders, like Algorand, which already elects multiple leaders but doesn't fully utilize them. In Dandelion, the cost of leadership, both computation and communication, is distributed. Dandelion creates transaction hash buckets: each transaction is put in a hash bucket in a deterministic way, and Dandelion assigns a transaction hash bucket to each leader, again deterministically, using the result of the sortition. At the end of this phase, the elected leaders submit disjoint blocks, so eventually we can merge these disjoint blocks to create a bigger block.

Here you see the blockchain structure of Dandelion. In our system, we have two kinds of blocks: macro blocks and micro blocks. Macro blocks are logical blocks: they are not disseminated but locally created from the decided micro blocks. Micro blocks are the real blocks submitted by the distinct leaders, and they contain the transactions. In Dandelion, each micro block points to a previous micro block, as you see in the figure. Dandelion has an important system parameter, the concurrency level (CL). In this figure, the concurrency level is set to three: each macro block consists of three micro blocks submitted by distinct leaders.
If the network is not synchronous, if it behaves asynchronously, Dandelion can from time to time end up with, say, two micro blocks in a macro block. In that case, as you see, it is possible to have a macro block with fewer than CL micro blocks. The round structure of Dandelion follows the round structure of Algorand. Nodes run cryptographic sortition; then the elected leaders run the bucket assignment algorithm and submit their disjoint blocks. All nodes in the system wait for a specific time interval, then committee members start voting on the highest-priority set of micro blocks. The important point is this: in Algorand, committee members vote for a single block; in Dandelion, committee members vote for a set of micro blocks. They then run reduction and BinaryBA*, and at the end of these phases the consensus algorithm of Dandelion may return a final or tentative result, as in Algorand.

We implemented Algorand and Dandelion in Golang to show the performance improvement we achieve with Dandelion compared to Algorand. We deployed two sets of experiments on Grid'5000. In the first set, we measured throughput and latency for the normal case, using 10 machines with 1,000 nodes: we deployed 100 nodes per machine, and we measure throughput as the appended data per second. In the scalability experiments, we try to answer whether Dandelion scales like Algorand; in this set of experiments we used up to 100 machines and up to 10,000 nodes. In our experiments, to emulate wide-area network conditions, we capped the bandwidth of each process at 20 megabits per second and added a one-way latency of 50 milliseconds to each channel, which means a round-trip time of 100 milliseconds.
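The final local step, assembling the logical macro block from whatever set of micro blocks the committee decided on, can be sketched as below. The types are illustrative, and ordering the micro blocks by bucket index is my assumption for the sketch; any deterministic order shared by all nodes would serve, since nothing extra is disseminated.

```go
package main

import (
	"fmt"
	"sort"
)

// MicroBlock is a toy decided micro block: the bucket it covers plus its
// transactions.
type MicroBlock struct {
	Bucket int
	Txs    []string
}

// assembleMacroBlock builds the logical macro block locally from the
// decided set. If the network behaved asynchronously, the set may hold
// fewer than CL micro blocks; the merge works the same way.
func assembleMacroBlock(decided []MicroBlock) []string {
	sort.Slice(decided, func(i, j int) bool {
		return decided[i].Bucket < decided[j].Bucket
	})
	var txs []string
	for _, mb := range decided {
		txs = append(txs, mb.Txs...)
	}
	return txs
}

func main() {
	decided := []MicroBlock{
		{Bucket: 2, Txs: []string{"tx-e"}},
		{Bucket: 0, Txs: []string{"tx-a", "tx-c"}},
		{Bucket: 1, Txs: []string{"tx-b", "tx-d"}},
	}
	fmt.Println(assembleMacroBlock(decided))
}
```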
In these figures, you see the latency and throughput of our Dandelion and Algorand implementations. On the x-axis, we have the macro block size; on the y-axis, we have latency in the first graph and throughput in the second. We show different CL values: a CL value of one means classical Algorand, and a CL value of two or above means Dandelion. Here is how the CL value works. Consider a 20-megabyte macro block size with a CL value of two: in this case, we are appending two micro blocks of 10 megabytes each, which combine to 20 megabytes. With CL four, each micro block has a size of five megabytes, and combined at the end of the consensus they make 20 megabytes. The red solid line shows the Algorand results: as you see, when we increase the block size, the latency increases linearly. That is also the case for our Dandelion implementation, but the key point is that when we increase the concurrency level, we get much lower latency: the slope is much lower, as you see. With CL 20, we get much lower latency. The throughput results show the same: with Algorand, the throughput stops increasing at some point, but with Dandelion it is possible to keep increasing throughput with increasing block size when we increase the CL value. When we increased the CL value above 16, we didn't see a noticeable performance gain, but up to 16 we measured a four-fold throughput and latency improvement.

Now, to look at the scalability results, basically...

Kadir, sorry to interrupt, but we're right at the 15-minute mark. If you could wrap it up in about 30 seconds, that would be great, and then potentially add a few more pieces of information into the Slack channel. That would be fantastic. Sorry to interrupt.

Yes.
Basically, in the scalability experiments, we ran Dandelion up to 10,000 nodes using 100 machines, and we measured the increase in round duration and the throughput degradation. As you see, Dandelion and Algorand scale in the same manner. As a result, with the Dandelion technique we provide a four-fold improvement in latency and throughput on top of Algorand, and Dandelion scales like Algorand. Thank you.