Our next speaker is Michał Król from City, University of London, who will be discussing Shard Scheduler: object placement and migration in sharded account-based blockchains. Thank you, I'll present Shard Scheduler. This is joint work with Onur, Sergi, Alberto, Mustafa and Etienne. For the outline, we'll start with some background on blockchain and sharding, then we'll define the problem of transactions in sharded blockchains, I'll present our system, and we'll finish with some evaluation. So, as you probably know at this point, a blockchain is, well, a chain of blocks, and inside those blocks we have some transactions. The blockchain is maintained by miners: miners receive transactions from users, they verify them, they order them, they pack them into blocks and put them on top of the blockchain. With those transactions, or with the blockchain generally, we maintain some state. We have an initial state defined by the genesis block, and then every single transaction modifies the state. The state is the state of a set of objects, with two main data models for blockchains: either account-based, where we keep track of the balances of users, or a UTXO model, where we keep track of coins, who owns them, and whether they were spent or not. In this talk I'll focus on the account-based model. The blockchain is great, but it has some performance limitations: as we all know, there is a problem with low throughput and also high user-perceived latency. And this is because of global consensus: every single miner has to agree on a common state, so we have to disseminate transaction blocks to everyone, we have to store every single block, and every single miner has to validate every transaction. That's why sharding was proposed as one solution to this problem. In sharding, we basically split our blockchain into groups, so that every single shard has its own chain of blocks, its own miners, its own transactions, and holds a subset of the state.
And the hope here is that if one chain can support a specific amount of transactions, then by adding more chains we can hopefully scale the blockchain almost arbitrarily. Quite often, with a sharded blockchain we also have what we call the beacon chain or the reference chain; this is just a shard that is responsible for coordination of the entire system. The beacon chain will usually store the block headers of every single block produced by every single shard, for synchronization, but it will also be responsible for the assignment of miners to shards, as we saw during the previous talk. In blockchains we have this assumption of an honest majority of miners, but here we also have to make sure that in every single shard we have an honest majority as well. So usually the beacon chain will generate some randomness and, based on this randomness, will assign miners to shards in an unpredictable way. The beacon chain will also periodically migrate miners across shards to compensate for miners joining and leaving the network. Now, if we consider transactions in the account-based model, let's start with a simple transaction: we have Alice here willing to pay Bob a specific amount of money. To implement it in the account-based model, we first of all need to decrease the balance of Alice, then we have to increase the balance of Bob by the same amount, and we also have to make sure that this whole operation is atomic, because we don't want to make money disappear or create new money out of thin air.
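As a minimal sketch of this (not the authors' code, just the idea of the atomic balance update), the transfer either applies both sides or neither:

```python
# Minimal sketch: an atomic account-based transfer.
# Balances live in a plain dict; names are illustrative.

def transfer(balances, sender, receiver, amount):
    """Move `amount` from sender to receiver, all-or-nothing."""
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")  # abort: nothing applied
    # Both updates happen together, so no money appears or disappears.
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount

balances = {"alice": 30, "bob": 5}
transfer(balances, "alice", "bob", 10)
# total supply is unchanged: 30 + 5 == 20 + 15
```

Within a single shard this is trivial; the difficulty in the next part is keeping exactly this atomicity when the two balance updates live on different shards.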
Now, if we want to perform the same transaction in a sharded blockchain and Alice and Bob are in the same shard, this is easy: we just need one transaction, we put it in the shard where they're located, and it's done exactly as with a single chain. However, when Alice wants to send something to Charlie, who is in a different shard, it's slightly more complicated, because now we need one transaction in shard one to decrease the balance of Alice, another one in shard two to increase the balance of Charlie, and we also need some synchronization between the two shards to make sure everything is atomic. And it gets even more complicated with smart contracts. Consider this smart contract here: a simple procedure with a list of users to pay and an amount we want to pay every single user. The function just iterates over the users, and if a user has less than 10 coins, we pay them some money. Again, if all the users from this list are in the same shard, this is easy: we need a single transaction calling the contract, and it's done. On the other hand, if we have, let's say, 100 users spread across 50 shards, now we need at least one transaction in every single shard to update the balances, and also some global coordination between all those shards and all the miners involved in them. So the bottom line here is that cross-shard transactions are costly, and they can get significantly more expensive with smart contracts. That makes object placement in a sharded environment a very important question: how do we assign objects to shards? Because that can determine the final performance of the blockchain. Currently, a hash-based approach is used: we basically divide the whole hash space among the shards, and then, based on the account ID, we just assign accounts to shards.
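The hash-based placement can be sketched in a few lines (the hash function and names here are assumptions, not the exact scheme of any particular chain):

```python
# Minimal sketch: hash-based account placement.
# Every node maps an account ID to a shard by hashing it,
# so placement is deterministic and verifiable by anyone.
import hashlib

def shard_of(account_id: str, num_shards: int) -> int:
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest, "big") % num_shards

# Same answer on every node, with no extra state to sync:
s = shard_of("alice", 4)
```

The scheme is simple and statistically load-balanced, but, as the talk notes next, it is completely blind to which accounts actually transact with each other.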
This works well: it's simple and it's verifiable, because everyone can just take an account ID, hash it, and determine to which shard the specific account belongs. And it gives good long-term balance, because if we assign accounts to shards randomly, we can assume a uniform load distribution in the long run. On the other hand, we have some classic problems from distributed systems: there is no data locality, so we might have two accounts that communicate very often but sit in different shards and then generate a lot of cross-shard transactions. And we also cannot adapt to any short-term load spikes, because we just cannot move the accounts. So we asked whether we can migrate accounts on the fly across shards to improve performance and eliminate those problems. Our system observes the interactions between accounts and the load across all the shards, and then outputs account migration recommendations, suggesting, for instance, that this account should be migrated to this shard at this point in time. For that, we enhance the state of every single account with what we call an alignment vector. An alignment vector basically tells us how many transactions each account had with every single shard. In this case, we see that Alice had three transactions with shard one, one transaction with shard two, and zero transactions with shard three. Now, when Alice sends a transaction to Charlie in shard two, we will increase her alignment towards shard two by one, and for Charlie we will do the same but with shard one. On top of that, we enhance the beacon chain with some load statistics, so that we have information about the load of every single shard in the network. It can be expressed either as the total size of transactions, as in Bitcoin, or as complexity expressed in gas, as in Ethereum. Now, having this information, Shard Scheduler kicks in when it encounters a cross-shard transaction.
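The alignment-vector bookkeeping described above can be sketched like this (a simplified illustration; the data structures and names are mine, not the paper's):

```python
# Minimal sketch: alignment vectors count, per account, how many
# transactions it had with each shard.
from collections import defaultdict

# account -> shard -> number of transactions with that shard
alignment = defaultdict(lambda: defaultdict(int))

def record_tx(sender, sender_shard, receiver, receiver_shard):
    # Each party's alignment grows towards the other party's shard.
    alignment[sender][receiver_shard] += 1
    alignment[receiver][sender_shard] += 1

# Alice (in shard 1) pays Charlie (in shard 2):
record_tx("alice", 1, "charlie", 2)
# alignment["alice"][2] and alignment["charlie"][1] are now both 1
```

Note that an intra-shard payment simply increments both accounts' alignment towards their shared shard, which is what makes "strong home community" detectable later.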
First of all, we extract the list of every single shard involved in this transaction, and then, based on the load statistics that we have on the beacon chain, we choose the main shard for this transaction, which will be the least loaded shard among all the shards involved in the transaction. And for every account modified by the transaction but not located in the main shard, we ask whether it should be migrated to the main shard. We answer this question by looking at the alignment vector we introduced before. The high-level idea here is that if there is a strong alignment from an account towards the shard where it's currently located, we are not likely to move it, because that suggests there is a strong community within the shard and we don't want to break it. And if the alignment is not that strong, and the account also has alignment towards other shards, we are more likely to move the account for load balancing. So with this simple mechanism we achieve load balancing, because the main shard is chosen as the least loaded shard among all the involved shards. We preserve existing communities using the alignment vector. And we also minimize the number of migrations, because we consider a migration only during a cross-shard transaction, which means we will never, for instance, move an inactive account. We also introduce a migration threshold, just to prevent accounts from flip-flopping from one shard to another. Okay, so now we have those migrations, and we know they will improve the performance, so the question is how do we enforce them. For multiple reasons that we list in the paper, we decided that miners should be the ones enforcing those decisions, as part of the consensus protocol. We can do this because all the decisions are based only on on-chain data and can be verified by everyone in the network.
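The decision logic just described might be sketched as follows; this is a simplified stand-in, assuming only what the talk states (least-loaded main shard, alignment comparison, and a flip-flop threshold), not the paper's exact scoring formula:

```python
# Minimal sketch: choosing the main shard and deciding on a migration.

def main_shard(involved_shards, load):
    # Least loaded shard among those touched by the transaction.
    return min(involved_shards, key=lambda s: load[s])

def should_migrate(account_alignment, current_shard, target_shard,
                   threshold=2):
    """Move only if the pull towards the target clearly beats the
    account's attachment to its current community."""
    home = account_alignment.get(current_shard, 0)
    away = account_alignment.get(target_shard, 0)
    # The threshold prevents accounts from flip-flopping between shards.
    return away - home >= threshold

load = {1: 120, 2: 40, 3: 90}
target = main_shard([1, 2], load)          # shard 2: less loaded of the two
should_migrate({1: 3, 2: 1}, 1, target)    # strong home community: stays put
```

Because migrations are only ever evaluated during a cross-shard transaction, an account with no traffic is never considered at all, which keeps the migration count low.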
So basically, enforcing those migrations is part of transaction processing: a transaction is considered valid only if it also contains the requested migrations of the accounts. Now there's still a problem. Consider this case: we have Alice again sending a transaction to Charlie in a different shard, and let's assume that the Shard Scheduler recommendation is to move Alice from shard one to shard two. If you're a miner in shard one, this is problematic for you, because your revenue is based on the amount of transactions in your shard, and if you move accounts away, you will potentially earn less in the long run, so you don't want to do this. This is a problem because even if we make enforcing the migration part of the consensus, the miners are not incentivized to follow the consensus, because they can earn more money by doing something different. That's why we also propose a new economic model for sharded blockchains, basically to fix this problem and to align the rewards that every single miner gets with the well-being of the entire blockchain, rather than with the shard they're currently assigned to. For this, we use the mechanism of miner migration. You may recall from the beginning that miners are migrated by the beacon chain across shards over time, to compensate for network dynamics. So what we do: for every single epoch, we calculate the total amount of fees collected by every single shard. In this case, we see that shard one collected 100 coins in epoch one. We then calculate the participation of every single miner, so for which part of this money each miner was responsible. Here our miner was responsible for 20% of this amount. At this point the miner does not cash out this amount; instead, we wait until the miner is migrated in the next epoch. Again, this is unpredictable. So our miner goes to shard two.
And at this point it will collect 20%, its participation from before, but of the amount gathered by this new shard in the previous epoch, so 50 here. And because those rotations are unpredictable, we prove in the paper that this motivates miners to follow the migrations we implemented and to care about the throughput of the entire blockchain rather than focusing on a specific shard. So, just a few results to finish. We implemented the whole system as a Python simulator, and we first compared it against the hash-based approach that you've seen before, and against METIS. METIS is a graph partitioning algorithm: we fed it all the future transactions as a graph of interactions between accounts, and then we told METIS to split those accounts into shards in an optimal way, so that it keeps all the communities but also tries to perform some load balancing. But even against this, we observed that Shard Scheduler improves the throughput by more than three times for sixty-six shards. This is because all the previous approaches have a fixed association, so they cannot adapt to new communities being created or disbanded, or to short-term load spikes. With the higher throughput, we also lower the average latency by up to 70%. We also implemented Shard Scheduler on top of Chainspace; Chainspace is a sharded environment with support for smart contracts. We implemented our system on top of it, deployed it on Amazon AWS, and again we observed much higher throughput and reduced user-perceived latency. To conclude, Shard Scheduler is a migration recommendation system that is fully deterministic, based uniquely on on-chain data, and can be part of the consensus protocol. All its operations are very lightweight, so we don't introduce any significant overhead on top of regular transaction processing.
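Using the numbers from the talk, the deferred reward works out like this (a sketch of the mechanism as described, with illustrative function names):

```python
# Minimal sketch: a miner's reward is its fee participation earned in
# the previous shard, applied to the fees of the shard it is
# unpredictably rotated into at the next epoch.

def deferred_reward(participation, new_shard_fees_prev_epoch):
    return participation * new_shard_fees_prev_epoch

# Epoch 1: our miner produced 20% of shard 1's 100 coins in fees.
participation = 20 / 100                      # 0.2
# Epoch 2: it is rotated into shard 2, which collected 50 coins.
reward = deferred_reward(participation, 50)   # 0.2 * 50 = 10 coins
```

Since a miner cannot predict which shard it will land in, pushing accounts away from its current shard no longer hurts its expected payout; only the fee total of the whole system matters.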
We also propose a new economic system that binds the rewards of every single miner to the throughput of the entire blockchain rather than to a specific shard, which basically incentivizes honest nodes to remain honest and follow the consensus protocol. As for performance, we observe up to a three-times throughput increase and up to a 70% reduction in user-perceived latency. And I think this is it. Thank you.