Hello, my name is Mikhail Kalinin, I'm from the Harmony team. First of all, I'd like to credit Alex Vlasov, who has been investigating these aggregation complexities with me.

Okay, let's move on. Here are the attestation votes; you are already familiar with them from yesterday's beacon chain technical introduction. We have crosslinks to shards, we have finality checkpoints, and we have a fork choice vote as well. So the attestation is a very important thing for the beacon chain: this is how the beacon chain makes consensus happen.

Let's see how it progresses through a slot. At the beginning of the slot, a proposer produces a block and sends it to the wire, so all validators can receive it. In the middle of the slot, committees start to produce their attestations. Strictly speaking, this doesn't have to happen exactly in the middle of the slot: a validator may produce an attestation as soon as it receives the block and imports it into its chain. After that, attesters should deliver their attestations to the proposer of the next block to be included on chain, and they are eager to do this because their rewards depend on it. If the proposer of the next slot for some reason fails to include some of those attestations, those attestations should be delivered to the proposer of the slot after the next one, and so forth, up to 64 slots ahead.

Okay, so what does a proposer do with those attestations? It aggregates them and includes them in the block body. Aggregation reduces verification complexity: instead of verifying hundreds of signatures, in the best case you verify only one signature per committee. It also reduces the size demand on the block body. In this chart, the y-axis is megabytes of data and the x-axis is millions of ETH at stake; the green line is the aggregated case and the orange line is the non-aggregated case, and you can see how aggregation saves our lives.

Okay, let's talk about attestation delivery. How do we deliver those attestations to a proposer? We could do direct delivery: just send the attestations directly to, let's say, five or ten proposers of the next blocks, whatever number we find reasonable. This way is very efficient and fast, but it reveals the proposer's identity. An adversary could match a proposer's public key to its network ID and organize a coordinated attack on that particular proposer to kick it off the network. So we are not doing it that way. Instead, we use a P2P overlay that includes all validators of the beacon chain and gossip the attestations through this overlay. Everybody is happy: all proposers receive the attestations, and nobody knows who the proposers are. This P2P overlay is big enough that, even though everybody knows there are validators in it, it is hard to organize an attack on it: there are just a lot of nodes, and nobody knows who the proposer of the next slot is or who the participants of a committee are, because identities are not revealed on the network.
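To make the aggregation step above concrete, here is a minimal Python sketch of how a proposer might merge attestations for the same data before putting them into a block: attestations voting for identical data are merged by OR-ing their aggregation bits and aggregating their BLS signatures, so in the best case one signature per committee remains. All names are illustrative, and `bls_aggregate` is only a placeholder for real BLS signature aggregation; the actual spec logic differs in detail.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Attestation:
    data_root: bytes               # hash of the AttestationData (slot, source, target, crosslink, ...)
    aggregation_bits: List[bool]   # which committee members have signed
    signature: bytes               # a single or already-aggregated BLS signature


def bls_aggregate(signatures: List[bytes]) -> bytes:
    # Placeholder only: real aggregation is point addition on the BLS12-381 curve.
    return b"|".join(signatures)


def disjoint(a: List[bool], b: List[bool]) -> bool:
    """True if no committee member appears in both participation bitfields."""
    return not any(x and y for x, y in zip(a, b))


def aggregate_for_block(pool: List[Attestation]) -> List[Attestation]:
    """Greedily merge attestations with identical data and disjoint signer sets."""
    by_data: Dict[bytes, List[Attestation]] = {}
    for att in pool:
        by_data.setdefault(att.data_root, []).append(att)

    result: List[Attestation] = []
    for data_root, group in by_data.items():
        aggregates: List[Attestation] = []
        for att in group:
            for agg in aggregates:
                if disjoint(agg.aggregation_bits, att.aggregation_bits):
                    # Merge: OR the participation bits, aggregate the signatures.
                    agg.aggregation_bits = [x or y for x, y in
                                            zip(agg.aggregation_bits, att.aggregation_bits)]
                    agg.signature = bls_aggregate([agg.signature, att.signature])
                    break
            else:
                # Overlapping signers: keep as a separate aggregate.
                aggregates.append(Attestation(data_root,
                                              list(att.aggregation_bits),
                                              att.signature))
        result.extend(aggregates)
    return result
```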
Gossiping in this P2P overlay, however, is prone to Byzantine failures. To cope with Byzantine failures we have to keep the degree of the P2P mesh at a certain level, and this brings redundancy. The degree is simply the number of peers your node is connected to. Say your node is connected to 10 peers: you receive the same attestation message from five of them and propagate it to the other five, so we have redundancy by a factor of five, and this redundancy affects both inbound and outbound traffic. Nodes must also verify attestations to prevent epidemic attacks, where the network could be flooded with invalid attestations. At scale, this places a high demand on resources.

These numbers correspond to different redundancy factors of the P2P mesh; triple redundancy is the blue line. In phase 0 we are probably okay, somewhere around 10 million ETH validating, but as we approach 50 million ETH we are looking at something like 50 megabits per second as a requirement. This is calculated for six seconds per slot.

What do we do about this? Instead of sending single attestations to this P2P mesh, we want to partially aggregate them in relatively small overlays, and only after that send them to the beacon attestation topic that everybody is subscribed to. This is where aggregating overlays come into play.

So what options do we have for those overlays? We can build them for each epoch, because we know what the committees of an epoch are 64 slots beforehand, which is about six minutes, so we could probably just build them per epoch. Or there is another option: use shard subnets, which are like natural overlays for Eth 2.0.

The requirements for such an overlay are the following. First, it should be relatively small and efficient enough to do this partial aggregation within a fraction of a slot, because we still need to send the aggregates to the main P2P mesh, so we are very restricted in time. It should be incentivized: the participants of these overlays should be incentivized to do the aggregation, and the only ones who are incentivized are the committees that produce those attestations. It should be Byzantine tolerant: being relatively small, it should be tolerant to Sybil and eclipse attacks. And if we want to build an overlay for each epoch, we have about six and a half minutes to do that, so this is the time restriction.

Okay, so per-epoch overlays are very efficient: you can build such an overlay with whatever topology and properties you want, and it can then be leveraged by your aggregation strategy. The drawback is that you need a reliable source of peers within a small amount of time, and topic discovery will probably not help here. If you don't have a reliable source of peers, your overlay will be prone to Sybil and eclipse attacks. That is the very difficult part of building a per-epoch overlay. Alternatively, we can use shard subnets; they already exist.
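The bandwidth concern above can be reproduced with a quick back-of-envelope calculation. The constants below (attestation size, redundancy factor, epoch length) are assumptions chosen only to illustrate the shape of the curve, not the exact inputs behind the chart shown in the talk.

```python
ETH_PER_VALIDATOR = 32
SLOT_TIME_SEC = 6             # slot time used in the talk's calculation
SLOTS_PER_EPOCH = 64          # each validator attests once per epoch (assumed)
ATTESTATION_SIZE_BYTES = 500  # assumed size of one unaggregated attestation
REDUNDANCY = 3                # the "triple redundancy" mesh factor


def gossip_bandwidth_mbit(eth_at_stake: float) -> float:
    """Rough per-node bandwidth for gossiping every unaggregated attestation."""
    validators = eth_at_stake / ETH_PER_VALIDATOR
    attestations_per_sec = validators / (SLOTS_PER_EPOCH * SLOT_TIME_SEC)
    bytes_per_sec = attestations_per_sec * ATTESTATION_SIZE_BYTES * REDUNDANCY
    return bytes_per_sec * 8 / 1e6


for eth in (10e6, 30e6, 50e6):
    print(f"{eth/1e6:.0f}M ETH at stake -> ~{gossip_bandwidth_mbit(eth):.1f} Mbit/s per node")
```

With these assumed numbers the estimate lands near 10 Mbit/s at 10 million ETH and close to 50 Mbit/s at 50 million ETH, which is the order of magnitude the talk describes.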
They are relatively big, longer-lived network units, and hence resistant to attacks like eclipse and Sybil. But the problem is that one committee is spread across all the shard subnets. Say you have ten validators of one committee: one in shard subnet number one, another in shard subnet number five, and so forth, and they just can't talk to each other. That is the problem for the aggregation strategies.

So, two strategies. One is coordinated aggregation, like Handel, which you have probably heard about. It is close to optimal: you can get clean aggregates without overlapping, with lower network demand. But these kinds of strategies require the overlays to be built per epoch, and we have that difficulty with a reliable source of peers. They are also not P2P friendly: there has to be a direct connection between validators, and that's why we get back to the anonymity problem again.

Or we can use shard subnets with a gossip-based strategy. Everything is basically ready for it: you can send attestations within one subnet and build the aggregates there, so the committee that produced those attestations can aggregate them (a rough sketch of this idea follows below). But it requires higher network demand, because we are working in a P2P mesh, and it is not obvious what the threshold for aggregation is, that is, when an aggregate is ready to be sent to the beacon attestation topic. That is probably the problem here, and it is amplified by the committee being spread across different shard subnets: you don't know how many committee participants are in your shard subnet, so you don't know how big an aggregate you can combine with the resources that you have.

So let's summarize. We have to use a P2P overlay to deliver attestations to a proposer, and at scale it puts big requirements on bandwidth. We are considering partial aggregation in smaller overlays as a solution, but it has a number of complexities. There is also another possible solution, onion routing and similar techniques, but they bring their own complexity as well, so we need to keep investigating. Phase 0, and probably phase 1, is not affected, because it is not expected to have a big amount of ETH validating.

So thank you so much. I'm done.
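As a rough illustration of the gossip-based subnet strategy and its open threshold question, here is a sketch of a committee member merging disjoint aggregates it hears on its shard subnet and deciding when to publish to the global beacon attestation topic. The threshold value, the deadline, and all names are assumptions for illustration only; choosing the threshold when you don't know how much of the committee shares your subnet is exactly the open problem described above.

```python
from typing import Optional, Set


class SubnetAggregator:
    """Partial aggregation of one committee's attestations within a shard subnet."""

    def __init__(self, committee_size: int, threshold: float = 0.5):
        self.committee_size = committee_size
        self.threshold = threshold          # assumed publish threshold (open question)
        self.signers: Set[int] = set()      # committee indices covered so far
        self.signature: Optional[bytes] = None

    def on_gossip(self, signers: Set[int], signature: bytes) -> None:
        """Merge an incoming (possibly aggregated) attestation if signers are disjoint."""
        if signers & self.signers:
            return                          # overlapping signers: skip the naive merge
        self.signature = (signature if self.signature is None
                          else self._bls_aggregate(self.signature, signature))
        self.signers |= signers

    def ready_to_publish(self, seconds_left_in_slot: float) -> bool:
        """Publish once enough of the committee is covered, or the slot is nearly over."""
        coverage = len(self.signers) / self.committee_size
        return coverage >= self.threshold or seconds_left_in_slot < 2.0

    @staticmethod
    def _bls_aggregate(a: bytes, b: bytes) -> bytes:
        # Placeholder only: real aggregation is point addition on BLS12-381.
        return a + b
```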