Good. Thank you all for coming. Myself, Hsiao-Wei, Chih-Cheng, Dankrad, Justin, and Vitalik, who is apparently on this floor, will today give you a deep dive into the phase zero specification: that is, the beacon chain of the Ethereum 2.0 protocol. Today we will get as technical as we can with the time permitting, and at the same time point you to portions of the spec so you can dive in deeper, and hopefully get you more acquainted with what's going on, so that you can better technically understand the problems at hand and the solutions, and can dig in and contribute and all that. So, we're building Ethereum 2.0 as a sharded protocol. There are many shards connected to a central beacon chain. It also has a loose coupling to the existing Ethereum chain, which at the beginning just supports deposits coming into the beacon chain. Phase zero is just this pure proof-of-stake chain connected to the Ethereum 1.0 protocol, and that is what we'll talk about today. We'll at least show you the scaffolding upon which shard chains will be connected, but we will not dive deep into that portion of the protocol. We're going to start off today by looking at some of the core building blocks of consensus: the Casper FFG protocol, the LMD GHOST fork choice, randomness, and BLS signatures. After that, we'll take a short break, maybe take some questions, and then we'll dive into the mechanics and concrete instantiation of the protocol: the actual state transition, validators, and the things built on top of these components. So, FFG applied to proof of stake.
So there was a paper written probably about a year and a half ago now by Vitalik and Virgil Griffith on the Casper FFG consensus protocol, generally built to be layered on top of a block proposal mechanism, the original being a proof-of-work mechanism. There are some modifications to bring this protocol on top of a proof-of-stake protocol, and more importantly, some modifications that get us to where we need to go for this protocol. So I'm going to go over some of those modifications: how we think about slots, checkpoints, epochs, slashing, finality, that kind of stuff. I will give you an intuition for the safety proof, but we'll not have time to dig deep into that. We have a modification of the FFG protocol in a paper. It is a draft. I really wish it was going to be done by today; it's in the last round of edits and will be released on arXiv very soon. So hopefully this will help you read that paper when it comes out. Traditionally, when we think about blockchain protocols, we think about blocks and block heights, and every block you build on top of another block moves us to another block height. The original FFG protocol considered block heights and the actions that validators could take with respect to block heights. These heights were divided into epochs, and units of work would be done per epoch: essentially, a round of voting by all validators can happen per epoch. What you can and cannot do in the protocol is also defined within these epochs. We have a slight modification to how we think about a chain being built. We still have blocks linked to each other, block by block, building a blockchain, but we also have a notion of time called a slot overlaid on this structure. And so what can or cannot be done by a validator at any given time is with respect to a slot: my duty to propose, or to attest, or to send any of these messages, is with respect to a slot.
And so someone shows up and proposes this at slot zero, this at slot one. No one showed up at slot two, or maybe the block didn't get propagated to the network, and so the proposer at slot three built on top of the block at slot one. But the state transition, the internal mechanics of the consensus protocol, and other portions of the protocol are aware of this skip. If things were divided strictly into block heights, this would just be a block height of three, although much more has transpired with respect to the duties and positions of the protocol. So instead, epochs are divided into slots. Epochs in actuality are on the order of 64 slots; this is just illustrative. So this consensus protocol, FFG, needs to be modified to work on top of this new slot and epoch mechanism. Checkpoints. In Casper FFG, we attempt to finalize checkpoints. Checkpoints are blocks at regular intervals of time: here the checkpoint might be this block, whereas there it might be that one. But importantly, because we have this notion of skipped slots, we have to define what a block being at a checkpoint actually is and means. In the paper coming out, we call it the epoch boundary block, or EBB. In this fork of the chain, the epoch boundary block, the block at the zeroth slot of that epoch, is actually B. But in this fork of the chain, where a block at slot 66 was built on top of A, the epoch boundary block is the block at the zeroth slot or the most immediate block prior, so it's actually A. So in this fork of the chain, if votes are happening and blocks are justified, if something was checkpointed or finalized, it would actually be A at this slot, because A can be transitioned through empty slots up until B exists. You're actually checkpointing, and ultimately finalizing, this tuple of A and the start of this epoch.
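The walk-back just described can be sketched in a few lines of Python. This is illustrative only, with a hypothetical minimal block representation; the names `Block` and `epoch_boundary_block` are not from the spec.

```python
SLOTS_PER_EPOCH = 64  # illustrative value from the talk

class Block:
    """Hypothetical minimal block: each block knows its slot and its
    parent, with parent=None for genesis."""
    def __init__(self, slot, parent=None):
        self.slot = slot
        self.parent = parent

def epoch_boundary_block(head, epoch):
    """Walk back from `head` to the epoch boundary block (EBB) for
    `epoch`: the block at the epoch's first slot, or, if that slot was
    skipped, the most immediate block prior."""
    boundary_slot = epoch * SLOTS_PER_EPOCH
    block = head
    while block.parent is not None and block.slot > boundary_slot:
        block = block.parent
    return block
```

For the example from the talk: if A sits at slot 63 and a block at slot 66 was built directly on A, then the EBB for epoch 1 is A, because A is transitioned through the empty slots up to the boundary.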
To illustrate that a little bit more, what I was implying is that we have this notion of paired justification and paired finality. In the optimal case, where blocks exist at all slots, we're always justifying and finalizing the block at the zeroth slot of the epoch. But because we can have skipped slots and dropped slots, we have to have a notion of what we're actually finalizing at a given epoch, and it turns out we're finalizing this paired justification of a block at an epoch. So just to illustrate that a little bit more: we have a block at epoch one's zeroth slot, slot 64. But then nothing happens, like everyone goes offline or some massive forking happens, and the next proposer, actually in epoch two at slot 129, builds on top of this block at slot 64. And so we go on in epoch two to actually justify this epoch boundary, which ended up being a block from a prior epoch but paired with this epoch. And then we can go on to do further justifications and actually finalize this. What that does is, finality in this mechanism means not only will this block at this epoch never revert, but blocks built on top of it at slots lower than this epoch boundary slot are also not valid. So for example, if someone built a block right here at slot 65 or 66, but this was finalized, those blocks are considered invalid and I don't consider them part of my fork choice. So we have this notion of coupling blocks with an epoch in finality. Justification rules. Similarly to the original Casper FFG protocol, the genesis block at epoch zero is justified, and subsequently, again with these block-and-epoch pairs, any justification pair reached from a source that is a prior justified pair is justified. And when votes are cast, we're always specifying a source and a target.
So here, in this link that was created, the prior justification was the source and this new one is the target. So we create a justified link, I think it's called a supermajority link, to create this chain of justified blocks, and a subset of them, depending on the rules, can be finalized. So, finality rules. I know this isn't actually super meaningful; this is taken from the paper. The finality rule in the original Casper FFG paper was essentially that you had to have two epochs that were sequentially justified, where the lower of the two becomes finalized. Here we actually extend the finality rules to include a few more cases, where we generalize it. The original case that I just described is called the k equals one case, where we are sequentially justifying. But we can generalize this to k equals n cases, such that the rule becomes: if we have a justification link, and all epochs contained within that justification link are also justified, then we can finalize the source of the link. So here we jump over one, but we justify the one in the center, so we can finalize the source. Here we jump over three, but we had justifications in the center, and so we can finalize the base. And the intuition for justification and finality here is that we can't double vote to try to finalize something. And if we wanted to essentially skirt the double vote and jump over, we can't, because, and I'll get to this in a moment, we can't surround a vote to essentially jump over. And here we've plugged the holes: we have a similar mechanism, we've plugged the holes, and now we just need to prevent surrounds to avoid that finality issue. In the actual Ethereum 2 protocol, we only consider cases up to k equals two, and the reason we need this is that we allow attestations to be included on chain during the epoch in which they're created, up through the next epoch.
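The two slashable conditions just mentioned, the double vote and the surround vote, can be sketched roughly like this. This is a simplified version of the spec's check, tracking epochs only; the real `AttestationData` carries block roots as well.

```python
from dataclasses import dataclass

@dataclass
class AttestationData:
    """Simplified attestation data: just the source and target epochs."""
    source_epoch: int
    target_epoch: int

def is_slashable(data_1, data_2):
    # Double vote: two distinct attestations with the same target epoch.
    double_vote = data_1 != data_2 and data_1.target_epoch == data_2.target_epoch
    # Surround vote: data_1's link strictly surrounds data_2's link.
    surround_vote = (data_1.source_epoch < data_2.source_epoch
                     and data_2.target_epoch < data_1.target_epoch)
    return double_vote or surround_vote
```

A surround is exactly the "come from something earlier and jump over" case: the outer vote's source is earlier and its target is later than the inner vote's.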
And so, if things are not performing super optimally and attestations aren't being included each epoch every time, but are included a little bit delayed, the state of the chain and the on-chain view of what is actually justified can be delayed. And so we have these cases where, if we open up, here are our k equals one cases, case two and case four, those are nice. But here we have these other k equals two cases where, by extending the finality rules, we actually can capture finality in a few more cases. I think Justin will probably get into this a little bit later, but in the beacon state, these are the portions of the state that are related to finality. We track the last four epochs, whether we justified them, and a few things about the checkpoints, and with that data we can track these cases and compute finality on chain. Here's this massive, nasty function that processes justifications and also processes finality based on those mechanisms. I was kind of alluding to these, and I probably should have had these earlier in the slides, but the things that keep this protocol safe are that we prevent double attestations, so voting twice for a target within one epoch; and, as I said earlier, given any source-target link, to prevent essentially getting around the no-double-vote rule, we enforce a no-surround rule: I can't come from something earlier, jump over, and then begin finalizing. Here's the actual code, which we can probably get into a little bit later, where we take attestation data: it can't have the same target epoch, and it can't surround. That's that for now. Sorry that was so fast; we have a lot to cover today and that's one of the more detailed components. Vitalik? All right, so I am going to be talking about, as you can guess, the LMD GHOST fork choice rule. So to start off: what is LMD GHOST?
It's an adaptation of GHOST, aka Greedy Heaviest-Observed SubTree, which is an alternative proof-of-work fork choice rule that some academics, Yonatan Sompolinsky and Aviv Zohar, developed in 2014. It basically takes the same principles as the original GHOST and tries to modify them slightly to fit a proof-of-stake context. So to start, a quick intro to what GHOST itself is. Basically, the idea here is that if you imagine a network where there's normal network latency but blocks are very fast, say network latency is one second and you have a block coming every three seconds, then maybe something like a quarter of all blocks are not going to be conveniently in the same chain, because you might have a block get produced and, before that block gets broadcast, another block gets created at the same time. In Bitcoin they call these orphans or stales, and in Ethereum we call them uncles. The reason why this is bad is that if you imagine there's an attacker, and the attacker is trying to do a 51% attack, so making an attack chain that's longer than the honest chain after some point, then the attacker has an advantage, because the fork choice looks for the longest chain. On the honest chain, one out of every four blocks is not lengthening the chain; it's a sister of some other block. But on the attack chain it's just the attacker, and so everything works perfectly. So instead of needing 51% of the hash power, the attacker might only need 43% of the hash power, and if network latency goes higher, the percentage drops more and more, and as network latency approaches infinity, the attacker can do a 51% attack with basically nothing.
So GHOST fixes this, and the philosophy behind it is this: imagine a chain where block D was built on top of block B, but then the chain went through block C, and block E ended up winning. If you look at block D, block D is ultimately still a vote for B, right? It may not be part of the canonical chain, but D is still voting for B: whoever built D still thought that B was a good block to build a chain on. And so really you should be taking into account both D and E as blocks that support B's rightful position as part of the chain. So the way that GHOST works is basically, instead of looking at the longest chain, you run an iterative process where you start from the root. If a block only has one child, you walk over to the child. If a block has multiple children, you select the child whose tree of descendants is larger. So over here this block has no descendants, so a total of one including itself; over here this block has one, two, three, four, five, six descendants, and so you go over here, then you walk over here, then you have another fork, then this is the heaviest subtree again, and so E is the head. Now, in this particular case the longest chain rule and GHOST agree, but there are going to be many theoretical cases, especially when there's an active attack being attempted, where these extra blocks really do matter and can save you from an attack. Now, the way that LMD GHOST applies this to a proof-of-stake context is basically, to start off, very similar, except instead of looking at all blocks, it only looks at the block that is the most recent message submitted by each validator.
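The walk just described can be sketched in Python. This is an illustrative toy, not spec code: blocks are plain strings, `children` describes the block tree, and `latest_votes` keeps only each validator's most recent message.

```python
def lmd_ghost_head(root, children, latest_votes):
    """children: block -> list of child blocks.
    latest_votes: validator -> the block their latest message supports.
    Walk down from `root`, at each fork taking the child whose subtree
    contains the most latest messages."""
    def subtree_weight(block):
        # Count latest messages for this block, plus all its descendants.
        weight = sum(1 for b in latest_votes.values() if b == block)
        return weight + sum(subtree_weight(c) for c in children.get(block, []))

    head = root
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head
```

With plain GHOST you would count every block in the subtree; here only the latest message per validator carries weight.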
So for example, in this case, imagine you have a proof-of-stake chain with five validators; we'll call them Alice, Bob, Charlie, David, and Evan. If you look at Alice, Alice created two blocks, one over here and then one over here; Bob created this and this; Charlie created these two; David created this one, but then David might have dropped offline, so he's got nothing over here, so this is his most recent one; and Evan created two blocks, and that's his most recent one. So what we do first is we only look at the most recent block from each individual validator, which basically means this, this, this, and this, and we run through the exact same process, but only using those five blocks to count toward the weight. So over here you start from the root: one child, go over here; then over here, on this side you have a score of one, on this side you have a score of one, two, three, four, so you go over here; then you go here; and then score of one, score of one, score of two, go here and here, and that's the head. Now, for now we're assuming a simple model where what I call blocks and what I call messages are the same thing. Basically, in this case blocks are serving a dual purpose: one is that blocks are entries in this graph structure, and your fork choice is walking along the blocks; but the other role blocks have is that blocks are voting. In LMD GHOST as we use it, blocks and messages will be split, and I'll get into why we do this later. So, as in GHOST: start from the genesis, walk up the tree, at each branch choose the child that has more latest messages supporting it, and keep going until you find the head. So, why LMD GHOST? Here we're bringing back LMD GHOST as it actually exists in Ethereum 2.0, and what we have is blocks and votes, aka attestations, as two separate concepts.
So you have a block, and then you have five votes; you have a block, or in reality this could be anywhere up to about 50,000 votes; then you have a block, a huge pile of votes, a block, a huge pile of votes. Now, over here you might have two different competing blocks, and this could be because one of the blocks just got delayed and the next proposer created a block and this block appeared, or it could be because the proposer was malicious and created two competing blocks; we don't know. So now everyone who is voting chooses either this side or this side, and if they choose this side, then this block wins, and then you keep on going. So the GHOST fork choice rule is going to be counting these attestations, and specifically it counts latest attestations. So if this is all inside one epoch, then there's no difference, because everyone only votes once per epoch. But if you imagine, say, that this and this are two separate epochs, then maybe this attestation and this attestation come from the same validator, in which case you don't count this one, but you do count this one. Now, in general, if a chain is progressing, then if you vote once over here, the next vote that you make is going to be for a descendant of this block. So most of the time validators aren't changing their opinion; they're extending their opinion. If you made an attestation supporting this block, you're saying my opinion is that this block is the best. If you then later make an attestation over here, saying my opinion is that this block is the best, you're not disagreeing with your opinion from before, right? An opinion that this block is the best is also an opinion that this block is the best, and this block, and this block; but before you did not have an opinion on these guys, and now you do.
But maybe you made an attestation here, then you make this attestation, then you realize this chain wins, and so at some point later you do change your mind. So both of these things are possible. Now, the reason why we do this is that it allows basically parallel confirmations. In probabilistic fork choice rules there's this general concept of confirmations: basically, how many units of information in favor of a block are there, and how many do you want to wait for to achieve a certain degree of safety? In Bitcoin you would wait for six, and that means waiting for six blocks. In Ethereum you would wait for 12 blocks. But here you basically get tens of thousands of attestations happening in parallel, and so you get a very high assurance that a block is overwhelmingly likely to be included in the chain, in the average case, after one single slot. So the goal here is basically to give the same level of security after 10 seconds that a traditional proof-of-work chain would only give after minutes or hours. And because you have messages happening in parallel, there's no way you could even make all of them form a chain if you tried, and so a longest chain rule is not even sensible, and LMD GHOST is the obvious approach for taking into account the information from all of these validators. So, why LMD GHOST? One reason is that longest chain rules cannot take into account information from parallel attesters, and GHOST-based rules do. Another interesting property of LMD GHOST is that the minority can never beat the majority, regardless of how many messages they sign. So for example, suppose you have a structure that looks like this, and you have these four validators that are all on this chain and all agree that this chain is best, and you have this one lone attacker.
Now let's suppose that A, C, D, and E just get knocked offline and disappear. Then if you use traditional GHOST, eventually B could just keep making blocks, and eventually B's chain would be longer and B would win. But in LMD GHOST, if all four of these guys get knocked offline, then B could keep on making blocks for a long time, but this chain is still going to continue to be the winning chain, because it's not about the quantity of messages; it's about the quantity of distinct supporters of one chain versus the other. And if these four validators don't make any new messages, then the system assumes that they're just supporting these four blocks forever. So this insight, this idea that, unlike longest chain rules, LMD GHOST has this mechanism where, if it gets into this configuration, you just can't move over to this other chain pretty much no matter how long B tries, is actually the basis of CBC Casper, which is something that we're interested in switching to for the longer term. So that's a second reason why LMD GHOST is interesting. Now let's look at some edge cases of LMD GHOST, and specifically LMD GHOST's interaction with the finality gadget. So saved message attacks are one example of an edge case. Basically, here's the intuition behind the saved message attack. A validator is allowed to make a maximum of one attestation in each epoch, and the way you enforce that is that every attestation has to come with a tag that says: this is the epoch I come from. And if you sign two distinct attestations with the same epoch tag, then you can get slashed for it. Now, a thing that you can do is say, well, I'm going to just drop offline for n epochs, and now I have n historical tags that are unused, and then within one epoch I can send all of those messages with all of those tags at once.
So worst case, traditional GHOST is not very good at handling this kind of situation. LMD GHOST is better, because at least those n votes do not stack on top of each other. But LMD GHOST is still imperfect, because with this mechanism you have the ability to make the fork choice go back and forth: you can say, I vote for you; now I vote for you; now I vote for you; and you can repeat this a bunch of times in a single epoch. And this could be used for some attacks to delay liveness and delay finality. So a proposed solution here is FMD GHOST, which basically says clients only look at messages tagged with the current or previous epoch, and this prevents saving up more than two epochs' worth of messages from being useful for any kind of attack. Interaction between LMD GHOST and FFG. So we use both LMD GHOST and FFG in a combined way: LMD GHOST provides block-by-block consensus, and FFG provides finality. And you do have to glue these algorithms together. So our actual fork choice rule basically says: first, select the last finalized block you are aware of; at the beginning this is the genesis, and eventually you become aware of new finalized blocks. Second, you select the highest-epoch, most recent justified block that's a descendant of the last finalized block. And then third, starting from the last justified block, you run LMD GHOST to find the head. So it's basically running FFG first to figure out the last justified block, and then running LMD GHOST from there to find the head. Now, this does open room for certain kinds of bounce attacks. Basically, the issue is that you might have a situation where you have one block on one side that's winning the fork choice rule, but then you have some block over here with 65% of the votes.
And then the attacker has a few votes, and the attacker waits until some block here gets to 65%, and then the attacker releases a few votes here; this block becomes justified, and so suddenly the fork choice rule flips over from here to here, and then people build over here. And then when this block starts getting close to 65%, the attacker releases another 5%; now this one is justified, everyone moves over here, and so you can bounce the chain around. And there's a couple of solutions that are intended to mitigate this kind of attack. They all basically have to do with delaying when new justified epochs take effect. One of them would say: in most cases, delay switching until an epoch boundary, so either finalization happens close to the start of an epoch or you wait until the end. Another idea is that a new justified checkpoint can only start being used by the fork choice at certain intervals, and this ensures that there are eventually periods of three epochs within which the fork choice is not going to change, and you have the opportunity to finalize something. LMD GHOST is not really more complicated than this. Basically, you have a block, and if that block has multiple children, to figure out whether you go to one child or the other, you choose the child that has the most validators whose most recent message supports it. And you can use that by itself as a fork choice rule. You have a bunch of validators making these messages in parallel, and each of those messages contributes to the fork choice rule, allowing the chain to soft-converge very quickly. And those messages are also simultaneously votes in FFG, and so after about one epoch the block gets justified, which entrenches it in the fork choice further.
And then after one more epoch, it gets finalized. Hi, I'm Dankrad Feist, and I'm going to talk about randomness in Ethereum 2.0. So basically, I quickly want to summarize why randomness is such an important problem in any kind of proof-of-stake protocol, and in ETH 2.0. I'm going to talk about RANDAO, which is our first and rudimentary source of randomness, and quickly also go into the issues it has and why, in the final protocol in a few years, we're going to use verifiable delay functions to improve this source of randomness. So why is it so important to have a good source of randomness in proof of stake? Well, we need to do several things randomly, and we don't have the proof-of-work randomness that we have in proof-of-work chains anymore. We need to select proposers, we need to select committees, and, by extension, some contracts on chain want to use randomness, and we need to provide randomness for those as well. For each of these, good randomness is required: for proposers because we need to be fair, since we distribute rewards to them, and also because we want to protect against denial-of-service attacks. But it's especially important for the committees that attest to the shard chains, because if the committees are not honest, the beacon chain cannot check the state transitions of all the shard chains; that's the whole point of sharding, that you don't have this huge load on the beacon chain. But that means we have to trust these committees to be honest. We can remedy an incorrect vote by a committee using fraud proofs, but we don't want to have too many of these. And finally, we also want smart contracts to be able to use randomness, and some applications, like lotteries, might attach a huge value to random numbers, and if you can somehow attack them, that might degrade the randomness in the whole system. And now a bit further on the committees and why this is so important.
This is a very central issue. The problem is that we want to minimize the probability of having a dishonest committee, because a bad committee could potentially create a link to an invalid or non-existent block, and a fraud proof would mean that you have to revert the state of the beacon chain back to when that happened. The probability, if you have a committee size of 128, is quite small: 5 times 10 to the minus 15. But this can, of course, completely change as soon as someone can bias the randomness that we're using. So the idea behind RANDAO is: let's say we have n people who want to generate a random number. Everyone goes into a room, everyone contributes one random value x_i, and we compute the xor of all these values. So that sounds like it could generate good randomness, but the problem is that the last player can just change their value after they have seen everyone else's values, and then get whatever they want. So let's improve this. With commit-reveal, we start the same: they all go into a room, and they each commit to their value x_i by telling everyone its hash. Then they reveal their values, and we compute the xor of all those values again. In this case, this cannot be manipulated, because we force everyone to reveal their value, since they're all in the same room. But in the real world, the problem is that anyone can stop this process by not actually revealing their preimage, and then we can't compute the xor. So RANDAO basically builds on this idea. Everyone who is a validator has already committed to something; in our case it's actually their signature, because we have a deterministic signature scheme, BLS, so we can just use a signature as the reveal, and everyone can check that it was the correct signature. So you sign the epoch, denoted by e here, and that's the reveal. And then what can, of course, happen is that someone does not produce a block, and so they haven't revealed their randomness, and then we cannot include them in this xor.
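The idealized commit-reveal round described above can be sketched like this. This is a toy model, not the beacon-chain implementation (which uses deterministic BLS signatures as the commitments); all names here are illustrative.

```python
import hashlib

def commit(secret: bytes) -> bytes:
    """Commit phase: each player publishes only the hash of their
    32-byte secret."""
    return hashlib.sha256(secret).digest()

def reveal_and_mix(commitments, secrets):
    """Reveal phase: check each revealed secret against its commitment,
    then xor all revealed secrets together. A player who never reveals
    (None) is skipped, which is exactly the bias RANDAO allows."""
    mix = bytes(32)
    for c, s in zip(commitments, secrets):
        if s is None:
            continue
        assert hashlib.sha256(s).digest() == c, "reveal does not match commitment"
        mix = bytes(a ^ b for a, b in zip(mix, s))
    return mix
```

Note how a non-revealer simply drops out of the xor: they cannot choose a new value, but they do get to choose between "included" and "not included", which is the one-bit bias discussed next.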
OK, so that is the basic process of RANDAO, and this is used as a first instance to generate randomness in ETH 2.0. It has basically one problem: whoever is last in an epoch can just choose not to produce a block. What this means is, if I don't like the result of whatever reveal I'm going to contribute, since I can compute what the mix would be, I just don't contribute, and it's as if I get another roll of the dice, essentially. And it can be worse if you control several validators in a row, of course; then more bias is possible. There was one nice analysis by Vitalik on ethresear.ch where he showed that if you just have a longest chain fork choice rule, then with just 36% of the stake you can actually completely take over a chain that chooses its block producers based on RANDAO. Right, so RANDAO is our first source of randomness in the beginning; it's obviously, as I've shown, not perfect, but at the moment we don't have anything better. In the future, what we're going to build on is so-called verifiable delay functions. The idea is that you have a function f of x that produces a result y and a proof pi, such that computing this function takes a long serial time: you can't speed it up by having many processors; you have to run it serially on one processor. And then checking that the result y is correct, using the proof pi, is fast. One example, which is actually the one that we are very likely going to end up using, is squaring modulo an RSA modulus: taking many squarings of x modulo m, where m is p times q, is one way to construct such a verifiable delay function. And by using the RANDAO output as input for the VDF, the last revealer loses their advantage.
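A toy sketch of the repeated-squaring construction just mentioned. This is illustrative only: real VDF constructions produce a succinct proof pi so verification is fast without knowing the factorization, which is omitted here; instead, the trapdoor shortcut shows why the factorization of m must stay secret.

```python
def vdf_eval(x, m, t):
    """Evaluate y = x^(2^t) mod m by t sequential squarings. Without the
    factorization of m, no known way exists to shortcut the t serial
    steps, no matter how many processors you have."""
    y = x
    for _ in range(t):
        y = y * y % m
    return y

def vdf_eval_with_trapdoor(x, p, q, t):
    """Anyone who knows m = p*q's factorization can shortcut via Euler's
    theorem: x^(2^t) = x^(2^t mod phi(m)) mod m (for gcd(x, m) = 1)."""
    phi = (p - 1) * (q - 1)
    return pow(x, pow(2, t, phi), p * q)
```

With small toy primes the shortcut and the sequential evaluation agree; in production, m would be a large RSA modulus with an unknown factorization.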
And the way that works can be illustrated here. So basically what happens is you have this last block of an epoch that starts the VDF computation. And later the VDF computation will have an output, and that will be used as randomness on chain. But by that time, there will already be many more blocks. So the last revealer wouldn't have a chance to know how they could have influenced things by not revealing. Yeah, that's it. Thank you. So my next talk is BLS signatures and aggregation. The goal is to provide a minimal set of knowledge that makes developers' lives easier. The idea is to introduce the signature scheme at the top, which is built on pairing operations and curve point groups, and those are in turn built on field operations at the bottom. So a takeaway here is that when we say BLS, we might be talking about two things. One is the BLS signature aggregation scheme, and the other is the curve parameters chosen by Zcash, called the BLS12-381 curve. They are by different authors. So first I'll give a primer on how F_q operations work. You have a field like F_13, meaning it operates modulo the prime number 13. And then you can define addition, subtraction, multiplication, and division with this number. Note that the outputs are always between zero and q minus one. So no matter how complex a computation in F_q is, you always get the same data size. And you can use the py_ecc library to try different field parameters. And here's another primer, on elliptic curves. When we say elliptic curve, it's an equation of the form y squared equals x cubed plus ax plus b. When you define it on the real numbers, you'll see a curve shape. But if you define it on a finite field, it looks like scattered points. A point on the curve has x and y coordinates. Usually you need to send a point over the network.
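The F_13 arithmetic from the primer can be written out directly. This is plain Python rather than py_ecc, to keep it self-contained, and the helper names are my own:

```python
Q = 13  # toy prime modulus; BLS12-381's base field prime is 381 bits

def f_add(a, b): return (a + b) % Q
def f_sub(a, b): return (a - b) % Q
def f_mul(a, b): return (a * b) % Q

def f_inv(a):
    # Fermat's little theorem: a^(Q-2) is the multiplicative inverse mod Q.
    return pow(a, Q - 2, Q)

def f_div(a, b): return f_mul(a, f_inv(b))
```

Every result lands back in the range 0 to Q-1, which is the "same data size no matter how complex the computation" point made above.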
So you can compress it by specifying only x plus one bit to infer which y you are talking about, because given x, the y values are symmetric: there are only two possible points on the graph. And you can add a point to another point; addition of points on the curve is defined geometrically. When you want to add P to Q, you draw a straight line, it intersects the curve at another point, and then you mirror it down to find P plus Q. Once we have point addition, you can define multiplication of a point by a scalar: you can multiply a point by 10 by adding it 10 times. And this is a hard math problem: given P and Q where Q is 10 times P, you cannot find the 10 easily, it's really hard. So you can hide secrets in there; that scalar is actually the secret key, yeah. So now let's talk about the BLS12-381 specifics. It has a small group called G1 and a big group called G2. If you remember the first primer, they have different prime fields at the bottom. A small point takes 48 bytes and a larger one 96 bytes. They have different elliptic curves defined. And note that FQ2 is like a complex number, so it's double the size of the small curve, and there's an imaginary number i there. And G1 and G2 also name the generator points, which have specific x and y coordinates specified for each curve. So I've color-coded them; don't get lost between G1 and G2. Now, a pairing function. A pairing function is a function that takes a G1 group element and a G2 group element, and the magic of the pairing function is that when you multiply a constant a or b into the group elements, you can move the a and b between the two slots. That's rule number one. And rule number two is that when you are adding points, like adding points in G1 and adding points in G2, you can distribute the pairing over them and spread them out.
So with this construction, we can introduce how the signature scheme is built. And if you look at the pairing function, there's a warning marked over there because it's expensive to run; it's a computationally heavy function. So we try to minimize the number of times we run pairings. So here's the BLS signature scheme. The private key is just an integer. To get a public key, you multiply the private key with the G1 generator point. Then to sign a message, you need to hash the message to a point in the group, and that's what people mean when they talk about the BLS standardization effort: they are standardizing the way we hash a message to a group point. And then you multiply your private key with that message point and you get a signature. To verify the signature against the public key and the message, you use two pairing functions. And the proof is really simple: you write the pairing of G1 and S on the left-hand side, and then you can move the secret key from one slot to the other and you get the expression on the right-hand side. The most powerful thing about the BLS signature scheme is that it can aggregate signatures and public keys. This is just curve point addition, which we introduced before. Here you can aggregate three signatures, and you can aggregate as many signatures as you want, and then verify using just two pairing functions. The proof is left as an exercise. But what you are looking at here is scalability: this is how you can aggregate many validators' messages. So here I present an example kindly borrowed from ChainSafe's simpleserialize.com. This is an attestation message, and its signature looks like this. Then I hide the data part and you will see the aggregation piece, which looks like this. And this is actually a record that says whether, for example, validator five's signature is included in this bunch of aggregations.
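To show why aggregate verification works, here is a deliberately insecure toy stand-in: integers modulo a prime replace curve points, scalar multiplication becomes modular multiplication, and the pairing check collapses to a product comparison. None of this is real BLS (the discrete log here is trivial), but the algebra of aggregation is the same shape:

```python
import hashlib

Q = 2**61 - 1  # toy prime "group order"; real BLS uses points on BLS12-381
G = 7          # toy generator, standing in for the curve generator point

def hash_to_group(message: bytes) -> int:
    # Stand-in for hash-to-curve: map a message to a group element.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % Q

def pubkey(sk: int) -> int:
    return (sk * G) % Q                       # pk = sk * G

def sign(sk: int, message: bytes) -> int:
    return (sk * hash_to_group(message)) % Q  # sig = sk * H(m)

def aggregate(parts) -> int:
    return sum(parts) % Q                     # "point addition" of sigs or pks

def verify_aggregate(agg_pk: int, message: bytes, agg_sig: int) -> bool:
    # Mirrors the pairing check e(G, S) == e(PK, H(m)): both sides
    # equal (sum of sk_i) * G * H(m) in this toy group.
    return (agg_sig * G) % Q == (agg_pk * hash_to_group(message)) % Q
```

One check, however many signers: that is the scalability win the talk is pointing at.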
And then a client will use that information to look up public keys and to verify the signatures. So yeah, that's how BLS signatures work. Hopefully that helps you. Thank you very much. Thank you, Vitalik, Dankrad, Chih-Cheng. Next up was a break, but we don't have time for that because we're definitely running late already. We might have some time for questions at the end, but we're just gonna keep driving. Those were some of the building blocks to help you understand the underlying concepts that we use to construct these things, and now we're gonna move into the more concrete instantiation of the protocol with Xiaowei. So hello everyone, I'm sorry, I'm pretty sick right now. And konnichiwa. My topic is the life of an Ethereum beacon chain validator. This is the outline of the topic: we will talk about the two main factors that define the validator state, then the entry and exit queues, and then the validator life cycle. So the current spec defines these four statuses. There is activation eligibility, which is the preparing stage before the validator is actually activated. The second one is the state where the validator is active and helping to validate the blockchain. Then the validator can choose to exit, and after a while they will get into the exited status. And finally, the beacon chain validator will get into the withdrawable state. One thing to note here is that in beacon chain phase zero, we only have a withdrawable state. The actual withdraw operation will be introduced in phase two, where we have EEs, the execution environments. That's when the validator can actually withdraw their deposit into an EE and make transactions. Okay, so here is the validator's data structure from the beacon chain state. The validator has this information stored inside the beacon chain state, and here I've highlighted the status epochs.
So we can see these four different status epochs defined here. Initially, each status epoch is set to the maximum unsigned 64-bit integer. The reason is that we haven't yet determined when these statuses will happen, so we set a very, very distant epoch. Okay, so Danny explained how we define slots and epochs: each epoch consists of 64 slots and each slot is six seconds. So we can use the epoch number as a timestamp and view the epochs as a timeline. If a validator has its activation eligibility epoch set to 100 and its activation epoch set to 200, then when the current time is between those two epochs, for example 150, the validator is not active yet. And after epoch 200, the validator will actually be activated, okay. So those are the status epochs. Now there's another flag, another factor that affects the beacon chain validator state, which is called slashed. I think Danny will talk about how a validator actually gets slashed in a later session. Here, all you need to know is that a slashed validator will be forced to exit, which is very reasonable, because they must have misbehaved somehow. Okay, then I'm going to talk about the rate-limiting queues. Before that, I want to introduce weak subjectivity. This is a feature of proof-of-stake blockchains. If you are a new validator coming online, you can only trust the peers around you. And also, if you go offline and then come back again, you're in the same situation: you can only trust your neighborhood. So how long you can safely be offline depends on how long it takes for an attacker to withdraw their stake. That's why the exit rate is important here.
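A sketch of how the status epochs drive the active check, along the lines of the spec's helper. The field names follow the spec; the class itself is simplified:

```python
FAR_FUTURE_EPOCH = 2**64 - 1  # placeholder meaning "not scheduled yet"

class Validator:
    def __init__(self, activation_eligibility_epoch=FAR_FUTURE_EPOCH,
                 activation_epoch=FAR_FUTURE_EPOCH,
                 exit_epoch=FAR_FUTURE_EPOCH,
                 withdrawable_epoch=FAR_FUTURE_EPOCH):
        self.activation_eligibility_epoch = activation_eligibility_epoch
        self.activation_epoch = activation_epoch
        self.exit_epoch = exit_epoch
        self.withdrawable_epoch = withdrawable_epoch

def is_active_validator(v: Validator, epoch: int) -> bool:
    # Active exactly while activation_epoch <= epoch < exit_epoch.
    return v.activation_epoch <= epoch < v.exit_epoch
```

With the example above (eligibility at 100, activation at 200), the validator is inactive at epoch 150 and active at epoch 250.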
And also the time period we set for a validator from when it initiates the exit operation until it actually exits, that period is also important. So in the beacon chain we have two queues: one is the entry queue and the other is the exit queue. The reason why we use queues is that we need the network and the validator set to be as stable as they can be. We won't allow a lot of validators to initiate exits in a short period, because then the validator set would change too quickly. It also helps ensure that the finality guarantees still hold, as long as validators and clients come online often enough. So we will see how the queue works. This is the churn limit function. We can see that for each epoch, the maximum number of validators that can join or exit is defined based on the current active validator count: the minimum per-epoch churn is set as shown, and the churn limit quotient is set to 65,536. So from those numbers you can estimate how many validators can churn at the same time. And the life cycle: at the beginning, a validator makes a deposit on the eth1 chain, and the eth1 chain is the entry point that gets you from the eth1 world into the eth2 world. The validator makes a deposit of 32 ether to the deposit contract, and the beacon chain watches the deposit contract's state. So this is the diagram here, and we will show the full diagram later. At the beginning, the depositing validator is here, and the beacon chain logic checks that the validator has enough balance and is ready to activate. Then, after four epochs, and going through the queue subject to the churn limit we defined earlier, only a small number of validators can join the validator set per epoch. This is the entry queue here.
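The churn limit function described here looks roughly like this (constants as quoted above; the function name follows the spec's helper):

```python
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536

def get_validator_churn_limit(active_validator_count: int) -> int:
    # Maximum number of validators that may activate or exit per epoch:
    # a floor of 4, growing with the size of the active validator set.
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validator_count // CHURN_LIMIT_QUOTIENT)
```

So with a small validator set only 4 validators can churn per epoch, and the limit scales up once the active set exceeds 65,536 validators per additional slot of churn.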
Okay, then, sorry, let's take a look at the bottom diagram. From when the validator is activated, there are two possible roads. One is that the validator gets penalized many times and its balance becomes insufficient; in that case, the validator will be ejected. The other option is that the validator can voluntarily exit; that's the second road. Both roads will push the validator into the exit queue here. It also has to wait at least four epochs, and there has to be enough room in the queue for this validator to exit. Then we say this validator is in the unslashed-but-exited state. Okay, and then after about 27 hours it will be withdrawable. The reason why we set a delay between exited and withdrawable comes down to three main reasons. One is that after a validator exits, it is still possible for it to get slashed, which is this road on the diagram. Also, the validator may still have a chance to earn some small rewards before it actually exits. And it also provides time for proof-of-custody challenges to be made during this period; sorry, the proof of custody is a phase one thing that we are preparing for. Then let's take a look at the top diagram. An activated validator might get slashed, and after it gets slashed, it waits a short delay and then enters the slashed-and-exited status. And only after at least 36 days will it be withdrawable. So you can see that a slashed validator has to wait much longer; it gets punished because its ether is locked in the beacon chain. That's the full picture of how a validator moves between the statuses. Okay. Okay, so I'm gonna talk about the state transition function of the beacon chain. And I encourage you to just read the spec.
It's actually not too bad. It's only roughly 1,000 lines of code, quite readable. I mean, the link up there is quite long, so I've also shortened it. Okay, just to give a bit of context, you all know this: we have the beacon chain, which is kind of the system chain of the whole system, the spine. All the shards connect to it via crosslinks; the shards come later, and I'm gonna focus on the beacon chain. Okay, so we have slots, we know this. We have blocks in the beacon chain, and we also have state. And basically the state advances with every block. So what I'd like to focus on today is: what is a block, what is state, and what is the state transition. This is the state transition function: it takes a state and a block, and returns the post state or an error. So we can just say, okay, this block is invalid and I'm gonna abort there. And if it's valid, I'll give you the next version of the state. And in addition to slots, we have epochs. So 64 slots per epoch, which is 6.4 minutes. And epochs are important from the point of view of the state transition function because this is where some of the accounting happens, at the epoch boundaries. So you have state transition functions that run on a per-block basis, and then you have state transition functions that happen on a per-epoch basis. Okay, so this is a high-level overview of the various components of the beacon chain. And you can think of it as an organism with various organs that connect to each other. You have your lungs, the heart, whatever, all these things. And here they each provide vital functions. So you have, for example, randomness. You have the registry, which keeps track of the validators. You have finality down there. You need deposits, of course, if you want proof of stake. And everything is kind of flat. So it's not a layered system.
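The shape just described, per-block work plus per-epoch accounting at the boundaries, can be sketched like this. The `epoch_transitions` counter is my own illustration, not a spec field:

```python
SLOTS_PER_EPOCH = 64

class State:
    def __init__(self):
        self.slot = 0
        self.epoch_transitions = 0  # illustration only: counts boundary crossings

def process_epoch(state: State) -> None:
    # Epoch-boundary accounting (justification, finality, rewards) goes here.
    state.epoch_transitions += 1

def process_slots(state: State, target_slot: int) -> None:
    # Advance the state slot by slot, running the epoch transition
    # whenever the next slot starts a new epoch.
    assert target_slot >= state.slot, "cannot rewind the state"
    while state.slot < target_slot:
        if (state.slot + 1) % SLOTS_PER_EPOCH == 0:
            process_epoch(state)
        state.slot += 1
```

Advancing from slot 0 to slot 130 crosses two epoch boundaries (at slots 64 and 128), so the epoch accounting runs twice.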
Well, we do have the beacon chain and the shard data and the shard state, but within the beacon chain it's actually quite flat and horizontal. And each of these things can be swapped out; it's very modular. So one of the roles of a designer is to pick the best ingredients and try to put them together in a harmonious system. And in terms of color coding, I'm gonna use blue for the state and green for the blocks. Okay, so what is the state? Let's try to actually read the code chunk by chunk and understand what's in there. We have these three properties, which are basically related to versioning in space and time. The genesis time tells you when the state was created. The slot gives you a more granular notion of how much time has passed. And then the fork is versioning in space, as opposed to versioning in time. So every time we do a hard fork, we're gonna update this, and one of the impacts it has is on the way that signatures are verified. Okay, so that's all basic stuff. More basic infrastructure is the notion of roots. We have state roots and block roots, which are the equivalent of what is traditionally called a block hash or a state root. And this is just a way of cryptographically keeping track of the various objects that we're working with, and representing them in a way which is friendly to work with. The reason we work with roots as opposed to hashes is that the objects we're working with are structured, and we have this notion of a tree of hashes. So if you're interested in a very specific property of an object, you can access it via a Merkle path for just that specific part, as opposed to needing the whole object. And then we have the economic link to eth1. We need deposits into eth2, and the deposits come from eth1. So eth2 needs to be aware of eth1, and this is the state that is going to help us be aware of eth1.
And then we have the registry. This is probably the most important part of the state. It's just the data structure that keeps track of all the validators, and it's by far the largest part of the state. This might be hundreds of megabytes, let's say, and the rest might be tiny, just a few megabytes. We have some state related to shuffling and randomness. We have state related to slashings. We have state related to attestations. Attestations are basically what the validators need to do; that's the work they have to do. It helps advance the system and make it move forward, and this is what the validators get paid for. And we have the crosslinks, which are the link to the various shards. And then finally, we have the finality mechanism. In terms of which modules we've chosen concretely, some of the keywords: we have FFG for finality. We have BLS signatures for the attestations. LMD GHOST for the fork choice rule. We have RANDAO for randomness. Swap-or-not for the shuffling. We have SHA-256 as the hash function. And we have tree hashing as the way that we merkleize the objects. Okay, so it all fits in one slide. This is the state. Okay, so let's try to understand what's in a block. In the block, we have the header, with the things you'd expect: the slot, which is the equivalent of the height in other systems; the parent root, which would be the parent hash in other systems; the state root; the signature, which is going to be a BLS signature; and then the body, which is the more important part of the block. Okay, so let's look at the body. What's in there? We have fields that are relevant to two subsystems. One is the randomness system, making progress on RANDAO. And then the other one is related to the link to eth1, trying to make progress there through voting. And then we have a graffiti, which is just arbitrary data.
So this is kind of encouraging people to innovate with putting data on the beacon chain. And then we have the equivalent of transactions. Normally in a block, you'd put transactions. Here it's a bit different, because the transactions are not user transactions. They're system-level transactions, so we call them operations. And we have these five different operations. Two are related to slashing. One is the attestations, which is where the real work happens. And then you have transactions related to people coming into the system, leaving the system, or moving funds within the system. These are just registry operations. The attestation is probably the most important: the block contains attestations, and that's what moves the system forward. Okay, so I wanna try to give you the really high-level view of the state transition function. And I'm gonna cheat a little bit and only present what I call the honest state transition function. By honest, I mean that I'm going to assume that the block has been honestly constructed, so it's a valid block. What this means in practice is that I'm gonna tell you what mutations the block makes on the state. But what I'm not gonna tell you about is all the various ways that the state transition function can throw an error, all the various assertions. And it turns out that the bulk of the complexity is actually there, but it's really boring stuff. It's like: verify that the signature is valid; verify that if you're transferring funds, you have enough funds. Not very exciting. I think most of the insight comes from understanding how the state evolves. This is where it all happens. Okay, and so I've subdivided the modules into three columns. You have the scaffolding, which is relevant for pretty much all blockchains: time, roots, and randomness. And then you have the registry, which is the part unique to proof of stake.
So you have things like deposits and exits. And then you have a final bit, which is technically optional but still extremely powerful, related to finalization, and crosslinks if you want sharding, and also GHOST through the attestations. Okay, so let's go through these components one by one. Blue is the state, green is the blocks. So what happens at every slot? The slot number will increment. You read the slot value and you just increment it. Nothing complicated here. And the genesis time and the fork don't change. I mean, the fork will change at the social consensus layer, but it doesn't change within the state transition function. Then you have the header part of the block, in green, which gets saved into a data structure and also gets merkleized into a block root. So the beacon chain is aware of its recent block roots and also recent state roots, and it will build from these so-called historical roots. This is a historical accumulator, which allows you to go back in time arbitrarily far and provide a witness to any part of the state or any part of a block. And one of the cool properties of this accumulator is that the witnesses don't change over time. So if you have a statement saying, "I know that at slot 1,000 the balance of this validator was this amount," then whatever proof you had, the Merkle path, will remain valid forever. And then you have the randomness, which was explained by Dankrad, and this is the basic scaffolding. The way that the randomness moves forward is that with every block, the green part, you XOR the reveal into the RANDAO mix. So the RANDAO mix just keeps on mixing in entropy, this entropy is kept in the state, and it's sampled every epoch to do the shuffling. Okay, then we have the registry with the validators. And one of the things that we've done here as an optimization is we've decoupled the balances from the rest of the validator fields.
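The "XOR the reveal into the mix" step can be sketched as follows. The buffer-size constant follows the spec; SHA-256 stands in for the spec's hash function, and the function name is my own:

```python
import hashlib

EPOCHS_PER_HISTORICAL_VECTOR = 65536  # circular buffer of per-epoch mixes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def process_randao_reveal(randao_mixes: list, epoch: int, reveal: bytes) -> None:
    # Each block's reveal is hashed and XORed into the current epoch's mix,
    # so the mix keeps accumulating entropy as the epoch progresses.
    i = epoch % EPOCHS_PER_HISTORICAL_VECTOR
    randao_mixes[i] = xor_bytes(randao_mixes[i], hashlib.sha256(reveal).digest())
```

Because the update is an XOR, mixing in the same reveal twice cancels out, which is why each proposer contributes exactly once per slot.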
And the reason is that the validator fields change very infrequently, whereas the balances change very frequently. So there's a high overhead to constantly be merkleizing what changes fast. And so we wanna segregate in one place everything that changes fast. And this was covered by Xiaowei. Let's have a quick deep dive into what's inside these validators. So the validators field is a list of validator objects, and this is what a validator object looks like. It has the public key, as you'd expect. This one is kind of interesting: it's the hash of another public key. This other public key is meant to be your withdrawal key that you keep in cold storage. So as a validator you have two keys. You have a hot one that you use on a day-to-day basis to sign your attestations, and then you have a cold one which you use for withdrawals and transfers. And so if your validator node, which is online, gets hacked, the hacker cannot steal your funds. So that provides a nice level of protection for you. Okay, all of this was covered by Xiaowei. Okay, so which parts of the system interact with the registry? Well, we have the deposits. So every block in the beacon chain will contain a list of deposits here. And these deposits get processed, and that creates validator entries in the registry. But then the beacon chain needs to know: what is a valid deposit? And for that it needs to be aware of the eth1 chain. So how does that work? In every block, the block proposer will include eth1 data, saying "this is what I think is the block hash of eth1 around which we need to come to consensus." And so this data gets stored in the state as votes. And then at a certain epoch boundary these votes are counted. And if there is a majority of votes for a specific piece of eth1 data, that eth1 data is updated in the state as the latest snapshot of eth1. And the idea here basically is honest majority sampling. So we have a large number of validators, let's say a million validators.
We have this honesty assumption that at least two thirds of the active validators are honest. We're gonna sample a thousand of them, and the way that the sampling works is over a thousand sequential slots. And if this randomness is good enough, then we know with high probability that at least one half of the eth1 voting committee will be honest, the voting committee being these 1,000 block proposers. And so that means that if at least half of the votes here are for a given piece of data, then that piece of data will be representative of reality, representative of eth1. And by the way, this honest majority idea is really used for crosslinks as well. Okay, so now comes the more interesting stuff: the attestations. This is the work that the validators have to do. So what is inside an attestation? This is the header of the attestation. It has a signature, and here it's important that we have BLS, because we have different validators, all part of the same committee, that will be signing the same attestation. And the way that the aggregation works is that we specify, in the aggregation bits here, which validators have participated in this specific aggregate. So we have a committee of, let's say, a thousand validators; they're ordered, and this bit list is gonna be a thousand bits, each zero or one, indicating whether that validator was included in the aggregate signature for this attestation. And then we have the data, which is the body of the attestation, which is more interesting. And it basically has three parts. When you make an attestation as a validator, you're making three votes all in one go. And this is part of what I mean by harmoniously connecting the various elements; this is one place where we've done it. So when you make an attestation, you're voting for a past beacon block, and that's going to count towards the fork choice rule, LMD GHOST.
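Reading off the participants from the aggregation bits is straightforward. This is a simplified sketch: the spec encodes the bits as an SSZ bitlist, and the function name mirrors (but simplifies) the spec's helper:

```python
def get_attesting_indices(committee, aggregation_bits):
    # The committee is an ordered list of validator indices; the i-th bit
    # says whether the i-th member's signature went into the aggregate.
    assert len(committee) == len(aggregation_bits)
    return [v for v, bit in zip(committee, aggregation_bits) if bit]
```

A client uses exactly this mapping to look up the right public keys before verifying the aggregate signature.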
But at the same time you're making a finality vote, an FFG vote. And so you're gonna be voting for a source-target pair, and that's gonna lead to justifications and finality. And in addition, you're voting for a crosslink. So as a committee assigned to a specific shard, you're meant to download a chunk of the shard, run it, validate it, and if it's good, make a crosslink: vote for that specific chunk of shard. So, I mean, this was covered by Danny. Basically, a checkpoint here in the FFG vote is nothing more than a pair of an epoch and a hash. And what is a crosslink? Well, a crosslink is a small segment of a shard. So how do you represent that? Well, you need the shard number. You're gonna need the start and end epoch of this chunk of shard that you're crosslinking. And you're gonna need the data root, which is going to be the merkleization of the chunk that you're crosslinking. Okay, so in green we have the attestations in the blocks. And then, on a slot-by-slot basis, they get stored into the state. And they can get stored either as previous epoch or current epoch attestations. So if your attestation is stale, it's very old, older than the previous epoch, then we don't even bother saving it in state. We only record the current and previous epoch attestations. And then here we have the beginnings of the finality mechanism. So the finality mechanism works on an epoch-by-epoch basis, hence the blue arrow. And what it does is look at the cache of recent attestations and then count the votes. And if we get to this two-thirds threshold, then we're going to record that we have met the two-thirds threshold by modifying the justification bits and also by potentially advancing the current and the previous justified epochs. And then, if we have one of these so-called finality patterns, of which we have four, as explained by Danny, then we're also going to advance the finalized checkpoint.
So the beacon chain is going to be aware of its last finalized checkpoint. And part of the safety of this mechanism is the idea of slashing the attesters that make bad votes. So how does this work? Well, we have fraud proofs, proofs that attesters have been doing a bad job. These are included in blocks, in green. And they're going to have an immediate effect, a green arrow, on the registry. So if someone has engaged in slashable behavior, they will be marked as slashed immediately in the registry. And in addition to that, we keep track of the total amount of ETH that was slashed. And the reason we do that is because we want a mechanism whereby, if only a few attesters do bad stuff, for example, then they're not really jeopardizing the system, so we don't want to penalize them too strongly. But if lots of people are doing bad stuff, then the system is at risk, and so we want to penalize everyone to a large extent. And so this variable here in the state, in blue, is keeping track of how much bad behavior has happened in the recent past. And then we have the crosslinks; no, this one is different from the finality mechanism, sorry. This is basically a mapping from shard to crosslink, and it records the previous crosslinks and the current crosslinks. And basically on every epoch, hence the blue arrow, we save the current crosslinks into the previous crosslinks. And that basically allows the beacon chain to be aware of recent crosslinks across all the shards, and hence for every shard to be aware, through the beacon chain, of the crosslinks on every other shard. And yeah, this is pretty much it. So we have the full honest state transition function at a high level. Again, made out of these modules which are replaceable, which talk to each other. And at the end of the day, there isn't that much happening. There's basic scaffolding.
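The "penalize more when more was slashed recently" idea works roughly like this. This is a simplified sketch: the spec's arithmetic also rounds via effective-balance increments, and the multiplier constant here is an assumption for illustration:

```python
PROPORTIONAL_SLASHING_MULTIPLIER = 3  # assumed scaling factor on total slashed

def slashing_penalty(effective_balance: int, total_slashed: int,
                     total_balance: int) -> int:
    # The penalty grows with the fraction of total stake slashed recently,
    # capped so a validator can lose at most its whole effective balance.
    fraction = min(total_slashed * PROPORTIONAL_SLASHING_MULTIPLIER,
                   total_balance)
    return effective_balance * fraction // total_balance
```

So an isolated offender loses little, but if a third or more of the stake misbehaves in the same window, offenders lose everything.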
There's maintenance of the registry, which is kind of obvious stuff. And then there's finality, which is a very important gadget. And then we have this crosslinking gadget for sharding. Okay, great. So that's what we're looking to launch in phase zero, but what comes afterwards? We have transfers, which are slated to come likely in phase one, and which will basically allow ETH on the beacon chain to be sellable, and that will create a market for this ETH. But we also have a bunch of security upgrades that we're looking to do. And each of these security upgrades is optional, but they're very nice to have. And one of the reasons why we're not putting them up front is that they all involve fancy constructions and fancy cryptography. So one of them is custody proofs, which we're looking to do in phase one. We have the idea of secret proposers: instead of knowing upfront who the next proposers will be, we can have a system whereby we don't know which proposers will be invited to create beacon blocks. And that is a way to protect ourselves against denial-of-service attacks, because if you know who the next proposers will be, then you can target them at the networking layer. We have the VDF randomness upgrade, which might come in phase two, might come later. And then we have data availability proofs. And one thing I didn't mention here, actually, is light clients and infrastructure. So we wanna make it very easy to have beacon chain light clients, and this infrastructure will come in phase one. And we also have CBC Casper, which I haven't covered here. And then two other ways in which we may upgrade the beacon chain: one is to have multi-hashing. So instead of having one single hash function, SHA-256, we might add native support for another hash function, one which could be friendly to SNARKs and STARKs.
And later down the line, we're also looking to change the various cryptographic primitives which are not quantum secure over to quantum secure equivalents. This includes BLS12-381, which is not quantum secure, and also the RSA-based VDF, which is not quantum secure. So I guess that will keep us busy for a few years. And yeah, that's it. Thanks. Thank you, Justin. I'm gonna close it out talking about the validator duties in phase zero. Some of this is implicit: if you look at the phase zero state transition spec, the validity conditions that Justin didn't go into kind of imply what a validator should be doing. But to make that explicit, we have a separate doc, and when I was compiling my slides I thought that was a good idea, so I made a QR code and a tiny URL. This document, which I'll go over some of the core components of, explains what a validator should be doing with respect to the beacon chain, and when. The initial part of this document talks about creating public keys and initiating deposits; a lot of that was covered earlier and will be out of scope for this portion. The two main things that you do in phase zero are propose blocks and attest to blocks. In subsequent phases, you would do similar activity, but also on shard chains. So we have some stuff going on at pretty much any given slot. You can ask, am I the proposer? And if you are, make a block. This is independent of your attestation committee, your crosslink committee assignment, and it's only knowable within the epoch of assignment. You don't have a lookahead in a prior epoch, which is different from the committee assignments. The action of proposing a block is at the initial start of a slot. So slot 10 starts, I make my block, I give it to the world. And your proposal is publicly known.
And so, as Justin talked about, secret leader election is something that we're looking into. Computing the proposer index is essentially taking all the validators, shuffling them, and using some of the recent randomness along with their effective balance to sample them. So the chance of you being selected is proportional to your balance. Most validators in normal operating conditions would have an effective balance, as we call it, which is capped out at the max of 32. So in many situations there'd be an equivalent chance of being selected for proposal. Justin already talked about these data structures, the beacon block and beacon block body. What I do is I reveal my RANDAO, which is a signature upon the epoch. I go and find my eth1 data, which I'll show you in a second. I put in some graffiti, maybe I'll vote on some proposal or I'll say things about who I am. And I fill in any of these operations. Most importantly, I'm gathering up attestations, because that's how I make my block proposal worth it and profitable: by including high value attestations, which are attestations that have not yet been included, attestations that are highly aggregated, so they have a lot of participants in them, and attestations that are more recent. By optimizing those things, I can optimize my reward. I think you get one eighth of the reward that was given for the attestations you include. So in general, by being a good proposer, over time you're increasing your reward by about one eighth. Deposits: we come to consensus on the eth1 data, which is some past eth1 block and the deposit contract's deposit root. This deposit root allows us to process deposits in order by making a Merkle proof against that root. And this number, max deposits: by the rules of the protocol, I have to include any unprocessed deposits up to that max. I think that number is 16.
So if there were 32 deposits that are unprocessed, I have to include 16. If there were two, I include the two, and then no more. If there's zero, zero. Voluntary exits might also be flying around on the network. I can pick those up and put them in. I'm kind of naturally incentivized to put those in, because the fewer validators that exist (the function's a little bit dynamic), the less overhead I have, and it's nice. Proposer slashings and attester slashings: I might also be policing the network, looking for nefarious activity. If I find these things and submit proof of them, I get a small reward, a small portion of what was slashed. So we can look a little bit more into this eth1 data. Essentially what this function does is what I call a pile-on vote. We divide the eth1 voting period into a number of epochs, and if I see good votes for eth1 data, I just pick that vote and vote on it. Some of the mechanism in here is to prevent me piling on to votes that are a little bit stale. So early on in the period, if we're at less than the integer square root of the slots per eth1 voting period, I will not pile on my vote if it looks like someone was putting in stale information. But otherwise I just vote on whatever the majority is, provided it's valid. If there are no votes, the default is I go get my own eth1 data. And what we do in this initial release is follow the eth1 chain by about a thousand blocks to be safe. The beacon chain cannot handle a reorg past that distance, and the eth1 chain knows nothing about the beacon chain, so this is an assumed safe distance to follow the chain. You do have an induced latency on handling deposits because of that. I'm not certain what the deepest reorg on eth1 has ever been, but it's not even on the order of 100. So this is assumed to be safe unless there was an attack.
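The pile-on vote described above can be sketched roughly as follows. This is a simplified illustration under assumed names: the constant value, the function name, and the representation of votes are all hypothetical, not the spec's actual types.

```python
from collections import Counter
from math import isqrt

SLOTS_PER_ETH1_VOTING_PERIOD = 1024  # illustrative constant

def choose_eth1_vote(prior_votes, slots_into_period, own_eth1_data):
    # Pile-on voting: back whatever eth1 data the majority has voted for,
    # but early in the period (before sqrt(SLOTS_PER_ETH1_VOTING_PERIOD)
    # slots have passed) refuse to amplify possibly stale votes, and with
    # no votes at all fall back to our own view of the eth1 chain.
    if not prior_votes or slots_into_period < isqrt(SLOTS_PER_ETH1_VOTING_PERIOD):
        return own_eth1_data
    return Counter(prior_votes).most_common(1)[0][0]
```

So a lone early vote gets ignored in favor of my own view, while a clear majority later in the period gets piled onto.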
If there was an attack, maybe we should revert the eth1 chain instead of the beacon chain, because it was attacked. But we'll see. Cool. Slashability of block proposals. It's really cheap to sign things, as opposed to proof of work, where it's very expensive to make blocks that look valid because you have to exert the computational power. So if I've been chosen to make a block, the protocol needs to make that expensive enough that I can't make a ton of them. Essentially we have a very simple slashing condition that makes sure I'm not making two different blocks in the same epoch. Actually, in a soon to be released version, that's changed to not making two different blocks in the same slot. But essentially it's a no double vote, no double proposal mechanism. Committees. I don't know if we've explicitly talked too much about committees. We've talked about how we shuffle people into committees, but essentially within a given epoch, every validator is assigned to exactly one slot to attest to, to create an attestation. That data structure does all the things: it votes on the head for the fork choice, it casts a Casper FFG vote, it votes on a crosslink. It does all sorts of stuff. So I can use this function to query which slot I'm assigned to and what shard I'm assigned to, and I can do my duty. I get a lookahead of at least one epoch. So during the current epoch, I know my assignment in the next epoch. This allows me to sync whatever shard I need to and get ready for my duty. This is actually tunable via MIN_SEED_LOOKAHEAD. If, for some reason, we needed a longer lookahead because the overhead of syncing a shard was long, we could tune that constant. It's a trade-off: the longer you know the committees in advance, the easier it is to potentially attack or bribe them. So I make an attestation.
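The one-committee-per-validator-per-epoch assignment can be sketched like this. It's a toy version assuming a pre-shuffled validator list and an illustrative `SLOTS_PER_EPOCH`; the real spec derives committees differently and also assigns shards.

```python
SLOTS_PER_EPOCH = 8  # illustrative; the real constant is larger

def committee_assignment(shuffled, validator_index):
    # Slice the epoch's shuffled validator list into one committee per
    # slot; every validator lands in exactly one committee, so each
    # validator attests exactly once per epoch.
    committee_size = max(len(shuffled) // SLOTS_PER_EPOCH, 1)
    position = shuffled.index(validator_index)
    slot = min(position // committee_size, SLOTS_PER_EPOCH - 1)
    return slot, position % committee_size
```

Because the shuffling for the next epoch is already computable from the state, a validator can run this a full epoch ahead, which is what gives the lookahead.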
Yeah, I had the cursor highlighted, so that's why it's green. So I make this attestation. I do it at the slot I'm assigned to, at that time plus half of a slot duration. The idea is that under optimal conditions, a proposer has created a block at the start of a slot, and by halfway through the slot I've gotten that block, I see that as the state of the world, and I vote on it. In certain non-optimal conditions, I might vote on some prior block, but I can still add weight, as Vitalik showed in the fork choice, and we can still keep the chain moving forward and finalize things. I run my fork choice and I see where the head of the chain is. The state relative to that head, at the slot I'm assigned to, is also going to give me this information. So I can actually just go into the state and say, hey, what was the checkpoint? What are we voting on right now? And just pull that information and construct this data. This field is actually just stubbed in phase zero; it's relative to me running the fork choice on the shard chain, so it's not super important here. It's what I'd put into the crosslink, which Justin pretty much covered. The fun part is that because it's phase zero and there are no shard chains, it's just a zero hash. The custody bit: I just want to show you this data structure, because even though it's phase zero and we don't have any shard chains, so we don't have this notion of custody games and having custody of shard data, there is still this notion of a custody bit, which is tied to a personal secret that I have and the crosslink that I'm crosslinking. So I'm not actually signing just the attestation data; I sign the attestation data along with my bit, a zero or one. And so for any given committee in phase one, we'd have two versions of this aggregatable signature: the one with the zero bit and the one with the one bit.
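The "sign the data plus the bit" idea can be illustrated with a plain hash. This is only a sketch of the message construction: the function name is hypothetical, and the real spec uses SSZ hash-tree-roots and BLS signatures, not a raw SHA-256 over bytes.

```python
import hashlib

def attestation_message(attestation_data: bytes, custody_bit: int) -> bytes:
    # The signed message commits to the attestation data plus the
    # validator's custody bit, so a committee yields two aggregatable
    # signature sets: one for the zero bit and one for the one bit.
    assert custody_bit in (0, 1)
    return hashlib.sha256(attestation_data + bytes([custody_bit])).digest()
```

The same attestation data with different bits produces different messages, which is exactly why the two bit groups must be aggregated separately.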
In phase zero, this is stubbed as a zero bit. But in this outer data structure with the custody bits, we remember who participated via the aggregation bits, and we remember which custody bit they participated with, so that we can reconstruct the proper message to validate the signature in the future. In phase zero, when I'm constructing my attestation to broadcast, I flip the bit at my position in the shuffling of the committee in the aggregation bits, and the custody bits are all zero. And I broadcast that to the network at the halfway point of the slot. Let's see. There are some micro-incentives related to the creation of an attestation, covering pretty much the various components of what I'm doing. Is the head correct, correct being defined by what ends up being the canonical chain? Was the target of the FFG vote correct? Was the source of the FFG vote correct? The crosslink? For pretty much any of these things that end up being canonical, if I got the vote right, I get a good reward. I also get rewarded for fast inclusion. The sooner the attestation gets included on chain, the more reward, and this portion of the reward degrades very quickly. This is so that I'm not incentivized to wait a little bit longer and see what everyone else is doing before I actually cast my vote. I want to move very quickly, get my vote in, and get maximum reward. This is handled in process rewards and penalties. Maybe not so surprisingly after our interop session, this is where we found the most consensus bugs on our initial networks. Obviously the calculations that deal with everyone's balances are where we had bugs, but we've been writing some new tests and enhancing that, getting it ready. Slashability of attestations: these correspond to the two slashing conditions found in Casper FFG.
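The two FFG slashing conditions, the double vote and the surround vote, can be sketched as a check over source and target epoch numbers. This is a hypothetical helper assuming the two votes are distinct; it is not the spec's actual `is_slashable_attestation_data` function.

```python
def is_slashable_pair(source_a, target_a, source_b, target_b):
    # Given two distinct FFG votes (as source/target epoch numbers),
    # they are slashable if they are a double vote (same target epoch)
    # or a surround vote (one vote's span strictly contains the other's).
    double_vote = target_a == target_b
    surround_vote = (source_a < source_b and target_b < target_a) or (
        source_b < source_a and target_a < target_b
    )
    return double_vote or surround_vote
```

So two different votes with target epoch 3 are slashable, a vote spanning 1 to 4 surrounding a vote spanning 2 to 3 is slashable, but consecutive votes from 1 to 2 and 2 to 3 are fine.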
Essentially, if you double-signed within an epoch, you're slashed, and you can't do the surround vote: if we have an attestation with a source and a target, you can't make a surrounding attestation that just jumps over it. That's it. Cool, yeah. Thank you, that was a long session. Appreciate you all being here. Thank you.