Okay, it's started. So welcome to the Merge Implementers' Call number one. This call is supposed to be the technical discussion around the Merge. Yep, thanks for the agenda, Danny. Yeah, and most of the governance discussion is dedicated to the AllCoreDevs call, so this is just to focus on technical stuff. Today we have a huge chunk of discussion around the application layer. I think that if we don't cover some items from the consensus layer part, it will be okay, because those have been discussed for a long period of time in different conversations. But yep, the main focus for today is the application layer, so let's go through all these items and the discussion. But before we start, I'd like to turn to Proto to make an announcement about Rayonism. Yes, thank you, Mikhail. So this Tuesday we announced Rayonism, and this is a project around the ETHGlobal Scaling Hackathon that's around two weeks from now. We have two targets. The first target is to run a testnet with Ethereum 2 nodes that utilize the Ethereum 1 nodes for the application layer, just like in the latest Merge specifications, but without the transition overhead. And then the second target is to take this prototype further and develop additions like sharded data. And so the goal is not to introduce a lot of new things, that's just not necessary for the prototype. The goal, or the real value proposition here for everyone, is this opportunity where everyone can get involved, where you can get ready for the development of this milestone. On the Ethereum 1 side, this means the implementation of this new API. We have these four new routes that have been discussed at length and will continue to iterate on. And on the Ethereum 2 side, this means that we can take existing phase 0 implementations, modify them and put them to use in an early testnet. And then after we get that running, we can take our time for a production prototype where we can think about the fork transition, implement these different block types and do the whole implementation work for the actual fork. So those are the essentials for a multi-client merge testnet. Of course there's more we could do, and we're happy to support that, but we're not going to delay an early testnet for it. And if we can get this running quickly, then during the hackathon we'll have the opportunity to have some developers on our testnet using the EVM on proof of stake, and then we have a working network to test later production code against. I hope we can stabilize on some specification that later, more complete clients can test against, and then we'll stabilize the essentials like the RPC and unblock sharding prototyping. So the idea is that some of us would like to take this further and implement these experimental things like sharding, and having this very minimal base ready will unblock a lot of work there. And then next week we're trying to plan an early bird call where we go over some of the organizational stuff. So we don't get started just yet; we're trying to prepare in the coming two weeks, and after that try to set a date and organize it. And on this call we just focus on the general merge. Great. Yeah, thanks Proto, this is going to be very exciting. I guess we will work during the next couple of weeks on the spec work for the merge and for sharding, for the Rayonism project mentioned by Proto, so stay tuned.
Okay, so are there any questions regarding Rayonism? So just confirming: the goal is to make a prototype of a post-merge combined application and consensus layer piece of software, and not do the transition. Is that what I understood the initial goal to be? Yes. And we're already kind of there with some of the clients, but the specification is moving fast, so I would like to align on this and get one off the ground. And then after this first step, we can focus on the transition and the more complicated topics. Okay, that's right. And the primary meta goal is to have teams just have someone or multiple people dig in, understand the specs, understand the complexity of this project and how things fit together, and also feed back into the process of specification, so that coming into May we have a really ironed out spec and roadmap here. I have a question for Proto. I'm working on withdrawals; I have a modified Catalyst, modified Solidity, modified Teku. I could not deploy this thing to any testnet, it could only work by itself. So can I participate with this piece of software and some proposed specs? Yes, tentative yes. So, I know your prototype requires an additional opcode to introduce beacon block roots. They are not in the spec, and there are other modifications in the RPC, etc. So I think we can get a testnet running early, but then it's up for discussion which exact opcodes go into this testnet or not. And if we do include those opcodes, then we can prototype this withdrawal process as well. And even if it doesn't go into some sort of shared testnet spec for the time being, you know, we can run some transient testnets in isolation and do some testing, which would still be valuable. Okay, cool. So I will participate in it with withdrawals. Great. Okay. So, I guess we can move on. So, yeah, the application layer discussion. There is a doc. This is the high level design document rather than the spec. It tries to give a holistic view on the consensus upgrade from the application node or from the mainnet node perspective. So, um, yeah, I would like just to go through the main sections of this document and stop for discussion. And I think it makes sense to share the screen, or how do you think it's better to do? Yeah, that works. Before I start, any specific questions about the document, or about the whole document, anything? Is there an intention to make a parallel document for the consensus layer perspective? I think that the way the two specs are written is very beacon chain centric, and our implementers are very familiar with understanding that. So we could add additional notes, I think, to flesh that out, but I don't think there's an immense value in that. I think the challenge is that we don't want to have an imbalance between levels of specificity. Like, what would you say is the corresponding doc right now for the consensus layer perspective that covers, say, the transition condition and beacon block changes and those things? So you're right in that. Yeah, right now there's only really the consensus spec perspective, which really doesn't get into the design document level where it's saying you actually use these methods to communicate with your application layer.
So, I think there's a parallel, but it's probably a quarter of the length, so we could probably write it up, and it has more to do with these methods. Okay, makes sense. I also think that the transition process should be described for both parties, in detail — what one party is doing, then what the expectation is from the application layer and what the consensus layer is doing at that moment — that would make a lot of sense to give an understanding of the whole process. Yeah, but other than the spec in the eth2 specs repo, I think we don't have anything more recent. Okay, so, yeah, I think we can start with the new block format. Yeah, there is the interaction between the consensus and application layers, it has four messages. We can get back to that interaction; I would just start from the new block format because it looks like a simple thing. So, yeah, there were debates about extra data and so forth, but what was proposed is to take the bunch of fields that are related to the proof of work consensus, to the ethash, set them to some constants, and keep them entirely on the application layer, in the application block. And what is going to be exposed to the consensus layer, to the beacon block bodies, is the actual mainnet block with all these fields just thrown away. And what will be on top of that is the block hash, the hash of this block. So it implies that the application layer, once it gets the new application payload from the consensus part, will assemble a block and check that this block is assembled correctly with regard to these constants, by checking that the hash of this block is equal to the one that is given by the consensus part. Also worth mentioning that the consensus and application block trees have a one-to-one mapping. So every beacon block has a reflection in the application chain, and all the forks of the beacon chain are reflected in the application chain as well. But the beacon chain is the primary one here and the application chain is secondary. So, why does the application layer have to hash it? Isn't it receiving it through a trusted channel? Doesn't the application layer trust the consensus layer to give it a valid block, in terms of things like hashing the block? It trusts it, but you don't trust other peers that send you a block, so this is part of the block validity process — this is one of the validity conditions. Receiving the payload — so the consensus layer doesn't actually check the consistency of that hash with the payload, and it's actually asking the application layer to check that consistency. I see. Out of curiosity, why is it that direction, since the consensus layer receives a block first, is that correct? Yeah, that's right. So wouldn't that create a DoS vector, in the sense that the consensus layer is doing work to pass the block on to the application layer before doing a very simple check just to make sure it's reasonable? It does not create a DoS vector, because before this part — this application block processing part — starts to work, the signature that is on the beacon chain, the validator signature under the block, is verified. Okay, so the consensus layer receives a block, verifies the signature, which means we have someone to slash if it's bad, and then it sends the block down to the application layer and the application layer does the actual validation of the block itself. Is that accurate?
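As a rough illustration of the hash check described above, here is a minimal Python sketch. It is not the real encoding: a client would RLP-encode the full header and keccak-256 hash it, and the exact field names and constant values are assumptions for the example.

```python
# Minimal sketch of the block-hash consistency check described above.
# A real client RLP-encodes the header and hashes it with keccak-256;
# here a JSON dump plus sha256 stands in, so shapes are illustrative only.
import hashlib
import json

# Proof-of-work related fields the application layer is expected to fill
# with constants after the merge (values here are assumptions).
POST_MERGE_CONSTANTS = {"difficulty": 1, "nonce": 0}

def assemble_header(payload: dict) -> dict:
    """Combine the payload sent by the consensus layer with the constant fields."""
    header = dict(payload)            # parent_hash, state_root, timestamp, coinbase, ...
    header.update(POST_MERGE_CONSTANTS)
    return header

def block_hash(header: dict) -> str:
    encoded = json.dumps(header, sort_keys=True).encode()    # stand-in for RLP
    return "0x" + hashlib.sha256(encoded).hexdigest()         # stand-in for keccak-256

def validate_payload(payload: dict, expected_hash: str) -> bool:
    # One of the validity conditions: the hash quoted by the consensus layer
    # must match the block the application layer re-assembles locally.
    return block_hash(assemble_header(payload)) == expected_hash
```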
Yeah, that's correct, but not the signature part of it — the application layer is not going to check the signature or the proposer of the block. So it's like if you did proof of work on an invalid block: it would be gossiped around the network, but then it would be quickly dropped because it would be seen as invalid, and there's a huge opportunity cost in doing so. So it's kind of the analog. Okay, because you can only produce one block for your slot, and if you sign two blocks for a slot then you get slashed. And so if you sign a bad block, you're just wasting your slot, basically, right? Correct, and there would be a minor amount of work you could impose on the network, similar to wasting your proof of work. Okay. This actually could be verified by the beacon chain part, but it would need to get RLP on board, and keccak-256, to do it. So it's proposed to make it the responsibility of the application layer. But hashing a block is on the order of microseconds, I don't think it's really relevant — you can just hash it and be done. It's not about execution time, it's about where the code complexity lives, I guess. Yeah, but the application layer doesn't — sorry, go ahead. I just wanted to say that, from a developer perspective, anything I get from somebody else I'm going to check anyway, I don't care whether I should check it or shouldn't. Nice. I think it makes sense for each eth1 client to actually validate that the requests are meaningful and not just blindly trust everything. Actually, one quick question here: when a beacon chain client passes a block to the application client over that wire, is the intent for it to still be in SSZ form, or in a list-of-fields form, or is the intent for it to already be in RLP form? It's like a JSON payload at that point. Right, where the JSON payload just contains the fields. So, okay, so the application client would be doing the RLP. Correct, it would take the fields and add some of these constants, like difficulty equals one and a couple of things like that. Okay, that makes sense. Just a quick note for non-devs who may be watching the recording: I guess there's a slight abuse of notation when we talk about consensus layer and application layer. Traditionally the application layer is what you think of as Uniswap and Maker and all the dapps. In this case, for us the application layer is kind of the EVM layer, so it's the management of the EVM, the mempool, the management of the EVM state, and all the execution there. And the consensus layer is just the beacon chain layer and the proof of stake. I actually added a little bit of a glossary at the beginning of this document to describe exactly that: what the consensus block and the application block are. Yeah, it's very basic, but we use these terms so that it's more consistent. Yeah, thanks for this very helpful comment, I forgot to mention these notation implications. Is there a better name for it? Application layer is actually a bad name, I think, because that's already in use. Execution layer? Yeah, something like that. The execution layer. It's not actually about the execution only, it will cover much more than the execution — the core thing is the execution, right — but we should maybe take this one offline. Yeah, I think we could debate this for a while.
Yeah, for the record, it covers the transaction pool, it covers the chain management, history retrieval — so much more than the execution. Okay, so Peter mentioned that in a good design, geth would validate everything that is coming from the consensus counterparty, right? Well, I mean, obviously the things that we can validate — whether something is the chain head or not, we're just going to have to trust you on that. Okay, yeah, I was just going to mention this, that there will be trust between consensus and execution anyway, and it's going to be a trusted communication channel. But yeah, any sort of consistency checks and things like that are certainly valuable, especially if they're cheap. Right. And I guess, well, the consensus layer will check the block numbers, right, that they are consecutive, and check that the parent hash matches the previous head of the chain — so geth will do the same checks, and check the gas limit formula. So yeah, it makes sense to do that on both sides, and it doesn't take much computational resources. Okay, so some particular fields — sorry. In addition — you'll possibly get to this later, so feel free to tell me to buzz off — but basically, currently the beacon chain client would notify the eth1 client of these various events. Does that mean that there's one-way communication, where the eth1 client is running the server, the RPC server so to say, and the eth2 client dials in as a client? Right. At least it's proposed this way, as unidirectional communication, but we can think about bidirectional if it makes sense to do it that way. No, I'm just asking for clarification on the document, I'm not saying we should do this. Okay, yeah, so currently it's unidirectional communication: the consensus layer just sends a message and waits for a response and a corresponding action from the application part. Yeah, I think this is a natural design because the whole point of the beacon chain is pretty much to say "this is the head, I want to build a new head" — it's kind of the driver in that respect, so I think it simplifies reasoning about it. Yes, there's one slight detail. For example, if I just start up the eth1 client, then it needs to wait for the eth2 client to do something, otherwise it's just idling — it cannot initiate any actions, which means it must wait for a new head RPC call and just wait until then. So I don't have the capability to tell the eth2 client: hey, I just crashed, I just recovered, give me a head. That's not necessarily a problem, I'm just exploring the design. Yeah. Interesting. I mean, you can even do that using RPCs, right, if you're streaming endpoints. It doesn't have to be that one party is always the initiator. It's just a thought. Yeah, I would presume that if you do crash and you restart, you're going to get a new head very soon. But also — well, we can talk about this — I think the beacon chain would then be kind of stuck, because it wouldn't have an endpoint, so it would just be waiting. This means if geth crashes and restarts, it's going to wait for the beacon chain, and the beacon chain will be stuck because it just lost geth. Anyway, it's just food for thought. Yeah, that's right, and if you have two separate pieces of software, then this is still one client.
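To make the unidirectional channel concrete, here is a minimal sketch of how the consensus side might drive the application node over JSON-RPC. The method name, port and payload shape are placeholders — the actual routes were still being iterated on at this point — so treat it as an illustration of the calling direction only.

```python
# Sketch of the unidirectional channel: the consensus node is the caller,
# the application node is the JSON-RPC server. Method name and port are
# placeholders, not the final spec.
import json
import urllib.request

APP_NODE_RPC = "http://127.0.0.1:8550"   # assumed separate RPC port

def notify(method: str, params: list) -> dict:
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    req = urllib.request.Request(APP_NODE_RPC, data=body,
                                 headers={"Content-Type": "application/json"})
    # The consensus side sends the message and waits for the response,
    # then acts on the result; the application node never initiates.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (hypothetical method name): announce a new head by hash only.
# notify("consensus_setHead", ["0xabc..."])
```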
So the consensus node and the application node — this is still one client, and if the application part just crashes, then we can say that the entire client crashed. So yeah, there should be some kind of status message or ping or whatever, or the response to one of these messages — because they will be frequently sent by the consensus side, like new block or new head — so it can figure out that the application part crashed and act accordingly on the consensus side. But I think this is more of an implementation detail, and for consensus purposes, for chain management purposes, this unidirectional channel should be enough to go with. So, just one thing — sorry, I keep derailing this discussion. Although it seems like an implementation detail, the reason I want to highlight that it's not necessarily one is because, for example, synchronization code or the events that we react upon also depend on what we can do. So it's not just something internal, it somewhat drives synchronization too. And essentially we can do it either way, so there's no good or bad solution, because either design will work; it's just that the implementation has to follow the capabilities of it. Anyway, I'll stop there. Yeah, I need to think about it more, I guess. Okay, so some fields here, difficulty and nonce, are deprecated and set to constants. Why is difficulty set to one here instead of being zero? I think it makes some sort of sense at the network level — but let's not go into details — it would make sense to keep the total difficulty increasing for the eth status message that is sent by the application client, by the current eth1 mainnet client, once it starts and connects to other peers. So let's not focus on this. As for the timestamp: the timestamp will be communicated from the consensus part, and it's going to be the timestamp of the current slot where the block is being produced or executed. There will be no reward for the block, and transaction fees — or transaction tips after 1559 — will go to the beneficiary, to the coinbase. So that's it about the block processing part. And so, there are rewards, but there's no issuance given out by the EVM to the coinbase anymore — that's all handled on the beacon chain side for the validator. Yep. So, will that change when the coinbase's balance increases, from the perspective of when it's usable? Would this be an implementation challenge? Well, yeah, I guess a block would be aware of whether or not it's post-transition because it can check its mix hash. Question once again? Sure, to take a step back: currently, when you are a block author, you are rewarded with ETH and fees — and I don't actually know if that happens at the beginning of the block or at the end of the block, but you definitely can spend them as soon as the next block happens. Is that changing with the consensus change, and therefore block rewards? So there's still a beneficiary, there's just zero issuance to that beneficiary — the fees still go there. Okay, and the fees will be accounted for as part of the state root, like in the account trie, right? And so it'll still be usable right away by users, is that accurate? It's usable at the exact time it's usable now. The only thing that changes is that we don't have the two ether per block like we have now. Gotcha. Okay.
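A small sketch of the two points just discussed — the timestamp being derived from the slot, and the coinbase being credited with fees only, no issuance. The twelve-second slot constant is the beacon chain value; the fee handling is deliberately simplified (post-1559 only the priority fees would be credited, with base fees burned).

```python
# Sketch: timestamp comes from the consensus layer (the slot time), and the
# coinbase receives only transaction fees, no block subsidy.
SECONDS_PER_SLOT = 12  # beacon chain constant

def timestamp_at_slot(genesis_time: int, slot: int) -> int:
    return genesis_time + slot * SECONDS_PER_SLOT

def apply_block_rewards(balances: dict, coinbase: str, tx_fees: list) -> None:
    # No issuance any more: the only credit is the sum of fees.
    # (After 1559 this would be the priority fees; base fees are burned.)
    balances[coinbase] = balances.get(coinbase, 0) + sum(tx_fees)
```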
And so I'm assuming the consensus layer, when it tells the application layer to make a block, will send the coinbase to use as part of that package — is that correct? So I just wanted to highlight something; I'm guessing most clients are already capable of it. Essentially, once we set the block subsidy to zero — or maybe even with 1559 on mainnet — an interesting thing happens: suddenly you can have multiple blocks with the same state root hash. Currently on mainnet this is impossible, because the reward is always accumulating, so you cannot have repeating hashes. But this, for example, can happen on Clique networks, and it's a pain in the ass to always make sure that the clients handle it correctly. I wanted to emphasize that once we remove the block subsidy, an empty block will actually produce the same root hash as the previous one, which may or may not be desirable. I mean, it's fine, clients just need to be aware of this quirk. If we want to be lazy we can just decrease the subsidy to one wei, right? But in practice, with 1559, if it starts burning things, we still might end up with a duplicate state anyway. But aren't the base fees in 1559 paid by the transaction senders and not the coinbase? Yeah, but you can have a coinbase send one transaction that burns exactly what it was rewarded with in fees, so the state root would be the same — which is actually an optimization you could make with 1559, because you can get your block on the wire much faster, since you don't need to actually calculate the state root. And so this is theoretically possible even before the merge, right — 1559 goes into London. In fact, I think it would be wise for miners to implement this because it gives them a slight edge. One more related follow-up: if you have the overlaid beacon chain, and you have this application chain embedded inside of it, the application chain, I think, at this point in the current design, could have two branches with the same roots at the end of them — because you essentially don't have the proof of work, you don't have the nonce. You can have the same application payload on two different branches from the same parent. And so, we said there's a one-to-one relationship — I think we could probably make it a one-to-one relationship eventually by including something in there, but it is currently not a one-to-one relationship. We can talk about that. Right, so it would have two identical block hashes, right? Which is almost certainly fine, because if the beacon chain says the head is this, then from the perspective of the application layer it's the same thing. It's just that the beacon chain came to consensus on the same state in terms of the application layer. So, again, that's probably not much of a design issue. Yeah, but worth considering anyway. Okay, so moving forward, the external fork choice rule.
It just means that there is no more total difficulty fork choice rule, and the application layer tracks the messages from the consensus layer notifying it that there is a new head, and it must do the reorg — or not do the reorg if it's not needed — according to what the new head is in the observation of the consensus layer. That sounds pretty simple, but I guess it will be a big chunk of work to modify the current mainnet client to follow this external fork choice rule. But in theory it's okay, as long as one specific condition holds, and actually that was my question. So, setting the difficulty to one is actually nice, because we can still track the longest chain. So the question is: can it happen that the beacon chain tells me that up until now I had a chain of three blocks, and the new head will be block number two on a side chain? Yes, it can happen that it shortens the canonical chain. Okay. Yeah, difficulty will not work, because the current fork choice rule is simple and each block is self-sufficient in terms of the fork choice — it adds some difficulty and you can decide right away whether it's the head or not — but on the beacon chain things are more complicated, because the head can be updated: a block can become the new head a couple of slots after it's been observed, inserted and processed. And that's why replacing this mechanism by just increasing the difficulty won't work. And currently on the beacon chain, do we have reorgs, and if yes, how deep are those reorgs? I'm not necessarily asking how deep they can be, rather how often that practically happens while the system is operating. Has there been even a single reorg on mainnet? Maybe one. We do see orphaned blocks from time to time, so presumably at least one node — the person who produced it — is having to reorg out of that block. But other than that, I don't think we've seen anything much deeper, and maybe we should at least anecdotally know — we should take a look. I can tell you that I haven't seen anything really deeper than one, but again, that's just a local observation from a single node rather than the whole network. Okay, so the reason I'm asking is because, at least in geth, a lot of the ugly complexity in the whole synchronization and block propagation is due to handling these reorgs and side forks and whatnot. But at least from a synchronization perspective, things can get a lot simpler if we don't expect reorgs — of course we need to handle them, but it's one thing to handle it as a special occasion that happens once a day or once a week, and another to handle it five times per minute. But if it happens only occasionally then it's fine. Yeah, I think they happen all the time. Go ahead. I just wanted to say that they happen all the time on the testnets, especially on the big ones. When the validator counts are high and maybe people are not running on the best machines, then there are multi-block reorgs. The other place where it typically happens is when people are syncing and they think they're already synced but they're not quite. Those are two common cases at least. The other thing that fits in there is that you also have this notion of finality, which becomes kind of a natural place to do state cleanups and pruning, whereas I know that's probably now handled at a fixed depth.
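A minimal sketch of the "external fork choice" idea discussed above: the application layer keeps every block it has seen and simply repoints its head wherever the consensus layer says, with no total difficulty comparison and with the possibility that the new head is an ancestor of the current one. The class and method names are illustrative, not any client's actual API.

```python
# Sketch: the beacon chain is authoritative for the head; the application
# layer just tracks blocks and switches its head on request, even if that
# shortens the canonical chain.
class Chain:
    def __init__(self, genesis_hash: str):
        self.blocks = {genesis_hash: None}   # block hash -> parent hash
        self.head = genesis_hash

    def insert(self, block_hash: str, parent_hash: str) -> None:
        if parent_hash not in self.blocks:
            raise KeyError("unknown parent: need to sync ancestors first")
        self.blocks[block_hash] = parent_hash

    def set_head(self, block_hash: str) -> None:
        # No total-difficulty check; the new head may even be an ancestor
        # of the current head (a reorg that shortens the chain).
        if block_hash not in self.blocks:
            raise KeyError("unknown block: cannot switch head")
        self.head = block_hash
```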
So that's something to consider: you would maybe only do those actions upon a signal from the beacon node that there was finality. That can be a nicer, essentially more optimal place to prune, but in the extreme it might be a worse place — in any case a variable place, at least. Yeah, not that we've seen any variance in finality on mainnet, but you have to be able to plan for it. Okay, okay. So, the fork choice. Yeah. This second condition, this second approach — you can update the head if the new block is the child of the current chain head — is, I guess, an implementation detail. It might be adopted or not. So, yeah, the main message here is new head. Anything here before we move to the network part? Anything that I probably missed and we want to discuss? By the way, just a random question. You have the two messages, new block and new head. Does new head, in the current spec or prototypes, send the entire block or only the hash? There is no new head in the current prototype yet, but it is supposed to send just the hash. New head would be used to signal reorgs onto things that it should already have, so it would only be the hash. So, yeah, and these two messages are causally dependent, so new block and new head must be processed sequentially as they come, to avoid the weird case where new head points to a new block that hasn't yet been persisted. And from the sender's perspective they will be consistent — from the beacon chain perspective, new head won't point to a block that hasn't yet been processed. Okay. And yeah, also assemble block — that is, producing the new block — should point to an already processed block as well. That's also a dependency here. And yeah, the network. The first change is that the block gossip should be turned off on the application side, it should be deprecated. And we are now talking as if the merge had happened some few epochs ago and it's completely in proof of stake mode — we can touch the corner cases of the transition process later; the transition process is complicated and has a lot of edge cases. So the block gossip just doesn't work, because the application layer doesn't know about the beacon state, about validators, and it's not able to verify that the block is eligible, that the seal is correct. That's handled completely by the beacon chain after the merge. Then there is the state sync proposal and the block sync proposal. This is just a proposal, an idea of how this could be implemented. The basic idea behind state sync is that it can use fast sync or snap sync or whatever with the underlying network layer that is currently on mainnet — use the same messages. The only big change here is that the application layer will know what the current head of the chain is, and it will be able to start downloading the state upon receiving these new block and new head requests. So new block will contain the state root, then new head will say that this is the head, so let's just start downloading the state with that state root. For the chain history data, which is headers, bodies and receipts, it would make sense to wait until the block gets finalized.
That will mean that there is one chain, starting from genesis and ending with this finalized block. So it makes sense to wait for this event, to get rid of fork management during the sync, and just go backwards as it is done now: download headers, bodies, receipts, and so on. And yeah, one additional thing here is that there is no need to verify the ethash anymore, because it's proven by the proof of stake consensus on top of the previous chain. Yep. Meaning that when you're handling historic blocks prior to the merge that have an ethash seal, you don't have to validate it, because the chain would be finalized on the proof of stake side, and a chain with a known head that is consistent down to genesis is all you really need. The question is, do I understand correctly that the state downloader is just bootstrapped with some state root that is taken from the wire, from observation of the network, and then constantly updated with new state roots as new block or new block hash messages come from the wire? Yes, almost correct. It doesn't get updated every time the chain progresses, because there are a few thousand modifications in every block, so we would keep downloading data that goes stale in 40 seconds. So currently what we do is: if the root gets older than 128 blocks — that's the threshold for which geth maintains the state — we just jump to a fresh root. And this way we just restart the state sync every 15 minutes instead of every 15 seconds. So essentially, yes, we are surfing the chain head until the downloader, until the block retrieval, catches up. So it would make sense to, you know, probably jump between finalized checkpoints, and then process blocks from the most recent finalized checkpoint, just execute them all. It sounds like you could be told the head consistently throughout that process, and geth can just make the decision locally on where it's updating, where it's pointing to sync. Yeah, exactly. I just have to highlight that it's actually probably not a good idea to mix finalization into it, because tracking a few tiny forks at the head of the chain is fine. If I have to download 12 million blocks, it doesn't really matter whether the top two or three blocks keep reorging each other — that's going to be fairly trivial to manage. And the finalized block — I don't know how deep the finalization currently is, how many blocks? 64 blocks. Sorry? 64 blocks, 64 blocks, normally. My intuition too was that just signaling new blocks and new head is probably enough to keep consistency with what you're doing today, rather than mixing in finality, and I don't know if there's a big gain there. So the thing is that even if we mix in finality and sync up to the last finalized block and then start executing on top, the execution on top will need to do exactly the same side-fork management, so I'm not saving anything. But anyway, it's really a detail. Having a bit more informational context from the beacon node cannot hurt, so it's definitely not a bad thing to know that something was finalized — it just might be useless. Yeah. Yeah, I mean those signals should be sent and you can figure out what to do with them, if anything.
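A small sketch of the pivot rotation Peter describes above: keep syncing against one state root until it falls behind the window geth keeps state for (around 128 blocks), then jump to the fresh root carried by the latest new block message. The window constant and the tuple shape are assumptions for the illustration.

```python
# Sketch of state-sync pivot rotation: stick with the current root until it
# falls out of the ~128-block window, then restart against a fresh one.
STATE_WINDOW = 128

def choose_pivot(pivot: tuple, head: tuple) -> tuple:
    """pivot and head are (block_number, state_root) pairs; head is the
    latest pair announced via new block / new head."""
    pivot_number, _ = pivot
    head_number, _ = head
    if head_number - pivot_number > STATE_WINDOW:
        return head      # old root is about to go stale: jump to a fresh one
    return pivot         # keep downloading against the current root
```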
Okay, so sync is in process, and there is a need to notify the consensus layer that the sync is done. What is proposed is just to have a status in the response to each new block message, but that probably looks like a crutch here. It does make sense, though, because if the application node is able to execute a block, that means the sync is finished. Probably it's worth doing it another way, like exposing and using the eth_syncing JSON-RPC method. So that part is not specified in sufficient detail, but the consensus side just needs to know that the application node got synced and is able to produce blocks and verify them. But I think you want something like a block number, because eth_syncing or whatever — it might be that the node just synced to half of the chain and simply doesn't have enough peers to know that more chain exists. So you'd better look at which block the node was synced to, so you know whether it could basically execute the block or not. Well, once you enter proof of stake mode, the beacon chain is telling us the head, so we know exactly whether we're in sync or not. Yeah, but it's also possible that for the first 12 million blocks we were stuck somewhere and we stopped syncing already — whereas if the beacon chain told me that I should be at block 15 million, then I know I'm really behind. Yeah. So this issue could arise right before the transition, before the beacon chain takes over from the difficulty check. Yeah, because as soon as we have new head we can definitely figure out whether we've synced to it or not. Yeah, but I mean this is probably a legit issue, but only if you are synchronizing exactly during the transition, which probably won't take too much time. Anyhoo. It's definitely a corner case that needs to be kept in mind. Yep. Since we're talking about block numbers here: I don't see a block number in new block or new head, so will we get that? And the second thing: we are blocking the block gossip from the network. So when we get a new block in the application layer, does the application layer now need to download that block from other application layer peers, or is the payload of the block going in — all the transactions, etc.? Right, so new block will send the payload, it doesn't need to be downloaded from other peers. And regarding the number: yeah, the block number will be retained, so the block numbers of the application blocks will stay consecutive as we have them today, and the number will be sent within this new block message as a part of the application payload. Okay, any questions regarding this state sync, or any concerns? Yeah, go ahead. Yeah, just a tiny addition: I think what we said about new block is true when we've already caught up to the head, but when we're initially syncing — it won't be gossiped, but we still need to proactively download the blocks from the other peers on the application layer, right? If we just got a new head and we have zero state, then it's up to us. Yeah, so the block retrieval methods would stay the same, it's just the block broadcast, the block announcements, that would get removed. Okay, okay, so peers just will not gossip new heads and new blocks. Okay. Yep, and to give the whole perspective.
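To illustrate the point about using a block number rather than a bare "syncing" flag, here is a tiny sketch: the application node compares its highest executed block against the head number the beacon chain announced, since eth_syncing alone cannot tell whether more chain exists. The function name and tolerance parameter are made up for the example.

```python
# Sketch: decide sync status by comparing against the head number announced
# by the consensus layer, rather than a local boolean flag.
def is_synced(latest_executed_block: int, beacon_announced_head: int,
              tolerance: int = 0) -> bool:
    """True if the application node is caught up (within `tolerance` blocks)
    to the head the beacon chain told us about."""
    return latest_executed_block + tolerance >= beacon_announced_head
```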
Once this new combined client starts up with a fresh state — with an empty state and an empty chain — what will happen? The first step is for the consensus layer to catch up to the head of the beacon chain. Then, once it's caught up, that will be communicated down to the application layer, signaling that it can start — that it may start — downloading the state or blocks or whatever. Can the consensus layer follow the consensus head with a sense of authority without the application layer being fully synced yet, or at all? I think so. In the long run, there's the nuance that the thing in the consensus layer that depends on application layer activity is the deposits, which right after the merge is not an issue, because eth1 data voting is an honest majority thing, but eventually we would want to get rid of that mechanism. And so I guess the validation might have to be redone a bit: first you would sync the consensus chain, and if the application chain isn't verified yet, then you would just take the deposit roots on trust and remember them, and then later on, when you get the application chain, you would check that everything matches up. There's a difference between following the head and necessarily being able to participate at the head — like, you wouldn't be able to build blocks with an application layer payload. And there are certainly grades of being able to follow the head: there's a light client protocol, there's maybe just following the beacon chain, there's doing full validity checks — there's a lot of options. Right, but the default algorithm for clients following the consensus chain is going to be that, as part of the process of verifying a consensus block, they will pass the corresponding application block along to the application client and check if it's correct, right? Right, right. So it's like if you just followed the proof of work chain and you knew that the proof of work had the biggest total difficulty but you never checked the consistency of execution — you can't be certain. Yeah. Go ahead. Hypothetically, could someone running a consensus client who didn't want to, or couldn't for whatever reason, run an application client — could they produce empty blocks if their turn came up? Do you actually need a full application client to produce a block, or do you just need something that can give you a thing shaped like a block? Well, if there are no transactions, then the state root remains the same, so you just take the last new block you got and feed it back — or maybe just change the coinbase — and feed it back to the consensus client. Right, that's what I was kind of wondering: can you do that, is that valid, would that work? Based on the current spec, yes, it seems to me. You wouldn't actually be able to validate that you're even on the right chain, right? So basically you could build on top of one block and just hope it is a valid one, but you would have no way of knowing. Well, if you trust the eth2 client that this was the head block, then — I mean, yeah, sure, you didn't validate it, but if in general there aren't attacker blocks in the world, then it's a good characteristic. It's not a bad strategy in that it's that or nothing, and you at least get your reward if you produce something. It's a bad strategy in terms of verifying the block, if you're attesting to a block without validating it.
I don't know if that is healthy or not. By the way, just to go back to the previous question: I think the question was whether the beacon chain, the consensus chain, needs to be able to sync without the application layer. I think that would be a hard yes. I mean, you can debate the trust model, but the thing is that the expectation is that the state trie will only be available for the head few blocks, maybe the head 64-128 blocks. And that means that in order for the application chain to synchronize the state, it must have a recent root hash. So the consensus client needs to be able to provide a recent one. And ultimately you rely on the application layer to get to the head and know that things were consistently processed, rather than knowing it in real time by passing things in during sync. The security model would be similar to fast sync: essentially just download, not the latest head or the latest state, but some recent-ish state, and then just execute a whole bunch of blocks on top and make sure that nothing goes wrong. Yeah, at the end of the state sync, as it is now, it executes some blocks on top of the most recent state, right, to catch up? It doesn't execute them to catch up with the head, because it could end up exactly on the head — it is rather a security mechanism: in order for you to feed somebody a bad chain or a bad state, you would also need to mine 64 blocks on top. And then there's the block sync proposal. The idea is pretty much the same: once the head is known, the application layer may download headers backwards, in reverse order, and then execute the blocks. Mikhail, you cut out for like 10 or 20 seconds. Sorry, can you hear me now? I think so, yes. Something happened. Okay, so there could also be a hybrid strategy where the application layer starts up and starts syncing blocks from the first block, and at the same time the consensus layer catches up with the head, communicates this head to the application layer, which starts downloading headers in reverse order; then these two downloading processes converge at some point and it goes forward from there. So that's also an option here. You don't want to do that, it's way too complicated. I mean, it can go horribly wrong. You would need to check the proof of work, and if you run out of proofs of work, then you cannot verify anything. And the other thing is that currently the proof of work chain is kept alive because everybody keeps mining proof of work on top, but once the head is directed by proof of stake, essentially I could mine an alternative reality for Ethereum that is heavier than the original Ethereum. So — you don't have to verify the proof of work, because you're going downwards and you always have the information about the parent hash. When you receive the head from the proof of stake chain, you have the parent hash, you keep verifying the parent hashes, and at some point you reach genesis, which means you are sure that you reached the same genesis as you would on the normal chain. That's actually how we currently synchronize at the moment, so it's exactly the same behavior as you have now. I was just saying that if you start from the head, then you cannot also start from genesis and meet in the middle. Yeah, I agree on that. Oh, sorry. Yeah, sorry.
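A minimal sketch of the backward header fill just described: starting from the head hash the beacon chain handed us, walk parent hashes all the way down to genesis, with no ethash verification needed. The `fetch_header` callable and the genesis placeholder are assumptions standing in for whatever peer request actually retrieves a header by hash.

```python
# Sketch of block sync: fill headers backwards from a trusted head hash,
# verifying only the parent-hash linkage (no proof-of-work check needed).
GENESIS_HASH = "0x" + "00" * 32   # placeholder for the real genesis hash

def sync_headers_backwards(head_hash: str, fetch_header) -> list:
    """fetch_header(hash) -> {"hash": ..., "parent_hash": ...} from a peer."""
    chain = []
    cursor = head_hash
    while cursor != GENESIS_HASH:          # simplified stop condition
        header = fetch_header(cursor)
        assert header["hash"] == cursor, "peer returned the wrong header"
        chain.append(header)
        cursor = header["parent_hash"]      # walk down; ethash is not checked
    return list(reversed(chain))            # genesis -> head order
```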
No — starting from the head and downloading the chain backwards and filling it in, that's completely valid. Sorry, I didn't hear that part. Okay, so by the way, how much time does it take to download the chain of headers for the current mainnet? 15 minutes. It's about three gigs. No, actually it's 5.5 gigabytes currently. Yes, 15 minutes doesn't sound that bad. Because this block sync is desirable and useful for running an archive node, the entire sync process after it's bootstrapped with headers will take much more time. I mean, the mechanism is very similar to turbo-geth, and the header and body download is the minority of the time, so I don't think it's a problem. Okay, any questions on the network part? Anything I probably missed, anything that comes to mind? So I have a question: do we consider here any adjustments to where the bodies are stored? Did you consider discussing that also with this idea that comes from Piper's team at Trinity, the DHT for block bodies? Because block bodies are quite heavy — they take 150 gigs nowadays, and they take a bit longer to download. So do you consider that? The nice thing about this proposal is that it's not opinionated about how the application layer gets things. By default we reuse things as they are, but those promises can be broken: where things are stored can be changed, protocols can be changed. For the purposes of getting this merge in place, it's basically expected that this does work, whereas in the future, if you moved bodies to a DHT and had a different way of retrieving them — if the current protocols' promise that they were there was broken — you could still get the head from the beacon chain and choose how to go retrieve that information. Yeah, they're definitely decoupled in a nice way, but we should still be pushing on and considering how to make that sustainable. Okay, makes perfect sense. Thank you. Okay. Anything else before we move to the transition process? Okay, so the transition process. The complexity of this process comes from the requirement for the software to be able to operate in three modes. The software, the client, will be updated some decent amount of time before the merge, before the potential point of the merge. It will have to operate in proof of work mode until the transition conditions are met; then transition mode comes, where the total difficulty fork choice is replaced by the external fork choice rule driven by the beacon chain, but blocks are still gossiped on the application network. And once the first block that is proposed by a proof of stake proposer gets finalized, the software turns into proof of stake mode, which is the after-the-merge mode, and operates normally in that. So that's the complexity of the process. Okay, what happens in the case of a consensus issue between major Ethereum 1 clients? What do you mean by a consensus issue — you mean there is a consensus break, right, so one chain is — yeah, I see. So that's interesting.
It depends on how big a chunk of nodes on mainnet went out of consensus. The validators that are listening to the minority Ethereum 1 client won't be slashed — their blocks will be orphaned by the fork choice rule of the beacon chain. Yeah, so they would be penalized slowly; they stand to lose about as much as they stand to make. So if they were on that orphaned chain for a year they would, you know, lose 100% of their stake. And yeah, it's similar to what would happen today: if 75% of the miners were on client A and 25% on client B, and client B disagreed, then the chain weight of client A would be much greater than the chain weight of client B. That kind of thing gets resolved, but it ultimately becomes a question of what is correct and which version of the software people need to fork to and go in that direction. But in terms of proof of stake, if you have greater than two thirds on one of them, there would be a finality signal happening, so that's something to consider — like, two epochs after this chain break one side would likely finalize, which would be a much stronger signal than, say, just proof of work chain weight. But even then, it's definitely a catastrophic scenario, depending on how much of the network is on that, and should be avoided, similar to today. Thanks. Okay, so the transition process on the consensus layer looks as follows. There is a value for the total difficulty, and once it's reached by mainnet — the beacon chain will be tracking all the blocks from mainnet, so it will already be a combined client with the beacon node and the application node, where the application node is still operating under the proof of work conditions — once the total difficulty is met, the consensus layer will take this block, the last proof of work block, and build the first proof of stake block on top of it. And it communicates this to the application layer: it will send this block in a new block message, and then new head will be sent accordingly. And once the application node receives these messages, it turns to the external fork choice rule and starts following these messages from the consensus layer. There will also be assemble block for the cases when you have to produce a block — so it's eligible on the proof of work side as well. And then, after some time, the first finalized block message comes, and the application node understands that this is the time to turn off the block gossip and turn to proof of stake mode. So that's how it looks from the chain progress perspective. Yeah, go ahead. Yeah, just a question. So basically, if you're in proof of work mode: if you've been chosen as a block proposer on the beacon chain, you will receive an assemble block; if you've been chosen as a block validator, you will receive a new block; and the rest will receive a new head. Right? When the first block is being proposed — right, right. And when you send this new block to the application node, and the application node doesn't know the parent of this block, it will have to download its parent and its ancestors. Yep, to verify it. Otherwise the attester will not be able to attest to this block, unless it's fully executed. This is important. Yeah, Peter. Two questions.
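Here is a small sketch of the transition condition from the consensus layer's point of view, as described above: once a proof of work block reaches the configured total difficulty, it becomes the parent of the first proof of stake block. The threshold value and the greater-or-equal comparison are placeholders for illustration, not the agreed parameters.

```python
# Sketch: the last proof-of-work block is whichever block first reaches the
# configured total difficulty; the first proof-of-stake block embeds a
# payload built on top of it. Threshold value is a placeholder.
TRANSITION_TOTAL_DIFFICULTY = 2 ** 53   # illustrative value only

def is_valid_transition_block(pow_block: dict) -> bool:
    return pow_block["total_difficulty"] >= TRANSITION_TOTAL_DIFFICULTY

def pick_first_pos_parent(pow_head: dict) -> str:
    if not is_valid_transition_block(pow_head):
        raise ValueError("proof-of-work chain has not reached the threshold yet")
    return pow_head["hash"]   # parent hash for the first proof-of-stake payload
```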
One of them: you mentioned that while we are in proof of work mode, the beacon chain will follow the eth1 chain to figure out when it wants to transition — but how exactly will the beacon chain follow it? Probably by calling into geth. Yeah, that's a good question, right? Like, eth_getBlockByHash currently contains the total difficulty, right? It returns the total difficulty? Yeah. No, I don't know — when I checked the RPC interface docs, they said that it returns the total difficulty. Yep. Yeah, but if it doesn't, we can always add it. Yeah, there will have to be an RPC endpoint that contains this total difficulty, the block hash, the head — you know, this information for the block — and it will also have to contain a flag for whether this block is valid or not, so that's also required. The beacon node already asks the application node for the head, because it has to do eth1 data voting, right? Right. Yeah, I mean, they're already able to communicate today on some limited interface about state and head. And since these RPC methods, if they are implemented as JSON-RPC, will sit on a separate port for security reasons, it probably makes sense to implement one more RPC method that will aid this process. Okay. Another question: here you have, between the proof of work mode and the proof of stake mode, this transition mode. I mean, I don't really understand why we need this middle ground — obviously we didn't finalize a block yet, but why does that matter? We want the block gossip to keep working on the application layer until we get the first finalized block. So if during this time I'm expected to receive new block and new head, then how — I mean, I propagate a block but I don't know whether it's relevant or not, I don't know how to choose the fork. I guess the idea is that while the transition is not finalized, there is still the possibility of the beacon chain reorging to a different beacon chain that has a different first embedded block, and that different first embedded block would have a proof of work parent. That proof of work parent could potentially be mined at some point later, and so you still need the ability to keep broadcasting them. But after the first embedded block is finalized, then there's nothing that can possibly happen on the proof of work side that's relevant to the validity of any beacon block, and so you can stop broadcasting proof of work blocks. Okay, but then the important detail here is that I'm only broadcasting the proof of work part of the blocks — so if I have three new proof of work blocks and a hundred new proof of stake blocks, I will only broadcast the three proof of work ones. Correct, I don't think there is any reason to broadcast proof of stake blocks over gossip using the application clients. And you could probably even restrict proof of work block gossip such that you only broadcast blocks up to one block past the total difficulty, and no further children on such a chain — because the only valid block to include on the proof of stake side would be that first child past the total difficulty, or, depending on whether you're doing greater-than or equal-to, the one right at the threshold. Okay, that makes sense. Okay, Peter, anything else here?
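As a concrete illustration of "probably by calling into geth": the standard eth_getBlockByHash response normally carries a totalDifficulty field, so the beacon node could poll that while still in proof of work mode. The dedicated endpoint with a validity flag discussed above would be an addition on top; the port and calling code below are just a sketch.

```python
# Sketch: the beacon node polling the eth1 node's total difficulty over
# standard JSON-RPC while still in proof-of-work mode.
import json
import urllib.request

ETH1_RPC = "http://127.0.0.1:8545"   # assumed local eth1 JSON-RPC endpoint

def get_total_difficulty(block_hash: str) -> int:
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": "eth_getBlockByHash",
                       "params": [block_hash, False]}).encode()
    req = urllib.request.Request(ETH1_RPC, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        block = json.loads(resp.read())["result"]
    # totalDifficulty is a hex quantity, e.g. "0x1b2e4..."
    return int(block["totalDifficulty"], 16)
```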
Just a quick question: in proof of work mode you have the assemble block call. When I call assemble block, that means I'm switching over to proof of stake — I don't need a valid proof of work mined on that one anymore, I just need the transactions executed and that's it. Yeah, right. So this is just: give me a block on top of this one. Okay, and in this case, the block hash — I mean, the point is — okay, never mind. Actually, assembling a block will happen on top of the current head, so the current proof of work head. That is what is likely to happen: the first proposer of the proof of stake block with an embedded application payload will get the head of the proof of work chain and ask to assemble a block on top of it. Yeah, and it might not be the head for the rest of the network, but as long as this last proof of work block is valid and meets the transition conditions, then everybody is okay with that. So it's valid. And Michael also raised a hand — any questions, or has it already been answered? Okay. Anything else for the transition part? And, by the way, one thing here is that clients with the transition mode will exist for some time after the merge has happened. So if you start such a client, it will need to be designed in a way that it can handle everything correctly: the merge happened, but it doesn't know about it yet, so it will need to take this information from the network. I don't see any big issues with that, but it's definitely worth thinking through. And some time after the merge, I guess the clients will just throw this transition process out of the code base. Interesting question. So, essentially, when I start up an eth1 client, I need to know whether I should start syncing the network using proof of work mode or whether I should wait for the beacon chain to give me something. Yeah, so that's, I guess, an implementation detail — you may have a flag, right, if you know that the merge has happened. But if you don't know, for some reason, you can start operating in proof of work mode, and then, once some message comes from the proof of stake side, like new block or new head, you kind of readjust your mode. Yeah, the client can readjust its mode. Yeah, it's going to be a bit messy. For example, if I start up a fresh client combo, so neither the eth1 client nor the eth2 client is synced at all — I don't know how much time it takes for the eth2 client to sync up, but up until that point the eth1 client will do weird things. I mean, I will start downloading the state root for something that might be completely invalid, because maybe some miner remained on the original chain and didn't fork off, and they are just advertising some heavier states that I will actually sync to, or try to sync to, up until the point where the proof of stake node says: hey, hey, hey, stop, the merge already happened. Yeah, and you can add the block hash for the first proof of stake block, you know, the same way the DAO hard fork block has been handled on the network, if it helps — but it requires some intervention, you know; it's the same as saying that the merge has happened. It requires some intervention from the user.
Yeah, so I guess it would be advisable, for one thing, to have some form of flag or some way to control this, and for another, the eth1 client teams may have to do a quick release after the merge just to flip the switch. Yeah, I mean, if it still requires syncing the whole old proof of work chain, then I think it makes sense after the merge to make a release that hard codes the block hash and maybe the state root of the last proof of work block, and just start syncing towards that block immediately, because why not, even though proof of stake might not be synced yet; you will know it once the merge has actually happened. You cannot really sync to that specific state if it has been pruned from the network, so I guess the only thing needed is to know whether the merge happened or not, so that we know whether we should wait before syncing.

If I may interject... Sorry, we can't hear you well. Very crackly. Yeah, it's still very bad. Could you maybe put your question in the chat? Okay, Micah. Would it be reasonable to assume that, in the most pathological situation where miners are running some custom code towards the end, we should expect to see many, many versions of the last block? Assuming miners are running custom software — no client is going to code this, but assuming miners all write their own code — the rational thing for them to do would be to just repeatedly mine that last block over and over again, right? You mean not in order to avoid meeting the transition conditions? Right, not to avoid them, just to increase your chances that your block is picked as the last block; it's your last chance to make money, and there's no reason not to — you've got hardware that's running, you might as well keep re-mining the head. Right. I don't think there is a reason for miners not to do this, but that's okay for the transition process, because the proposer will pick whatever head it has observed at that moment. I mean, basically we want a block; whichever one it is, that's fine, who cares. It's also not very incentivized, because once the beacon chain picks a block to build on and there are attestations, it quickly becomes very difficult to reorg the beacon chain; the miner can't do anything to reorg past that point to get their new head picked, unless the miner is... Yeah, okay, we have like one more minute left. Any significant questions so far, or can we continue offline?

Okay, cool, my last question. What do you think are the reasonable next steps towards EIPs, towards specifying all of this for the application layer? Do we want to keep going through the details, or does it make sense to work on the spec? So personally, for me this is fairly clear. I mean, the exact format of these assembleBlock and newBlock RPC methods doesn't really matter, so whoever writes up an initial spec, I think we can run with it; there's already an initial implementation in Catalyst, so I don't think there's a need to detail it too much. One thing that's a bit murky for me, and that might be nice to investigate, is what happens with the difficulty.
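A small sketch of the "quick release after the merge" idea mentioned above: a build that hard codes the last proof of work block (or a user flag) tells a freshly started eth1 client to wait for the beacon node instead of following a leftover heavier proof of work chain. All names and constants here are hypothetical, and a real client would of course structure this very differently.

```python
from enum import Enum
from typing import Optional


class Mode(Enum):
    PROOF_OF_WORK = "pow"
    TRANSITION = "transition"
    PROOF_OF_STAKE = "pos"


# Hypothetical terminal block hash shipped in a post-merge release;
# None means this build predates the merge or the value is unknown.
TERMINAL_POW_BLOCK_HASH: Optional[bytes] = None


def initial_mode(merge_flag: bool) -> Mode:
    """Pick the mode an eth1 client should start in."""
    if merge_flag or TERMINAL_POW_BLOCK_HASH is not None:
        # The merge is known to have happened: sync towards the terminal block
        # and let the beacon node drive the head from there.
        return Mode.PROOF_OF_STAKE
    # Otherwise start as a normal proof of work node; a newBlock/setHead message
    # from the beacon node would cause the client to readjust its mode.
    return Mode.PROOF_OF_WORK
```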
And okay, you might say that this is a client implementation detail, how the clients handle canonical chains that are not the most difficult ones, but I think that might be an interesting thing to discuss a bit. You mean how to replace the fork choice? No, no, I mean client-implementation-wise. So currently the clients have it hard coded, so to say, that the canonical chain is chosen based on total difficulty, and it might be worth investigating just how deeply this choice is rooted in the clients. Does Clique do a simulation of that, or is it independent? Just quickly, sorry: the Clique consensus mechanism, does it simulate some sort of total difficulty, or is it totally independent of proof of work? I'm implying that if we have Clique, which is not proof of work, then I assume this is maybe not rooted too deeply, but can you help me understand? Total difficulty is independent of proof of work, so Clique also uses total difficulty for the fork choice. Okay, interesting. So in Clique, essentially what happens is that you have a set of signers that can sign, but at every height one is in turn and everybody else is out of turn, and if you sign when you're supposed to sign, your block has a difficulty of two, otherwise one. So there's always one block that is heavier than the rest. Interesting, I didn't realize that, so it might be deeper than I expected; I just kind of thought that ETH1 clients can handle generic engines. I'm guessing ETH1 clients do have some kind of lower-level set-head method, right, that just sets the head to whatever you want and does a reorg? That does exist, but there are a few problems with it. For example, in Geth, if you do a set head, it deletes everything after that point, so you cannot just jump between chains with set head, because it nukes things; it's really meant to be a rewind method. And the other thing is that, for example, what Geth does when it gets a side block is that it doesn't import it: it just stores it in the database as a flat thing, it doesn't execute anything, and it only starts executing the side chain once its total difficulty exceeds the canonical one. So we have this implicit behavior that we need to hack out somehow. I'm not saying it's hard or impossible, I haven't thought about it, but it's a non-obvious question. You said it nukes portions of the chain; do you mean any descendants, if it were to reorg to a non-head, or what? I didn't understand that. Sorry, you said set head nukes things, and I didn't understand what you meant. So Vitalik mentioned that there's a method called set head in Geth, but that set head is meant to rewind the chain, not to jump between branches. Got it. So if you were jumping between branches, you would set head back to a common ancestor and then set the head to the tip of the other branch after that? Set head is just... okay, you could rename it rewind head: you can only go backwards, you cannot go forward. Okay, cool. Well, I think we should wrap up and continue offline. This is a good comment; I think this exercise with total difficulty will probably be a good task for the hackathon, to try it with any of the clients. Also, there was a question, or a proposal, to use a virtual total difficulty value: the transition total difficulty plus the slot number. It's not going to work, because in the beacon chain a block is not self-sufficient in terms of fork choice; it depends on attestations.
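A toy illustration of the Clique point above: even without proof of work, eth1 clients still pick the canonical chain by total difficulty, because the in-turn signer's block carries difficulty 2 and out-of-turn blocks carry 1, so one branch ends up heavier. The function and variable names are illustrative only, not actual client code.

```python
DIFF_IN_TURN = 2
DIFF_OUT_OF_TURN = 1


def clique_block_difficulty(block_number: int, signer_index: int, signer_count: int) -> int:
    """Difficulty of a Clique block: 2 if this signer is in turn for the height, else 1."""
    return DIFF_IN_TURN if block_number % signer_count == signer_index else DIFF_OUT_OF_TURN


def pick_canonical(chains: list[list[int]]) -> list[int]:
    """Choose the chain with the highest total difficulty (sum of per-block difficulties)."""
    return max(chains, key=sum)


# Example: two competing branches; the one with more in-turn blocks wins.
branch_a = [DIFF_IN_TURN, DIFF_IN_TURN, DIFF_OUT_OF_TURN]
branch_b = [DIFF_IN_TURN, DIFF_OUT_OF_TURN, DIFF_OUT_OF_TURN]
assert pick_canonical([branch_a, branch_b]) == branch_a
```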
An attestation can be included on chain one slot later, and it has an impact on the fork choice, so a virtual total difficulty will not work in this case. Okay, so anything else before we wrap up? Fantastic document, thanks, Danny. Thanks. Yep, all a great job. Okay, thanks everybody for this great discussion. I'm happy that we went through this document, and even happy that we didn't cover the other aspects of the agenda; those can be done later on a subsequent call. This call is bi-weekly, so anybody who wants an invitation, just let me know your email address. See you tomorrow on the AllCoreDevs call. And it's now on the shared calendar for these types of calls. Yes, it is. Yeah, I added it to the protocol calls Google calendar, which lists all of the different eth1, eth2, and now merge calls. I'll put the link in the chat right here if anybody wants to subscribe to it; these calls are included there. And the Cat Herders will be documenting the notes for the call, so they will be available and posted in the GitHub repository. Thank you everyone. Thanks, everyone, for joining. Thank you. Thank you.