Hello everyone, welcome to All Core Devs number 121. We have a couple of things to discuss today, most of them related to the merge. To frame things there: Mikhail put together a document a week or two ago about the consensus API for the beacon chain and the execution layer to communicate after the merge. We discussed this on a merge call and then ran way out of time, so we can continue that discussion here. Then, a bit later in the call, Felix from the Geth team has put together a spec for what a post-merge sync algorithm could look like, so that's the other big thing we need to discuss, and then a bunch of other topics. But yeah, Mikhail, do you want to start? Maybe just give a quick one or two minutes of context on the document, and then we can resume where we left off last time.

Okay, thanks Tim, and thanks for allocating the time for this discussion during the ACD. A bit of context: obviously there will be two counterparties in the client software after the merge, the consensus client part and the execution client part, and we need a communication protocol between them in order to exchange blocks and other stuff. We have something already, which was designed for the Rayonism project and is based on JSON-RPC, but we might want to extend this protocol with other stuff and put other restrictions on the underlying communication protocol. So there is a doc — I'm dropping it into the chat — that shapes and outlines the design space for this engine API. In the link it is still called the consensus API — I just haven't changed the title — but we did this renaming from consensus API to engine API. We started to discuss this document and stopped not far from the beginning, so I'm going to share my screen to continue the discussion from the point we stopped at.
Also, I've made some adjustments to this document and updated it with the results of the discussion we previously had. So, sharing my screen — I'm not sure... okay, yes, go ahead. Is it the right place? Okay, perfect. And what's on the agenda? Yep, this is the agenda. Oh, okay. And this one is the document, right? Yeah. Okay, cool.

So I would encourage us not to fall into deep discussions right now, and if any item we are discussing requires a deeper conversation, let's just mark it as requiring further discussion and continue on Discord or make another kind of call, so as not to spend too much time on everything. So, let me pull up the chat and the comments.

Okay, I will start from the top and go through the comments a bit. Here is a comment that we should consider REST for this kind of API. I'm a bit unsure about REST, and I think it will not suit this API well, but we are still in the design stage, so things might change. One of the things REST might not work well with is the bidirectionality of the protocol: there are a couple of use cases that might need this protocol to be bidirectional, which means the execution client may initiate some requests. The other thing is that REST is about resources, which are entities, and I'm not sure that fits this protocol well. But yeah, if you want to discuss this, let's discuss it on Discord. — Real quick: the usual counter there is server-sent events, which can facilitate that with REST over HTTP, but I'm just putting that out there, I don't really want to discuss it. — Yeah, sure.

Okay, so on the previous call, we decided to replace assemblePayload with a couple of related methods.
So there is preparePayload, which gives a command to the execution client to start building the payload. It has these parameters here, and the client will keep the payload up to date until getPayload is called. At that point the process of producing the payload stops, and the most up-to-date payload is returned back to the consensus client, which can then take it, put it into the beacon block, and propose this block to the network. There is a note that if preparePayload is called with another set of parameters after the first call, then the building process should be restarted with the new parameter set. This makes sense, as the consensus client may receive a new block that becomes the head of the chain, and it might want to restart the process because it will now build its block on top of that other one.

getPayload has the same set of parameters. I argued that it should not be so, but the reason they are here is, first of all, that getPayload can then be used without preparePayload, working like the old assemblePayload — in case there are any use cases for that property. The other thing, which I think is more important, is the additional consistency check: a new block may be received by the consensus client, and preparePayload might be sent before getPayload with the same parameters is processed — there could be a kind of race between these two messages. This is very much a corner case, but it could potentially happen; this is why the set of parameters is here as well. And if it does not match what was sent with preparePayload, the payload should be either adjusted, if that's even possible, or a new one created with this set of parameters and returned back to the consensus client.
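The two-step flow just described can be sketched as a pair of JSON-RPC requests. This is a minimal illustration, not the final spec: the method names (`engine_preparePayload`, `engine_getPayload`), the parameter fields, and the hashes are all assumptions taken loosely from the draft document.

```python
# Illustrative sketch of the preparePayload / getPayload flow discussed above.
# Method and field names follow the draft document and are assumptions.

def make_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Step 1: tell the execution client to start building a payload on top of a
# parent block; it keeps the payload up to date as new transactions arrive.
prepare = make_request("engine_preparePayload", [{
    "parentHash": "0xabc",        # hypothetical head block hash
    "timestamp": 1638316800,
    "random": "0x0",              # RANDAO mix
    "feeRecipient": "0xdead",     # hypothetical coinbase address
}])

# Step 2: some time later, fetch the most up-to-date payload. The same
# parameters are passed again as the consistency check: if they do not match
# what preparePayload received, the execution client must rebuild.
get = make_request("engine_getPayload", prepare["params"], req_id=2)

assert get["params"] == prepare["params"]  # consistency-check inputs match
```

Passing the parameters twice is redundant in the happy path, but, as noted above, it both lets `getPayload` stand alone and guards against the race where a second `preparePayload` lands between the two calls.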
This is to avoid the weird case where the consensus client proposes a block with a payload that does not correspond to that block. What do you think about this additional consistency check? — I think that makes sense. It also leaves optionality for a client to not support preparePayload and just do on-demand gets, which probably is not an optimal strategy but is a reasonable thing to leave in there, so having the full information makes sense to me. — Okay.

— What if, for example, there were a change in the parameters? What is the expected time in which getPayload would return the new, previously unexpected block? — That's a good question. The default behavior might be to just get all the transactions from the mempool, which requires some time to execute them, build a block in the usual way, and return it back as fast as possible. — Yeah, that might end up being a bit implementation-specific in terms of how to handle that strategy; I don't know if it needs to make it into the actual protocol definition. — I do worry a little bit about it. I do feel like there need to be some expectations set, even if they're not part of the protocol, just in general. Because if you send getPayload without a prepare, the node starts building a block, and it can stop adding transactions at any time. So if there are a bunch of transactions in the mempool that are, for example, attack transactions consuming a lot of time, the execution engine could at some point hit a timeout and say, okay, stop trying to add transactions, cut it off here, send the block, because we're out of time. If you don't have any sense of how long is acceptable, then presumably the execution client is going to just do whatever it normally does to get a block, which maybe means hitting a remote server, maybe means just building until the block is full. And these things can take, you know, seconds, ten seconds.
There also might be pathological scenarios where what it's able to return quickly is an empty block. — Yeah, and this case is also related to preparePayload, so if we are adding this kind of protection to the protocol, it should apply there too. — Right. With preparePayload, at least you've got the idea that if I get a preparePayload and I start preparing, and then I get a get, I stop whatever I'm doing and I give them the best I've got as soon as possible — instantly. If all I've got is an empty block, then I can send that right away. Whereas if all you get is the getPayload, then either you default to sending only the empty block, because you have no time to prepare anything, or you decide that you're going to spend some amount of time actually building a block. And there needs to be some limit on that, presumably — you don't want two minutes, for example, that's obviously wrong. But is ten seconds too long? Is it five seconds, two seconds? — As I understood, you were saying that malicious transactions in the mempool could take a lot of time to execute. In that case, we might want to add this time restriction to preparePayload as well, because preparePayload has much more time in advance, right, and it could include all those transactions. — This is what I was mentioning. Micah, are you suggesting that there should be a note about an expected return time, and that it's not necessary that it makes it into the spec? — I don't even know if it needs to make it into the spec. I just think we should give execution client devs enough information — maybe they differ a little bit, but presumably the consensus clients will have a timeout on their end. — Yeah. In that sense, I think you could put a note that this is expected to return a viable block within, say, 500 milliseconds maximum, or something like that. — Yeah, which is reasonable.
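The timing expectation being debated here can be made concrete with a small sketch: a payload builder that adds transactions until a deadline and then cuts the block off, falling back to an empty block rather than building indefinitely. The function name, the deadline value, and the cost model are all hypothetical, not anything from the spec.

```python
# Sketch of a deadline-bounded payload build, as discussed above: return the
# best block available within the budget, even if that is an empty block.
# All names and numbers here are illustrative assumptions.

EMPTY_PAYLOAD = {"transactions": []}

def build_payload(candidate_txs, deadline_ms=500, cost_ms=lambda tx: 1):
    """Add transactions until the time budget is spent, then cut off."""
    payload = {"transactions": []}
    spent = 0
    for tx in candidate_txs:
        spent += cost_ms(tx)
        if spent > deadline_ms:
            break  # out of time: ship whatever we have so far
        payload["transactions"].append(tx)
    return payload

# A pathological mempool where every transaction is slow to execute: the
# only thing deliverable inside the budget is an empty block.
slow = build_payload(["tx1", "tx2"], deadline_ms=100, cost_ms=lambda tx: 200)
assert slow == EMPTY_PAYLOAD
```

This is exactly the behavior contrast above: with a prior `preparePayload` the builder has been accumulating transactions in the background, so "best I've got" is usually non-empty; a cold `getPayload` against an adversarial mempool may only be able to offer the empty block.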
Yeah, and I'd be totally fine with that if it was in the spec; again, I'm also totally fine if it's just something we generally share amongst each other. I just want to make sure it doesn't get left out and forgotten, is all. — Yeah, I do see value in doing this, but if we want to discuss more, let's continue on Discord. What do you think? — Yeah, I'm always happy. — Cool.

Okay, so let's move on to executePayload. It verifies the payload according to the execution layer rule set, which is exposed in the EIP. There is a question from Martin: what if the parent block state is missing — should some error type be defined for that? This document has a section on consistency checks which answers this question, so once we get there, we can discuss it. But the basic idea is that if you call executePayload and the payload can't be processed by the execution client because some information is absent, the execution client responds with a corresponding message that something is wrong. Then the consensus and execution clients start a recovery process — this is one of the options — or the execution client goes to the network, goes to the wire, and pulls all this data. That's the default. Danny? — Oh, I was going to say — and this is getting ahead of ourselves — but I think in a sync protocol, it is going to make sense for the beacon chain to be optimistically processing forward without execution validation, and I think it's likely simplest to handle most of the optionality of the sync protocol underneath: to continue to run executePayload and just continue to send the messages to the execution layer.
And in that sense, I think there might be value in having an enum that's like valid, invalid, unknown, and maybe processing, such that the consensus client knows a block hasn't been fully validated but kind of continues on optimistically. But I don't think we can make that decision without talking about a lot more sync, so we can do that later. — Right, this doc has a suggestion on returning a sync status instead, so it's also covered here, but it depends on the sync design.

So, the consensusValidated message, which is mapped onto the ConsensusValidated event from the EIP. It's sent to the execution client by the consensus client when the beacon block gets validated with respect to the beacon chain state transition — or, as the EIP says, with respect to the consensus rule set. This is required before the block can be persisted by the execution client, even if executePayload returned that the payload is valid with respect to the execution layer rule set. — Yeah, it's like processing a block where you hadn't checked the proof of work, but you'd done all the other processing of the block, and then someone says, hey, the work checks out as well. — Right. Thanks, Danny, for that comparison. So here is the block processing flow; you can check how these messages could be sent. There are two options: consensusValidated may be sent while the payload is being processed, or after that. The alternative would be to send executePayload only after the beacon block has been imported, which would add the delay required to process the beacon block — so keeping these two messages separate opens up the ability to parallelize the beacon block processing and the execution processing, which is nice. Any questions on this part?
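The status enum floated just above can be sketched as follows. The variant names and the response behavior are assumptions for illustration — the point is only that a missing parent state yields a "still syncing" status rather than a hard error, so the consensus client can keep processing optimistically.

```python
from enum import Enum

# Sketch of the executePayload status enum discussed above. Variant names
# are illustrative assumptions, not spec terms.

class PayloadStatus(Enum):
    VALID = "VALID"      # passed the execution-layer rule set
    INVALID = "INVALID"  # violated the execution-layer rule set
    SYNCING = "SYNCING"  # parent state missing; validation deferred

def execute_payload(payload, have_parent_state):
    """Toy executePayload: defer instead of erroring when state is absent."""
    if not have_parent_state:
        # The execution client goes to the wire and pulls the missing data;
        # meanwhile the consensus side may continue optimistically.
        return PayloadStatus.SYNCING
    return PayloadStatus.VALID if payload.get("valid", True) else PayloadStatus.INVALID

assert execute_payload({"valid": True}, have_parent_state=False) is PayloadStatus.SYNCING
```

This mirrors Danny's point: a `SYNCING`-style status lets the beacon chain note that a block is not yet fully validated while still streaming subsequent payloads down to the execution layer.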
— If a consensusValidated message is sent without executePayload having been run for that payload, do we then run executePayload or not? — If it's sent before the corresponding executePayload, my client just skips executePayload and runs consensusValidated... — Or just trigger executePayload plus consensusValidated and return it, if you have the payload. — Yeah, if this message is received but the payload is unknown — is that the case you're asking about? — If executePayload has not been called on that block, should consensusValidated be a trigger to run all the processing? We can just note that as a weird edge case to think about. — Right, but the consensus-related stuff could be cached in memory for a short time, waiting for the payload. — Yeah, there is a cache section here that touches on this question a bit. — So you could say invalid, you could cache it, or you could say, hey, this thing is telling me it's been validated, I should process it. — Yeah, and if it's invalid, then... yeah. There are these options. Okay, I'll check the chat. Okay, cool.

engine_forkchoiceUpdated. There is a PR to the EIP — I'll drop it into the chat — that unifies the two previous events, the chain head set and block finalized events, into one. So this document matches and follows the EIP now, or vice versa. Anyway, here is a suggestion from the previous call, coming from Micah. I've called it the confirmed block hash, which means that this block is confirmed by the attesters in the network — they have voted for it. This is for JSON-RPC, for users of JSON-RPC. Actually, there is a bunch of stuff here: there is a head block hash and a finalized block hash, which must be updated along with the confirmed block hash — all this information must be updated, and all the changes related to this method call must be applied
atomically to the block store, in order to avoid weird states where the head block, even for microseconds, points to another fork than the finalized block hash and the confirmed block hash.

Out of this unification there is one note here, more for consensus client developers. In the EIP, before the transition — before the first finalized block — the finalized block hash should be stubbed with all zeros. So this event will carry the actual head block hash, but the finalized block hash will be stubbed with all zeros until we get the first actually finalized block in the system. No additional work is required to do this, because after the merge fork we will have the execution payload stub in the block filled with all zeros, so we will have this block hash already stubbed with zeros. Sorry, this could be a bit messy, but you can read it — it should be enough to understand what I've just talked about. This was my first try at introducing the confirmed block hash, setting the status for each block.

— I'm just wondering: engine_consensusValidated versus engine_forkchoiceUpdated — what information do you get from each? Why are those two separate? I'm guessing I'm just missing something, but it feels like they're carrying the same information. — Certainly not. consensusValidated means: I checked the proposer signature, I checked the attestations and the other kind of outer consensus components of something I previously had you execute and check on the execution layer, and you can put it into your block tree. Updating the fork choice is independent of the fact that a block was valid to insert into your block tree, and a block that I inserted into your block tree may or may not ever be the head, or in the canonical chain. — Right.
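The atomicity requirement just described — head, confirmed (safe), and finalized hashes applied to the block store as one unit, with the finalized hash stubbed to zeros before the first finalization — can be sketched like this. The class and method names are illustrative, not from the spec.

```python
# Sketch of the atomic fork-choice update described above. Names are
# illustrative assumptions; the point is that no reader should ever observe
# a head from one fork alongside a finalized hash from another.

ZERO_HASH = "0x" + "00" * 32

class BlockStore:
    def __init__(self):
        self.head = None
        self.safe = None
        self.finalized = ZERO_HASH  # stubbed with zeros until first finalization

    def forkchoice_updated(self, head, safe, finalized):
        # Build the complete new view first, then install it in a single
        # assignment, so the three pointers change together.
        new_view = (head, safe, finalized)
        self.head, self.safe, self.finalized = new_view

store = BlockStore()
# Right after the merge: a real head, but no finalized block yet.
store.forkchoice_updated("0xaa", "0xbb", ZERO_HASH)
assert store.finalized == ZERO_HASH
```

A real client would need stronger guarantees than a Python tuple swap (e.g. a lock or an immutable snapshot swapped under readers), but the shape of the contract is the same: one call, three pointers, no intermediate state visible.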
consensusValidated will happen before the block makes it into the head block, in the normal case. — Yes, it's kind of like the outer consensus stuff the execution layer can't validate, and it's the confirmation that all of that was also valid. — Okay, so the execute comes first: the consensus client says, hey, here's a block, please execute it. The execution engine executes it and replies back that it's good; the consensus client then does some additional checks and says, hey, my extra checks are also good. And then at some point later it'll say, hey, this is now the head block, and then eventually this is the confirmed block, and eventually this is the finalized block. — That's the normal path of a block through the process. A lot of that can happen in parallel, like checking the attestations and things like that. And the final thing the consensus client is going to do is actually compute the beacon state root, which includes the execution payload stuff, and then pass it back. — Okay. And if consensusValidated returns that the block was not valid on the consensus side, then we can just throw away the data? — Right, you must discard this block.

— Hey guys, I was just quickly wondering: why is the fork choice stuff communicated to the execution layer in so much detail? I haven't really looked at this API, you know, ever, and seeing it now it just feels kind of weird that the execution layer should know all of these details about the fork choice. — The execution client knowing the finalized block hash is really useful, because the execution client has different tricks for storing state that basically optimize for making it really easy to update, but at the cost of making it hard to reverse. — David, you're super quiet — I can tell you're talking but I can't hear a single word you're saying. — Am I less quiet now?
— Okay, yeah, basically I was just saying that for the finalized block hash in particular, the issue is that the execution clients have a lot of optimizations where they trade off increased efficiency of reading and writing to the state as it is now, in exchange for making it harder to go backwards and revert to previous states. So if you give the execution client a finalized hash, so it knows that it's never, ever going to have to revert past that point, then the execution client can use that information to basically dump the journal, flush memory, and do all sorts of things that make it more efficient. — I understand that part, and it's obviously important to know whether a block is finalized or not, but I don't really get why, for example, it should know that a block is confirmed, because that information seems partial. — Partial finalization information is still useful; it's a trade-off space. — The problem is, if you only have the latest block, that is very unreliable information — it might be even less confirmed than the current head on proof of work. So we want most applications to follow a slightly less aggressive head; basically, that's why the confirmed notion is in there. — So to be clear: you want the head, and you want finality, and you want to update that information atomically — those are really required. And then this notion of confirmed, or safe, is a definition which might help serve things like Web3 APIs on head, and it's likely valuable.

Here is the proposed list of new statuses for blocks — new block identifiers for JSON-RPC.
So a block could be finalized; it could be safe, which means it's confirmed; or unsafe, which is unconfirmed. The set of identifiers is extended with finalized, safe, and unsafe, and safe will be an alias of latest, according to this proposal. So latest will always point to the confirmed block. This is aligned with what we currently have on the proof-of-work chain, because latest always points to a block that could be accepted by the network in terms of proof-of-work verification. The analogue in proof of stake is the confirmed block, the one attesters have voted for. — Okay, I understand. And it's good to atomically update a couple of other pieces of information with that. — Yeah. Okay, I understand — so basically the plan is to treat this confirmed block the way we treat the head block now, and then there may be additional blocks after that, but they are not really meant to be used — I mean, you can use them, but it's not recommended. — Right. For your average user, latest meaning safe is a very reasonable default behavior if you're using a dapp or whatever. However, if you're doing something like MEV extraction or bot work, then you almost certainly want unsafe — but then you also know what you're doing, you recognize that you're taking risks, and you intentionally want to build on the absolute latest block. That's why we return both: both have different use cases for a user. — I will remove the suggestion, but I will move this kind of stuff into this method, just to give more context for the confirmed block hash. Any other questions on the fork choice update?

— Just one — sorry, I'm going back to consensusValidated again. It says the block should be discarded.
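(As a quick aside before the next question: the block-tag proposal above can be sketched in a few lines. The tag names follow the proposal; the resolver function and the hashes are hypothetical.)

```python
# Sketch of the proposed JSON-RPC block identifiers discussed above:
# "finalized", "safe" (confirmed), and "unsafe" (the absolute tip), with
# "latest" aliased to "safe". The resolver itself is an illustrative
# assumption, not a spec'd interface.

def resolve_tag(tag, tips):
    """Resolve a block tag to a hash from the client's current tips.

    `tips` maps the canonical tags to block hashes; "latest" is treated as
    an alias for "safe" under this proposal.
    """
    if tag == "latest":
        tag = "safe"  # default users onto the confirmed head
    return tips[tag]

tips = {"finalized": "0xf0", "safe": "0x5a", "unsafe": "0xff"}
assert resolve_tag("latest", tips) == resolve_tag("safe", tips)
assert resolve_tag("unsafe", tips) == "0xff"  # opt-in for MEV/bot users
```

The design choice is the one argued above: the average dapp user silently gets the confirmed head, while users who knowingly accept reorg risk can still ask for the absolute tip.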
Is there absolutely no situation where the execution portion of that block may come up again? Like, for example, could the next slot contain the same execution block? — No, no, because we have the RANDAO stuff in there. — Well, does the RANDAO change if there's an empty slot? — Oh, yeah... it depends on which RANDAO we use — the one before the current slot... — Yeah, I guess there's still the time... — Right, we have a timestamp in there actually, which matters. — Okay, so we couldn't reuse it then.

— Yeah, and just as a heads-up, we can probably do another five or so minutes on this, so let's try to finish everything in the next five minutes and then move on to the Felix document after. — Okay, cool. The processing flow is here to illustrate — there are a couple of sequence diagrams which illustrate how the block will be processed. I should probably add the forkchoiceUpdated stuff here for clarity. Now we come to the transition process, which is a very critical part of this API. All the transition stuff — everything that is marked with scope: transition, including some parameters of some methods — will be deprecated after the merge and could be removed from the clients in subsequent updates, once the merge has already happened. So we have here a couple of things that will help in the case where we would like to override the terminal total difficulty, or set the terminal proof-of-work block, which overrides the terminal total difficulty. So, these two methods. — Sorry — I believe the terminal proof-of-work block override would need an epoch as well, at which the override takes effect; otherwise everyone would fork at different epochs into the merge. — That matters for the consensus client, right, but what matters for the execution client is the block hash.
— Right, because it's just going to be waiting for that block hash. — Got it. — Right, and we should have the respective parameters on the consensus client side, because the consensus client rules manage this transition stuff. If there is a kind of emergency, and any of these parameters are communicated through some public channel, the clients should be restarted with either of these overrides, and they will be communicated down to the execution client once they are set on the consensus client side. More on the reasoning behind this is in the respective issue here. Also, by the way, I forgot to mention that this terminal total difficulty override will also be used for setting the terminal total difficulty in the normal case: once the merge fork happens, the terminal total difficulty gets computed by the consensus client and communicated via this method to the execution client, so it knows at which total difficulty it must stop processing proof-of-work blocks. This is all specified in the behavior section. I feel like we should stop here, and if we have any time left, we can answer questions.

— Yeah, and I guess one thing that might be worth discussing on Discord afterwards is whether we want another merge call next week to maybe finish going through this, before the Eth2 call. We don't need to agree on this now, but I think it's worth considering — definitely something we can do. Sorry, Danny, go ahead. — I'd say a call before the Eth2 call next week is totally cool. — I guess, yeah, maybe we can just figure it out now: does anyone here feel like that would not be valuable? — We might have some juicy sync API things to discuss after people have sat on sync for a week, too. — Right, right. So okay, let's do that: a call before the Eth2 call next week, for an hour. Yeah, cool. Thanks a lot, Mikhail, for sharing this.
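Before moving on to the sync document, the transition-config flow just described can be sketched as follows. The class, the setter name, and the terminal-block rule are illustrative assumptions: the consensus client holds the terminal total difficulty (possibly overridden out of band in an emergency) and communicates it down, and the execution client stops at the proof-of-work block that crosses it.

```python
# Sketch of the TTD hand-off described above. Names are illustrative
# assumptions; the terminal-block condition shown (block crosses the TTD
# while its parent has not) matches the behavior discussed on the call.

class ExecutionClient:
    def __init__(self):
        self.terminal_total_difficulty = None

    def set_ttd_override(self, ttd):
        # In the normal case the consensus client computes this value at the
        # merge fork and pushes it down via the same method.
        self.terminal_total_difficulty = ttd

    def is_terminal_pow_block(self, block_td, parent_td):
        """A PoW block is terminal when it crosses the TTD and its parent did not."""
        ttd = self.terminal_total_difficulty
        return block_td >= ttd and parent_td < ttd

el = ExecutionClient()
el.set_ttd_override(1000)                        # pushed by the consensus client
assert el.is_terminal_pow_block(1005, 990)       # crosses the TTD: terminal
assert not el.is_terminal_pow_block(1200, 1005)  # parent had already crossed it
```

The same setter doubles as the emergency override path: restart the clients with a publicly communicated value, and it flows down to the execution side the same way.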
And yeah, let's keep the conversation going on Discord over the next week. Felix, do you want to give us a quick rundown of your document on the post-merge sync? — I can do this. I was actually kind of hoping to be able to share the document on screen, but for some reason I can't seem to do that in here. — Okay, I should be able to share it. Give me a sec. — Yeah, I'm really sorry about this, but for some reason it doesn't always work. Anyway, while you dig it up, I can just start talking. How we're probably going to do this is: I'll just talk for a couple of minutes about the general idea behind this sync stuff and where we're coming from with it, and then after that we can discuss. I'll just tell you when you need to scroll — sorry for the indirection, but I think it's going to be the easiest. And Pooja also linked the document, so it's there.

Yeah, so maybe stop here, so we can quickly talk a little bit about the background of this. A couple of weeks ago we had our team meeting, and in that meeting I asked Peter a little bit about his ideas for the sync, because he had been busy thinking about it and trying out some stuff that could be implemented, and so on. He told me about his ideas, and we made some drawings and basically tried to get a good picture of it. Then Peter went on vacation, so right now I'm the guy carrying the torch forward, and I suspect that when he's back he will likely take over and keep working on this. So this document that I released yesterday is really only concerned with the sync. This is kind of important, because when I asked some people for review...
...they immediately jumped in and started discussing the API used in the document — whether it matches the real API that is going to be used between the clients, and so on. And it's not about that API. It's really only about a very specific part of the sync, which is exactly the part that isn't the processing of non-finalized blocks. So basically the main interest here is the part where the client is trying to sync up finalized blocks. For the beacon chain client to be fully in sync with the network, it obviously has to get to the real head of the chain, and at the end of the sync it will basically just perform the same operation that it would always perform if it were already synced, which is processing very recent blocks. So this document is not about that part; it's also not really about handling reorgs and things like that during later normal operation. It's really only about the earlier phase, where the client doesn't have the full chain yet and is just trying to get to a state where it can start processing blocks. That's the main focus here.

Then I wanted to quickly go over the definitions. Basically, what you can see there is that I define three operations, which are calls that can be made by the eth2 client to the eth1 client. You will see these calls all over, and it might be a bit confusing, especially for people who are very familiar with eth2, because these calls don't directly match the consensus engine API, and they also work a little bit differently from what you might expect. They will be changed later — I already have the feedback and they will be changed — but for now we have the two most interesting calls, which are final and proc.
final is basically for submitting a finalized block, and it's supposed to be called for all finalized blocks, not just when finalization actually happens — basically, every block that moves into the finalized state will have this call made for it. And then proc is for the non-finalized blocks. These calls are generally less important in the context of this document, but proc is still used somewhere, so that's why it exists; it's mostly about this one call, which is final. And I just want to make it really clear that it doesn't really match the semantics of the engine API right now. So now we can go to the eth2 section. — Does B refer to a block or a block hash? — No, it's a block. The terms are actually defined right above this, and it's probably a good idea to go through them quickly: in this document we have lowercase b for beacon chain blocks and uppercase B for execution layer blocks, and B is always a complete execution layer block. We also have H, which is for block headers. Block hashes actually never occur in this document — it's really only about blocks and headers, so just keep that in mind. And the subscript there basically just identifies the block.

So we can go to the eth2 section now. I describe the sync in two ways — there's the eth2 perspective and there's the eth1 perspective — but they happen at the same time, and it's important to keep that in mind as well. The idea here is pretty simple, and you can see it in the picture. In the first step, when the eth2 client starts, we assume here that it starts at the weak subjectivity checkpoint — you can see it, it's the block with the pink star.
It has a star because this is the initial point: the state of the beacon chain at this block is available, and it's a verified state, so this is why it has this star. And basically the idea is that it provides this block, actually only the header, the execution layer header of this block, to the eth1 client in the first step. And that's really it; it doesn't really need to do anything else. Then the idea is that from this weak subjectivity checkpoint block, the beacon client moves forward through the beacon chain up to the latest finalized block. It just has to assume that the chain is valid, because it cannot really verify anything against the execution layer; the execution layer doesn't know anything yet. So it just has to process it optimistically, by signatures or whatever. And once it reaches the latest finalized block, and this is step number three now, it actually provides the execution layer block which is embedded in this beacon block to the eth1 client. And now we can go a bit further down to the next part.

I'm sorry, I have a question. So the first 'final' call should be made with the latest finalized block?

Yeah, in this case, yeah; you had that question in your document. Basically it's just an assumption for now, which makes it easier to explain the procedure. And then, since it has provided the final block, it just keeps providing the finalized blocks as they happen, so it keeps following the chain and keeps providing the finalized blocks to the eth1 client. And it has to do this while eth1 is syncing, which will take a lot of time. Our assumption here is that it actually takes T beacon blocks' worth of time to synchronize, and this can be quite long.
And eventually, when eth1 is done, it will respond to one of these 'final' calls with the signal that it is synced to this particular block that was just provided. Once that's the case, we can basically go into the regular processing and start putting the non-finalized blocks through. So after this point, when eth1 says that it's synced up to the latest finalized block, it is ready to process non-finalized blocks, and this is basically the end of the sync. That's kind of it from the eth2 point of view, and then we can go to eth1. Are there any questions at this point?

I have one, regarding the payloads' execution after the sync is done. When we get this message, there are two options. One is that the execution client stores all the execution payloads, and when the sync is done it just executes them on top of the pivot block. The other option is that it communicates that the sync is done to the consensus client, and the consensus client replays these execution payloads. In that case the execution client doesn't need to store them. But should it store them?

So my assumption is very simple: basically, the execution layer shouldn't really store anything that is totally unverified. Even the eth2 client in this case cannot really verify these blocks, because it cannot process them; there is no state to process them on. So I felt like it doesn't really make sense even for the consensus layer to process or look at these blocks. It can always look at them later, when it's ready for it. These blocks don't need to be stored in the execution layer before it has reached the finalized block, because these blocks might be totally invalid, and they can be reorged out at any time.
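The eth2-side flow described here can be sketched as a small driver loop. To be clear, this is only an illustration of the scheme in the document, not the real engine API: the 'final' and 'proc' call names come from the sync doc, but the SyncStatus values, the mock execution client, and all other names here are invented for clarity.

```python
# Illustrative sketch of the eth2-side sync driver; not real client code.
from enum import Enum


class SyncStatus(Enum):
    SYNCING = "syncing"
    SYNCED = "synced"


class MockEth1:
    """Stand-in execution client: reports SYNCED once it has seen
    blocks_needed finalized blocks, simulating the sync completing."""

    def __init__(self, blocks_needed):
        self.blocks_needed = blocks_needed
        self.checkpoint = None
        self.finalized = []
        self.processed = []

    def set_checkpoint_header(self, header):
        self.checkpoint = header  # step 1: weak subjectivity header

    def final(self, block):
        self.finalized.append(block)  # every newly finalized block
        done = len(self.finalized) >= self.blocks_needed
        return SyncStatus.SYNCED if done else SyncStatus.SYNCING

    def proc(self, block):
        self.processed.append(block)  # non-finalized blocks, after the sync


def eth2_driver(eth1, checkpoint_header, finalized_stream, head_stream):
    """Provide the checkpoint, stream finalized blocks until eth1 reports
    synced, then push non-finalized blocks via proc."""
    eth1.set_checkpoint_header(checkpoint_header)
    for block in finalized_stream:
        if eth1.final(block) is SyncStatus.SYNCED:
            break  # eth1 has caught up to the latest finalized block
    for block in head_stream:
        eth1.proc(block)  # regular post-sync processing


eth1 = MockEth1(blocks_needed=3)
eth2_driver(eth1, "H_W", ["B1", "B2", "B3", "B4"], ["B5", "B6"])
```

Note that, matching the discussion above, the driver never hands unverifiable non-finalized blocks to the execution client during the sync; 'proc' only starts once 'final' has signaled synced.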
So why would it even care about these blocks in the first place? It should really mostly care about blocks that it can actually verify. This is why I didn't put in the option of providing the non-finalized blocks during the sync as well: you cannot do anything with these blocks during the sync, they are not processable.

Just because a block was finalized doesn't mean that the state transition was verified either. Today, clients operate on block headers and just look at difficulty, making the trade-off that when they get to the head, it was probably a reasonable head, because everyone else agreed and it had high difficulty. Following the beacon chain's view of execution is probably making a similar assumption.

We are assuming here that if a block was finalized by eth2, there is a pretty high chance that it has a valid state transition, because eth2 should not be finalizing invalid state transitions.

Right. The head, with respect to attestations beyond finality, also has a degraded amount of security, but it operates in kind of the same way.

Yeah, but this is just too complicated for me right now, so basically I don't care about this detail too much.

I care about the detail, because I think it simplifies things if the consensus side just continues to provide the data normally: here's what's finalized, here's what to process. Then the execution side, no matter what its sync process is, just ends up with a state at the end.

Yeah, we will get to the state. The way I want to do this is basically that I go through the document, and at the end we can discuss.
Yeah, so the eth1 perspective is kind of a mirror of what we just had. Basically, what happens is that it gets the signal to start the sync, this is step number one in the diagram, by receiving the first call to 'final'. Previously it has also received the checkpoint header, the H_W. Now the idea is very simple: it starts downloading the historical headers in reverse, and it does this until it reaches the genesis block. When it crosses the checkpoint, it also has a validation step where it checks that the downloaded headers match the checkpoint. This is just a safety net, so that we don't land on a totally invalid chain; otherwise we would have to go all the way back to the genesis block to find out that it's the wrong chain. That's why we have this intermediate check. We will obviously also verify the genesis, but if the chain matches the weak subjectivity checkpoint, I think we're pretty safe. It's kind of weird if that one's wrong.

Then, when we're done with the headers, we can actually download the block bodies in the forward direction; this is step number three in the diagram. By the way, the text that describes all this is below the diagram, just in case you're looking for it. Then you go through the block bodies, and here you have two options. You can either perform a full sync, in which case you simply process every block body as you download it and incrementally recreate the state. Or you do state synchronization, where instead of processing you just download the blocks, and while you're doing that you also concurrently download the application state. And we expect, because we're in the geth mindset, that this is probably going to be done with something like snap sync.
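As a sketch of the header phase just described, here is a toy version of the reverse download with the checkpoint safety net. The header shape and the fetch callback are assumptions made up for this example; a real client fetches from the network and verifies far more than a hash match.

```python
# Toy sketch of reverse header download with a weak subjectivity check.
def reverse_header_sync(start_header, fetch_parent, checkpoint):
    """Walk parent links from start_header down to genesis (number 0),
    raising early if the header at the checkpoint height doesn't match,
    so a bad chain is rejected without walking all the way to genesis."""
    headers = [start_header]
    h = start_header
    while h["number"] > 0:
        h = fetch_parent(h)  # a network fetch in a real client
        if h["number"] == checkpoint["number"] and h["hash"] != checkpoint["hash"]:
            raise ValueError("chain does not contain the weak subjectivity checkpoint")
        headers.append(h)
    return list(reversed(headers))  # genesis-first, ready for body download


# Toy chain: header N's hash is "h<N>" and its parent is header N-1.
chain = {n: {"number": n, "hash": f"h{n}"} for n in range(6)}
fetch = lambda hdr: chain[hdr["number"] - 1]

good = reverse_header_sync(chain[5], fetch, checkpoint={"number": 2, "hash": "h2"})
```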
So the idea is that you perform this sync, and what's really important to understand, it's also in the diagram, is that while steps two and three are happening, we are actually getting notifications about newly finalized blocks, and these notifications need to be processed. How they are processed is described above the diagram, sorry for the ordering. Basically the process is: if you receive a block that exactly matches the next expected block, then it is simply written to the database. It can also be used, for example, to retarget the sync to a newer pivot block, which is something that is absolutely required for snap sync. It is less required for, say, a full sync, but it is really needed for snap sync, so it also has implications on the sync. Then, if any other finalized block is provided, there are two options. Either the block is a historical block, in which case it was provided for whatever reason, and in this sync model we don't care about it, so we just say it's old or invalid. Or, if it is a future block, then we restart the sync on this future block. The idea for this, we will get to it later, is the restart handling of the sync: if the eth2 client was restarted and has now reached a different finalized block, then we just restart the entire sync procedure on the eth1 side and redo the missing steps.

So now we can go even further down. Oh wait, one second. After all of this is done, you can see that two blocks have stars in this diagram. One is H_G, the genesis block; its state is always available. The other one is the block B_{F+T}, which is basically the final block of the sync.
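The three notification rules above (the next expected block is stored and may retarget the pivot, older blocks are ignored, a future block restarts the sync) can be sketched as follows. The sync-state dictionary and the return values are invented here for illustration; the document only describes the rules, not a data structure.

```python
# Illustrative handling of 'final' notifications arriving mid-sync.
def handle_final_notification(sync, block):
    """Returns what the sync did with the block: 'stored', 'ignored',
    or 'restarted'."""
    expected = sync["last_stored"] + 1
    if block["number"] == expected:
        sync["db"].append(block)        # contiguous: write straight to the DB
        sync["last_stored"] = block["number"]
        sync["pivot"] = block["number"]  # snap sync may retarget its pivot here
        return "stored"
    if block["number"] < expected:
        return "ignored"                # historical: old or invalid, skip it
    # A gap means eth2 restarted on a newer finalized block: resync from there.
    sync["db"].clear()
    sync["db"].append(block)
    sync["last_stored"] = block["number"]
    sync["pivot"] = block["number"]
    return "restarted"


sync = {"db": [], "last_stored": 100, "pivot": 100}
r1 = handle_final_notification(sync, {"number": 101})  # next expected block
r2 = handle_final_notification(sync, {"number": 50})   # historical block
r3 = handle_final_notification(sync, {"number": 200})  # future block: restart
```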
When this block is reached, we have to guarantee that the complete application state is available, and this is why it has the green star in the diagram, to show that this is the block with the final state. In the case of a full sync we may actually have more state, and we will get to the question of state at the very end, but for now what you can assume is that after the sync, what is guaranteed is that this final sync block has its state available. And this is kind of it for the eth1 side, because after that it will simply receive calls to process non-finalized blocks, which can be processed on top of the state that is available. There may also be reorgs, but the reorgs and the sync are like two different things for now, so it's not really related; we're done with this as well.

Now we can quickly skip over the section which talks about client restarts. I don't really want to go into it too much, but I think this is going to be very important for the eth1 client authors to consider. Basically, here we mostly talk about how to handle the content of the database when there are multiple sync cycles, and how to efficiently reuse the information that was already stored in some previous sync. We have a couple of things here. One is the handling of the case where the chain that was previously stored is different from the one you are now syncing, or you are syncing on top of one that was already synced, and you need to erase the old information; and you can reuse parts of it by way of this marker system, which is described in the second-to-last paragraph.
It explains that basically, if we have previously synced an entire segment of finalized blocks, we can efficiently skip over this segment and not have to recheck every single block if we already have it, so we can skip a lot of work this way. I think it will be quite important to implement something like this, especially when we retarget the sync later, or when the eth2 client is restarted: every time you restart the sync, you need to figure out what to do with the stuff that's already in the database, and it's good if it can be reused efficiently.

So now we can get to the last part, which is the reorg processing. I think this is actually going to be the main subject of the discussion in the upcoming week or two. Basically, what we assume in this scheme is that the clients are supposed to start their sync on the latest finalized block, and as the finalized frontier moves, they have to retarget their sync to this block; so this state needs to be available in the peer-to-peer network in order to be downloadable. This is why we recommend here that the clients should keep this state available in their persistent store. I just argued that, since most eth1 clients are now moving to the model where they really only store one entire copy of the state, plus a bunch of additional information to facilitate reorgs in some way, we argue here that it is best to simply store the state of this particular block, because it is the easiest to handle. And we also describe that, in order to facilitate the reorg processing, it is recommended to keep the other information in main memory instead of persistent storage, because it just makes the reorgs a little bit easier.
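A minimal sketch of the marker idea, assuming markers are stored as (first, last) ranges of fully validated finalized segments. The document doesn't prescribe a representation, so this shape is purely an assumption for illustration.

```python
# Hypothetical segment markers: skip over already-validated finalized ranges.
def resume_point(markers, start):
    """Given (first, last) markers of already-synced finalized segments,
    return the first block number that still needs syncing from start."""
    cur = start
    for first, last in sorted(markers):
        if first <= cur <= last:
            cur = last + 1  # whole segment already validated: skip it
    return cur


markers = [(0, 1000), (1001, 5000), (7000, 8000)]
p = resume_point(markers, 0)  # the first two segments chain up contiguously
```

A restarted sync starting from genesis would jump straight to block 5001 here instead of rechecking the first 5001 blocks one by one.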
And finally we get to the part that should probably have way more text, and it's a bit of a controversial topic. It's not the issues section yet; for now we're still talking about reorgs. We have this thing with the manual intervention reorg. The issue is as follows. On the current Ethereum mainnet, there is an assumption in the clients, especially in geth, which is where we are coming from here, that there has to be a safety net for handling issues that arise in the live network. For example, if there is a consensus failure in the network, and we just had one, so it's a really good example, then it's good if there is a bit of a time window where reorgs are still possible. In geth this time window is defined to be 90,000 blocks long. So at the moment, geth will always ensure that it has the possibility to perform a 90,000-block reorg. The reason for this is not so much that these reorgs will happen all the time during normal operation; generally, it is not expected that there will ever be a 90,000-block reorg. The specific case where this is really important is when your client version had an issue in its processing, because in that case it will not be able to follow the new chain until you have installed a software update. Because of this, you have to have a bit of a time window to actually update your client, and when you do so, the client needs to be able to reorganize back, even if the wrong chain has also advanced by a significant number of blocks. And this did happen, even with the most recent consensus failure: some of the pools were still mining on the chain that had the bug in it.
So basically, if we weren't able to reorg out of such a situation, then you would have to resync, which would take even longer; it's a good idea to have this safety net, and we would really like to keep it. In order to provide it efficiently, what we recommend here is that execution layer clients should maintain backward diffs of the state in some kind of persistent store. So it should be possible for them to reorg below the latest finalized block, even if it is a rare occurrence. It gives you the safety net: if there was a problem, you can reorg out of it by applying these reverse diffs to your persistent state until you reach the common ancestor of the two chains, and from that point on you can process forward to get to the good state. We feel pretty strongly about this and would really like to recommend it, and as we will see just now, it is probably going to be required to do something like this anyway.

So now we get to the issues. The main problem that we discovered right away is that actually everything I just said is totally wrong, because finalization doesn't work the way we on the geth team initially understood it. We were not aware that in the eth2 consensus, finalization is something that can take up to two weeks in the worst case. What this means for us is that our current scheme of persisting the state at the finalized block won't work as is: we cannot just use the latest finalized block as the point where we store the state, because then we would have to keep up to two weeks' worth of state on top of it in some other store, and we feel like this is too much. So we have been thinking about solutions for the last couple of days, about how to really do it.
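A toy version of the reverse-diff mechanism being recommended here, assuming each block stores a map from state key to its previous value, with None meaning the key did not exist before that block. Real clients would store trie or flat-state deltas, so this only sketches the principle of unwinding to a common ancestor.

```python
# Toy reverse-diff unwind: roll the flat state back block by block.
def unwind(state, reverse_diffs, from_block, to_block):
    """Apply reverse diffs for blocks from_block down to to_block+1, newest
    first. Each diff maps key -> previous value (None = key was absent)."""
    for n in range(from_block, to_block, -1):
        for key, prev in reverse_diffs[n].items():
            if prev is None:
                state.pop(key, None)  # key was created in block n: delete it
            else:
                state[key] = prev     # restore the pre-block-n value
    return state


# Block 1 created 'a' with value 1; block 2 set a=2 and created b=9.
state = {"a": 2, "b": 9}
diffs = {
    1: {"a": None},          # before block 1, 'a' did not exist
    2: {"a": 1, "b": None},  # before block 2: a was 1, 'b' did not exist
}
unwind(state, diffs, from_block=2, to_block=1)  # back to the end of block 1
```

From the resulting state, the client would then re-execute forward along the other fork from the common ancestor.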
What we found is that we are probably going to have to adapt things a little and add this notion of the calcified block, which will usually be the finalized block but may also be an unfinalized block. Adding this calcified block will have a lot of implications on the sync, because it basically makes the whole thing a lot more complicated. I really invite you to look through these issues with us in the upcoming weeks and figure out how we can solve them in the best way. I don't have a solution for this, but for now we would really like this thing to work in the way that is described in the rest of the document. Unfortunately, because finalization can take so long, it means we have to do some more engineering to really figure it out. We have reached the end of the presentation now; if anyone would like to ask some questions, I'm really happy to answer everything.

Thanks, Felix. I have a question first: how much space do you think those diffs for these two weeks will take?

You mean the reverse diffs? Yeah, we don't really know; this is generally something that we need to discuss. Actually, I'm not sure; maybe someone from Erigon is here and can comment on how they handle reorgs. I think they might have something like this already implemented.

Hi, it's Sandra from Erigon. Yeah, we have reverse deltas, so we can implement reorgs by applying reverse deltas.

Yeah, so the question is just: what's the usual size? I guess it's approximately the same size as the forward diff.

I don't know off the top of my head, Peter would know, what the usual size of the diff of each block is.
I think it's manageable. It's definitely going to take some disk space. I don't really know what your window in Erigon is for these diffs at the moment.

It's configurable. We even have a mode, like an archive node, where we don't prune anything, so we have deltas for the entire history of mainnet, and it takes roughly one and a half terabytes for a full archive node running it. We can configure it for something like 90k blocks as well, and then the total database size will be about half a terabyte, but I don't know off the top of my head how much of that is the deltas, the changes.

Well, that's pretty good information; it kind of matches my expectations as well. Okay. So there you have it: if Erigon actually has this already implemented like this, then we can definitely say that this is a manageable approach with the reverse diffs. It does mean that reorgs below the point where we keep the live main state will take a lot longer to apply, because you have to adapt the state incrementally for each block. You can't really skip; I mean, you could store larger deltas, but then that would take even more space.

Yeah, thanks. The other comment that I have is regarding this period of non-finality. What could probably be used is for the consensus client to communicate the most recent finalized block and the block at the most recent epoch boundary, each time this boundary happens. That could be used to handle these kinds of non-finality periods if they get too long: the execution client can see these two checkpoints and decide what the distance between them is.
And I think it makes sense for this calcified block concept to use the blocks at the epoch boundaries at some fixed distance from the head. These are just basic thoughts on that. Also, we could use justified checkpoints, but I assume that if you have no finality, then we potentially don't have justified blocks either. Still, it could also be used: if the justified checkpoint is much closer to the most recent epoch boundary, it could also be used as a pivot block.

Yeah, the details there are interesting, but for us the main thing we want to achieve is that we need to have some kind of threshold defined. It doesn't have to be a very smart threshold; the main requirement is just that it must not be further than a couple hundred blocks from the head. Anything that satisfies that is good enough, and I suspect we're going to have to calculate this on both sides. So I think it would be easier to just make it a very simple definition. In my definition I just put: it's the finalized block, or it's some block which is 512 blocks away from the head if the finalized block is older than that; that just puts a bound on it. Either way, this change to the calcified block will have huge implications, because it basically requires that reorgs need to be handled in some way during the sync, and there are some cases where a reorg is not possible during the sync due to constraints on the state. We will have to think a lot about these cases. In general it's also a bit messy, because we're going to end up in a situation where, since the calcified block may not be final, it can happen that even during normal operation we will have to invoke this emergency reorg procedure, which will take quite a while.
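This simple definition can be written down directly. The 512-block bound is the example figure from the discussion, not a specified constant:

```python
# Sketch of the "calcified block" rule as described: the latest finalized
# block, unless finality has stalled, in which case bound it near the head.
CALCIFY_BOUND = 512  # illustrative figure from the discussion, not a spec value


def calcified_block(head, finalized):
    """Block number below which reorgs become an expensive, possibly manual
    procedure. Never further than CALCIFY_BOUND behind the head."""
    return max(finalized, head - CALCIFY_BOUND)


normal = calcified_block(head=10_000, finalized=9_800)   # finality healthy
stalled = calcified_block(head=10_000, finalized=4_000)  # long non-finality
```

In the healthy case the calcified block is just the finalized block; in the stalled case it is an unfinalized block 512 behind the head, which is exactly what creates the reorg-handling complications discussed next.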
Also, we will basically have to put really hard requirements on the clients to be able to satisfy any reorg between the finalized block and the calcified block. Obviously, how they implement this is up to them, but it would be good to have a recommendation that actually works. For now, what I know is that not all clients are able to handle such a reorg. For example, in the case of Erigon it's configurable, but it will no longer be configurable for anything after the merge, because you will have to provide a certain number of these diffs. So you basically have to restrict the user's freedom there, because otherwise their client will not be able to follow the chain correctly should this situation happen. So I think adding the calcified block has big implications on the clients. We were certainly not prepared for it; when we were discussing the sync initially, we were kind of thinking that we were going to get off really easy with the finalized block, but it seems like it's not so easy.

So I see the value, and the practical engineering need, for handling state in these times of non-finality and having things that do not go to the depth of finality. I do note that in the event that you don't have finality, and in the event there's some sort of attack scenario or network partition, if reorgs beyond the calcified state are very expensive, then all of a sudden that actually becomes a place to attack. If you can get the chain to flip between states that are beyond the calcified state, then you've now ground most clients to a halt, trying to do that expensive reorg operation. So I know there are very practical engineering considerations here, but there are also probably security considerations that need to be discussed in tandem.
I would also like to note that my first reaction was that we should rather change eth2 to make finalization a bit more reliable, but I have already heard from multiple people that unfortunately it's not going to be possible to change it for this. So we're going to have to find another solution.

Well, this is a fundamental consensus property; you can't have that. Maybe on loss of finality you could be in a different mode, I suppose, but if you want an available ledger, you don't have another choice.

Maybe one comment on that. One possibility, related to what Danny just said, would be to make reorgs beyond the calcified block manual, because I reckon when you are in that mode, you would probably still say: sure, reorgs that large can happen, but there's a high probability that it is actually an attack if that does happen. So you might actually want to use manual intervention to pick the fork in that case.

Yeah, we could definitely specify it like this. Felix and I discussed that maybe when you do trigger that type of reorg, the execution client responds and says: that's really expensive, are you sure? And then that can either be confirmed by manual intervention, or the beacon node could even try to get better information before it triggers such an expensive reorg. So there are maybe a lot of different trade-offs on that spectrum.

That also raises a question: when you say really expensive, does that mean seconds, minutes, hours? That makes a big difference.
Again, it depends on the implementation of the state. Since basically only Erigon has this exact system implemented right now, and what I wrote was kind of inspired by how I think their stuff works, they might be able to give some context on how long it actually takes to reorg, for example, 10,000 blocks.

It would be great to get those numbers.

But again, it's not really going to be a guarantee, because it is highly dependent on the actual client implementation, how it is able to do this processing, and what's going on in the client at the time. We cannot really say. I think it's definitely not going to be on the order of seconds, because reorganizing many blocks in this way basically just means a ton of writes to the disk. You can always cache some things and optimize some things, and it might be that we eventually get to the point where this stuff is actually fast, but we can't say for sure. I would just really like to assume for now that it's an expensive operation.

We can probably generalize that rewinding N blocks is similar in time to going forward N blocks, so if a block is processed in 100 milliseconds, that's probably your rough estimate.

It's not the same, right? You only need to write the diffs; you don't need to execute anything. In executing a block you always need to read and then write depending on what you just read, whereas here you would already know exactly everything you have to write. So I don't know how this is implemented, but I can imagine that it could be a lot faster.
Right, there's definitely no EVM processing involved.

I was just talking about the EVM processing.

And what I'm saying is that there are also no round trips involved: you could tell the database, here's everything you have to write, do it.

Going forward would usually include diffs that were in memory, whereas applying this reorg backwards is going to be reading from disk. I think that's one of the main time considerations for doing reorgs past the calcified block.

And in the worst case, we have these 90,000 blocks and we'll have to re-execute them from the latest finalized checkpoint. So it just depends on the time of the execution, but it's just a few hours by my estimate.

Yeah, and Marius also has a good point in the chat. We already have this kind of optimization implemented in geth as well, and it's definitely applicable here. If you need to do a really large backward movement on the state, it is possible to minimize the number of writes, because you can combine multiple diffs into one in memory before writing anything. Doing this usually saves quite a bit of time, because the state has a kind of high turnover, so you may be able to skip quite a few operations: instead of writing out every single block backwards, you can skip over some, and the diffs kind of cancel each other out; that's usually the case. So that's something else to keep in mind. I don't think we have to discuss the details of this too much. If anyone has more high-level questions, I think the scheme is pretty easy to understand; I don't think there's a lot of new information here, but if anyone has something, we can answer it.

I want to add just one comment. I think that as we consider this design, it's important to consider the following.
We're not writing a very ad hoc communication protocol between consensus and execution for this particular sync; instead, we are writing something that generically provides adequate information to support the underlying sync method, so that we don't design this to be pigeonholed to the particular thing we're dealing with. I have some ideas for that, and I think that generally what you've written can be adapted to it, but I just think that's a good design goal.

Yeah, so for now I will try to keep the operations that are being used there a bit abstract, because I think it's going to be really easy for us to later map them onto the real operations. You guys have a lot of good ideas; I'll check out the API design document, as there is a lot of information available from the eth2 node that can also be used during the sync. And for sure we will have to make use of it when we redesign this for the calcified block; for example, we will likely need some notion of what the current head of the chain is, and things like that.

Yeah, that would work by sending all the procs during the sync, as well as finality, instead of just sending finalized information.

Yeah. Cool. Any further questions? We're kind of at time for this, because we have a few more things on the agenda and only 10 minutes. I guess we can continue discussions about this in Discord, and perhaps on the merge call next week. I'm kind of thinking maybe Mikhail's doc will take the full hour, but I'm not sure if doing half on the consensus API and half on this makes more sense.

I don't think we're going to have super big updates for it next week. Okay.
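Returning briefly to the write-minimizing trick raised a moment ago, combining several reverse diffs in memory before touching disk: assuming each reverse diff maps a state key to its prior value, and that diffs are applied newest-first, the oldest diff's value for any key is the one that survives, so the merge is a simple sketch like this.

```python
# Sketch of squashing consecutive reverse diffs before writing to disk.
def squash_reverse_diffs(diffs_newest_first):
    """Merge reverse diffs so that applying the merged diff once equals
    applying each diff in newest-to-oldest order. The oldest diff's value
    for a key wins, since it holds the value from furthest back in time."""
    merged = {}
    for diff in diffs_newest_first:
        merged.update(diff)  # later entries in the list are older, so they win
    return merged


# Key 'a' was changed in three consecutive blocks; one disk write replaces three.
squashed = squash_reverse_diffs([
    {"a": 3},          # newest block's reverse diff: restore a=3
    {"a": 2, "b": 7},  # previous block: restore a=2 and b=7
    {"a": 1},          # oldest block: restore a=1 (this value survives)
])
```

Because state keys have high turnover, intermediate values for the same key cancel out, which is exactly the savings described in the call.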
Yeah, I don't think it makes sense to discuss it over and over for now, because it's basically just a matter of me updating it for this idea with the finalized block, which I will do at some point next week. It's valuable for Erigon, who I think generalize over different sync protocols, full sync, and these rewinds, to think about how they're going to do it in this context and see where the requirements overlap and where they differ. Right, yep. I would really like some more feedback, especially from the eth1 client authors. This is written from the geth perspective; we know we can implement it like this in geth, but how it's going to work for everyone else I don't really know. This is specifically about the later section on reorg processing and state availability; this stuff really touches on the core aspects of the client, and we hope it's something that can be implemented by everyone in some way. I think it's more a matter of agreeing among the eth1 clients how we're going to do this. And so it's important for you guys to check it and think about whether it makes sense for you. Yeah, so let's definitely discuss it in two weeks, once the different client teams have had time to have a look and you've made the updates, Felix. Thanks a lot for sharing; this was pretty valuable. And the last big thing we had on the agenda, which, apologies, we'll probably have to do a bit quicker and can also discuss again in a future call, is EIP-3756, the gas limit cap. lightclient, you put this together; do you want to take a minute or two to give the context and a high-level overview? Sure, I can keep it pretty short as well.
So, setting some sort of in-protocol limit on the gas limit has been something that people have wanted to do for a while. It was originally a part of 1559 and then removed, and then in March of this year there was EIP-3382, which proposed hard-coding the gas limit. And I think the main reason 3382 failed was that it didn't allow miners to reduce the gas limit in the case of some sort of attack on the network. Building on top of that EIP, the next plausible solution would be to just have an upper bound on the gas limit, and that's what 3756 is. It caps the gas limit at an in-protocol defined amount, and it still allows miners to lower the gas limit in the case of some sort of attack on the network. And the main reason that you want to cap the gas limit is that right now block proposers have full control over what the gas limit is, and this allows them to bypass the EIP process and the All Core Devs process in making decisions about the protocol that could negatively affect its decentralization and security. Right, and one bit of context I would add is that when we had the discussion around, I forget the number, but the previous EIP to cap the gas limit, one of the arguments against it was kind of backwards looking, saying miners have historically always been aligned and have done a good job, so it doesn't make a lot of sense to remove this degree of freedom from them. And I think over the past couple of months we've seen that there can be external incentives, like tokens that pop up to game this, especially as block space on Ethereum becomes more and more valuable. So I think the reasoning that miners have always been good in the past might not hold forever looking forward, if there are more and more incentives for people to try and influence that process.
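The rule being proposed can be sketched as a small addition to gas limit validation. This is a rough illustration, not the EIP's normative text: the 30,000,000 cap is the value I understand EIP-3756 to propose, the 1/1024 per-block adjustment bound and 5,000 minimum are the existing protocol rules, and the function name is made up.

```python
GAS_LIMIT_CAP = 30_000_000           # upper bound proposed in EIP-3756 (assumed value)
GAS_LIMIT_ADJUSTMENT_FACTOR = 1024   # existing per-block adjustment bound
GAS_LIMIT_MINIMUM = 5_000            # existing protocol minimum

def is_valid_gas_limit(parent_gas_limit: int, gas_limit: int) -> bool:
    """Check a header's gas limit against parent, per current rules + EIP-3756."""
    # Existing rule: the limit may move by strictly less than parent/1024 per block.
    max_delta = parent_gas_limit // GAS_LIMIT_ADJUSTMENT_FACTOR
    if abs(gas_limit - parent_gas_limit) >= max_delta:
        return False
    if gas_limit < GAS_LIMIT_MINIMUM:
        return False
    # New rule from EIP-3756: a hard in-protocol ceiling.
    # Proposers can still vote the limit *down* below the cap as before.
    return gas_limit <= GAS_LIMIT_CAP
```

For example, a block keeping the limit at the cap stays valid, while one nudging it above the cap becomes invalid even though the move is within the 1/1024 adjustment bound.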
I guess, what are people's general thoughts on this? A couple of hands up. Alex Vlasov, I think you were first. Yeah, well, my question is more about the consistency of this EIP, which was proposed in a very short form without any estimates on state growth, or on which factor actually affects the security of the network most. I couldn't find answers to a number of questions in any related work or blog post or whatever. Is it indeed the latency of disk access that is the most vulnerable point in processing a new block? What is the state growth rate, and what can be called an acceptable state growth rate? And what is the state growth rate per client? Because, as I was quite surprised to hear on a merge call, and I mean I cannot make good contributions there but it was still very interesting to me, it was kind of implied that clients will behave in a certain way regarding how they store state data. And that implies that in the future all the clients will behave in a similar way, and potentially it will bring everyone's state down to the size of whichever client's state is smallest at the moment. So, in its current form it was very hard for me to react to this in any way, so I would just ask to extend it. And the analysis would also obviously affect the number chosen for the actual limit. But I'm just curious, and for consistency: maybe the analysis just isn't in the EIP but already exists somewhere, and it would be great to see it. Yeah, I'll just get to the other comments; lightclient, do you want to respond? I was just going to say briefly, we can add more to the EIP. I think there are a lot of benchmarks that have been done generally, and we can add more things to it; it was just something to propose the idea quickly. Cool. And yeah, just because we're almost at time.
There are three more comments; I think we'll take those and then wrap up. I think it was Ansgar, Andrew, and Marius, so Ansgar, do you want to go first? So, I only have a specific brief question. As you're saying, the motivation here is to make sure, as block space becomes more and more valuable, that miners don't succumb to the temptation at some point to abuse the control they have. But the situation is that right now we plan for the next hard fork with any features to be the merge, at which point it won't be miners anymore. So I'm just wondering, is this still a concern for proof of stake? And if not, if this is really mostly about miners, then in case we end up with a December ice age fork and a general delay of the merge or something, would this EIP basically be included in that ice age fork? Because otherwise it seems to me to not really apply. I would say that this mechanism is ripe for abuse by any set of actors that can control it. And I'm not claiming one way or the other on this, I don't really want to get into that, but it is still contextual: if it's an issue with miners, it's an issue with stakers, and if there are mechanisms that can be designed to incentivize miners to do certain things, that same exact mechanism can be used on stakers. Yeah, and regarding including it in an ice age fork: the way it's written, it could also be included as a soft fork before the ice age. Cool. Um, Andrew? So, I think the weak consensus in the Erigon team is that we are against this change, but we are not going to die on this hill. Personally I think it's bad for two reasons. First, if it requires a hard fork, then it will distract us from the merge.
And second, currently the fees are very high on Ethereum, so if the miners or the validators raised the gas limit reasonably, then it puts some pressure on clients to perform and maybe make some architectural changes to keep up, and I think that's a good thing. Thanks for sharing, and Marius, I think you had a comment also. Yeah, I would just like to react to that: I think it's a bad argument that we would force current clients to change their architecture by increasing the gas limit. I think all current clients are already looking into changing their architecture in similar ways to what Erigon has done. So people are already looking at it; putting pressure on the teams is just not going to increase the speed at which this gets implemented. And the other small comment I have is that currently, in my opinion, it's not about state growth with the current gas limit, it's about DoS. All of you are aware, but there are DoS vectors; we found some DoS vectors recently, and it's pretty hard to measure this. So yeah, I think this parameter is extremely dangerous and it should not be in the hands of people who are not familiar with what's really going on in the network. Thanks for sharing. Yeah, just because we're already past time, we can obviously continue this conversation on Discord and bring it up on a future call; I think we definitely have some areas to look at. I think, Pooja, you had put a couple of EIPs on the agenda, 2364 and 2464, which are basically the eth/64 and eth/65 protocol EIPs. As I understand it, the issue is that both of those are shipped but the EIPs are still in draft, right? So the main issue here is EIP-2481, which is for eth/66; that is in last call, and actually the last call duration has also passed, and we would want to move it to final status.
But the problem is that that proposal requires eth/65, which is EIP-2464, and eth/65 requires eth/64, which is EIP-2364, and both of these proposals are still in draft status. So we would want these two proposals to move to final before we could move EIP-2481 to final status. I'm happy to make the request for a status change; it's just that we wanted to make sure the geth team is aware, in case anyone from the geth team wants to volunteer and do it. Fair enough. And if not, then we can do it, and we would just need the authors' approval. So thanks for the initiative. What I can say is that the last time we tried to do something like this there was a huge amount of backlash for some reason, because people came on and wanted to actually see some justification for these EIPs even though they are like four years old or something. I don't really know; this is definitely something that I would like to avoid. So I feel like these EIPs... we don't even use eth/64 anymore. It's already in the past; it's basically already happened. There is really no reason not to move it to final, because it's not even supported anymore. I mean, the mechanism in it is obviously still supported, because it's carried over into the new protocol versions, but the protocol version itself has already advanced beyond it; it's already past its end of life now. So from this point of view, moving these EIPs to final is totally justified, and I really just want to avoid getting into the same kind of weird discussions we had last time, where someone then wanted to, you know, for reasons... Yeah, yeah. So let's just try it; I think Peter is the author on both, so he'd have to actually accept the change, if I understand correctly. Yeah, I'll make a pull request to move the status from draft to review; that would be the first step. Guys, can you just commit it to final?
I mean, okay, we have some EIP editors on the call. I have just pasted a link to the earlier pull request which we created for EIP-2481, where we received some comments from EIP editors mentioning that these two proposals should be moved. So if we can move them directly to final, I'm happy to do that. I can answer that quickly. I'm not a fan of skipping straight to final, because it encourages people to drag their feet: if you drag your feet long enough, eventually you can just avoid the bureaucracy. And as much as I hate bureaucracy, I don't want to create perverse incentives for people to just not go through the process, knowing that if they wait long enough they can eventually avoid it. So I want to avoid skipping review. So can we please try to avoid putting it in review? I mean, it's okay if they move to last call or something, but review, seriously, is inappropriate, because what is possibly going to happen there? Let's just put it to last call and then maybe even wait if we have to; I don't know, it's a special case. Let's talk about this on Discord, only because I suspect 90% of the people remaining on this call probably don't care. Yeah, okay. Um, okay, so last thing, and that was it in terms of content: next Friday, at the same time as All Core Devs, 14:00 UTC, we're going to have another call to discuss wallet support and infrastructure support for 1559. So if you are an application or wallet developer, or just generally interested in broad adoption of EIP-1559, you can join; we'll post the link in the all-core-devs channel, and there's an issue on the ethereum/pm repo for it. And yeah, that's pretty much all we have. Thanks everybody; I appreciate you staying. Real quick, one last thing: if you are an application developer, please update your web3.js.
Oh, it looks like it may be causing some issues with MetaMask; it's supplying some different priority fees which are incorrect, so make sure to update to the latest version. Right. Yeah, if it is a pre-1559 web3.js version, it doesn't return an EIP-1559 transaction, which means the gas price gets used to set both the max fee and the max priority fee, and that basically causes overpayment for some users. Thanks for the reminder, Trent. Cool. Well yeah, thanks a lot to everybody. See you in two weeks. Thank you.
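To make the overpayment mechanics concrete, here is a sketch of the EIP-1559 effective price calculation with illustrative numbers (the fee values below are made up, and the function name is mine, not web3.js's API):

```python
GWEI = 10**9

def effective_gas_price(base_fee: int, max_fee: int, max_priority_fee: int) -> int:
    """EIP-1559: the price paid is base fee plus the tip, capped by max_fee."""
    priority = min(max_priority_fee, max_fee - base_fee)
    return base_fee + priority

base_fee = 30 * GWEI
legacy_gas_price = 60 * GWEI

# Buggy pre-1559 behavior described above: the legacy gas price is used as
# BOTH maxFeePerGas and maxPriorityFeePerGas, so the tip swallows the whole
# headroom above the base fee and the user pays the full 60 gwei.
overpaid = effective_gas_price(base_fee, legacy_gas_price, legacy_gas_price)

# Correct 1559 transaction: a small explicit tip, e.g. 2 gwei, pays 32 gwei.
fair = effective_gas_price(base_fee, legacy_gas_price, 2 * GWEI)
```

With these numbers the buggy path pays a 30 gwei tip to the miner instead of 2 gwei, which is the overpayment users were seeing.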