Okay, so I think we should start. Today we have this doc to walk through. About the format of the call: we will just go through these documents, with some breaks for discussion and so forth. I would like to directly edit and add more info to these documents right away. So if there is a rough consensus around some piece, whether it's an option or won't go into the standard of the API eventually, we will just make the comments and drop them in. The more important part, I think, will be to get more input on some stuff that might be missed here, like the sync process. We will stop for it in the middle; I guess sync is not explicitly mentioned here as a section, but it should be. That's the intro. Please don't hesitate to stop me at any point to make a comment or ask a question. If anyone raises a hand I might not see it in time, so just interrupt me and go ahead with your stuff. Thanks for agreeing to this edit mode, and thanks everyone for coming.

So, the first thing we will start from is the title. There has been a discussion in the Discord, and I think we should rename this API to the engine API. The reason behind this is that we have the consensus API, which is the API exposed by the software on the consensus layer — the beacon node API, used by users and by validator clients. We also have the execution layer API, the Ethereum JSON-RPC, which has several namespaces; we are not about to rename those namespaces, but in general it is going to be referred to as the execution API. And we have something in between those two layers, which is more relevant to developers than to users — the engine. I think it's reasonable.
The choice here is to avoid confusion. "Engine" alone is also a bit confusing, because client developers may say that they have a consensus engine in their architecture design and also an execution engine, but it's still less confusing than the alternatives. The alternative is maybe "execution engine API", with "engine API" for short as the namespace, but then that kind of conflicts with the execution API. So, does anybody oppose starting to use this term for the set of methods that is explained in this doc and that will go into the standard?

Can you drop a link to the page in the chat? — It's already there. — Oh, I missed it. — It depends on when you joined whether you can see it. — Yeah, thanks.

Okay, so I think we're good with this name. Let's move on. This is the design space; we will shape it, add some stuff, remove some stuff. We want to reuse as much of the existing JSON-RPC implementation as we can, which is obvious. The reason behind this: the security of this API is critical, and it's been agreed that it will be exposed on a separate port. I don't think we should stop here for any discussion. Also, we are picking a new namespace name. And we keep the idea that was proposed by Proto to try to reuse this set of methods for the execution clients that will be used to run rollups. How exactly, and what we can unify and reuse there, will be figured out later, but let's keep it in mind; I think it's pretty reasonable.

Okay, so the encoding part. I don't think we should stop here either; it follows the same logic: let's reuse as much of the existing JSON-RPC as we can, so we'll use the encoding that is used by JSON-RPC.
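To make the "reuse the JSON-RPC encoding" point concrete, here is a minimal sketch of a request envelope in the proposed `engine_` namespace. Only the envelope shape (`jsonrpc`/`method`/`params`/`id`) comes from the JSON-RPC 2.0 spec; the method name and the parameter fields below are hypothetical illustrations, not the agreed standard.

```python
import json

def engine_request(method: str, params: list, request_id: int = 1) -> str:
    """Encode a JSON-RPC 2.0 request in the (proposed) engine_ namespace."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": f"engine_{method}",
        "params": params,
        "id": request_id,
    })

# Example request; field names here are illustrative only.
req = engine_request("preparePayload",
                     [{"parentHash": "0x0", "timestamp": "0x61ade000"}])
```

The point is simply that existing JSON-RPC client and server libraries can be reused unchanged for this API.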
It might be inconvenient in some places, but anyway, we have libraries, we have implementations that handle this. So, encoding settled.

The minimal set of methods has been derived, and a bit extended and modified, from what we had during Rayonism. Let's move through all of them. There is assemble payload — just the new name; "payload" is more specific and closer to what we actually have. It just builds the payload and returns it back to the consensus client. The addition here, compared to the previous iteration, is the random: we're passing the RANDAO mix to the execution engine so that it can put it into the block. There will be a corresponding EIP describing the specification on the execution layer side for this. What was added recently is the coinbase. There are comments on it, and there is a kind of proposal raised by Micah — sorry, Martin.

Yes, so regarding this method assemble payload, which I understand to be basically "please collate the block for me and put together the transactions". The way it works now is that as soon as we have a new head, we want to start mining immediately, so we start working on an empty block. Then, while that is happening, the client is actually trying to find the best transactions. And for MEV-geth, they're working on finding the best sequences of these bundles, and they keep working on it and improving it, from time to time updating the work package. I'm thinking that having just one method called assemble payload might not really cover that. It leads to a situation where the eth2 client — sorry for using the term — calls this API and needs to make a subjective decision: can I spend 100 milliseconds on it, can I spend 200 milliseconds on it? So I'm thinking it might be better to first tell the execution layer: here's the latest tip.
And yes, you are the one I'm going to ask to deliver a new block on top of this in a little while — because I think we know, when we have the new block, that the next one will be our slot. Later on, the eth2 client could ask: could you give me the new payload now, for the block that I already told you to start working on? Something like that — some kind of interactiveness might be useful.

You basically have two points in time: one is when the consensus client knows all the information that will go into the next block, but it's not the last point in time where it's reasonable to receive it. And you have another point in time later where it's like: okay, now is the last chance to give me a block, right?

I don't think that was quite what I meant. I mean, if we're given a one-off chance to produce a block on top of some other random block, there's going to be a startup cost to just figure out the order and stuff, and then there's this iterative process of which transactions I should fill it with. And if the most pressing issue is that we need to deliver something now, then obviously we're going to deliver an empty set of transactions.

Right. So I guess what I'm thinking is: we can't start building the final block until we have those four things — the parent hash, timestamp, random, and the coinbase. Coinbase presumably you might know long in advance, but the timestamp and the parent hash — the parent hash definitely you don't know until some short period of time before you're going to build the block, because it's relative to the slot; it has to be fixed to the slot. The timestamp is a function of the slot, so that one you know.
Sorry, but the parent hash you know is going to be from the prior slot. The block is coming in, attestations are coming in, likely in the four-to-six-second time frame of the prior slot — certainly, in almost all cases where you're going to propose — so you have that information. So my gut here is: maybe you have the instantaneous method, get block, which is the very basic method, and then you have an alternative, optional method where you can pretty much signal "start work". If you signaled start work before, you're going to get a better block when you call get block; and if you call get block without having done that prepare block, then you're going to get something more instantaneous. Or a client under the hood could just always be trying to make a block from the previous tip and not use that extra method; the extra method is maybe just an optimization.

Yeah, and is this optimization really critical? I don't know. For me, the strategy would be that the consensus client just starts to call assemble payload in advance, calling it several times, and takes the latest block that it got from the client when it is time to propose the block.

So you're saying the alternative could be — and I think we had this conversation many months ago — once I know I'm going to want a block, I just call assemble block over and over again, and the execution engine knows that it should keep trying to make a better block, because you might call it again.

It just makes a block. It makes it upon request, so it's pretty simple for the execution client: it received the message, it builds a block.

How resource intensive is it to try to build that optimal set of transactions?

There will always be improvements that can be made, because the pending pool is constantly changing.
Even in the last millisecond you can get a new transaction that totally changes what the optimal block is. So the optimal strategy for a block preparer or block builder is to always try to build a better block at every opportunity: as fast as you can, you build better and better blocks. As long as transactions keep coming in, which basically always happens, there will always be some improvement you can make.

And then just one more note. Micah, you mentioned that this method could be called multiple times and we could get incrementally better blocks. But then how should the execution layer know that it can stop working on this, because it's officially past the timestamp?

It may just finish the current round of work. I mean, each request will have a corresponding response; if it's not requested, the block is not being built, right?

But if the work is scoped to the lifetime of the RPC request, then you're saying it should not continue working after the RPC request ends? I thought you meant it would.

Right, so actually we had this conversation with Péter months ago, and I think there are two sufficient answers. One is that you do an explicit prepare and then you get it when you want it. The other is: you call assemble payload whenever you're ready and you get a response, but the timestamp is in fact in the future. The timestamp is for the slot that you're going to propose at, which is either immediately now or, say, four or six seconds away because you know you're about to go. With six seconds of lead time, the execution layer can see that the timestamp is still in the future and that you're likely to call this again; that could be its signal to continue to do work, or not.
So I think you can either leverage this method that way, or you can do an explicit prepare and then a get.

Now I get what two alternatives we have. One: the execution client is responsible for building these blocks and storing them, returning the latest one upon an additional request. Or we move this responsibility to the consensus client, which would do the same thing, actually. I think if we have this functionality already implemented in the execution client, it makes sense to keep it there and just provide the additional method. That was what was suggested by Martin, right?

Yeah. And, Martin, there's functionality to kind of always be building the best block, but because we now know when we're actually going to want the best block, we might not always be building it and instead build it on demand. Right? Because you could just keep it as is and always be working on a pending block.

Is my understanding correct that the consensus client has a single point in time where it submits a single block to the network, and it will never submit a second one?

Correct, for that slot.

Right. So there is at least a point where the consensus client knows: now is the last chance for a block. How much latency is acceptable for that communication? Between the time the consensus client says "okay, I want to submit a block to the network right now" — if it takes 200 milliseconds to actually get that response from the execution engine, is that okay or is that going to be a problem?

They're much better off broadcasting immediately at the start of the slot, rather than starting to do their work at the start of the slot.

Okay, so the incentives are such that you know what you need to be preparing, so you should be preparing, and then just actually get it out on time.
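The prepare/get split being discussed can be sketched as follows, from the consensus client's point of view. Everything here is an assumption for illustration — the method names, the payload-id handling, and the field names are not the spec, just one way the two-phase interaction could look.

```python
# Hypothetical two-phase flow: signal "start work" early (prepare), then fetch
# whatever the engine has built at proposal time (get).

class ExecutionEngineStub:
    def __init__(self):
        self._building = {}
        self._next_id = 0

    def prepare_payload(self, parent_hash, timestamp, random, coinbase):
        # Start building on top of parent_hash; return a handle for later.
        payload_id = self._next_id
        self._next_id += 1
        self._building[payload_id] = {
            "parentHash": parent_hash,
            "timestamp": timestamp,
            "random": random,
            "coinbase": coinbase,
            "transactions": [],  # improved over time by the builder loop
        }
        return payload_id

    def get_payload(self, payload_id):
        # Deliver whatever has been built so far, immediately.
        return self._building.pop(payload_id)

engine = ExecutionEngineStub()
pid = engine.prepare_payload("0xparent", 1_700_000_000, "0xrandao", "0xfee")
payload = engine.get_payload(pid)  # called later, at proposal time
```

The key property from the discussion is that `get_payload` never blocks on further building: it returns the best payload available at that instant.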
Because I think the prepare payload plus get payload pair is a lot clearer. It means: prepare — you can start; get payload — you just deliver what you have. If we have just assemble payload, then it becomes this business of trying to measure how long have I spent doing this, should I deliver now or wait a bit longer, blah blah blah.

I agree. It adds some statefulness here, but as long as get payload can be operated without statefulness — without a previous prepare — and would just give you something very quickly, I think that's a reasonable trade-off.

Also, one question here: how would the client decide when to stop building one block? If one more transaction arrives, does it build yet another block, and so forth?

As long as you haven't done a get for the prepare, it would continue to try to build.

Maybe it goes up to one transaction at a time, am I right? I mean, it should be relatively cheap to execute one more transaction.

Yeah, I don't know. My question is how often a new version of the block gets built. If we send this prepare payload, how often will a new version of the block be built by the execution client? What is the condition to start building a new one, or a modified one? If I understand correctly, a new transaction arrives and a new block is built with a new set of transactions?

Yeah, it depends on the particular miner and each miner's strategy. I believe — and Martin, correct me if I'm wrong here — that geth just builds a new one every three seconds, while more work might be going on elsewhere if you're running MEV-geth or something.
Again, it depends on the particular miner, but there are miners out there that are building constantly. With this MEV stuff they're working on, you're actually getting full blocks from third parties that you then compare against the existing block you got from some other third party. I think from a design standpoint we should assume blocks are basically being constantly produced at maximal velocity, through some potentially distributed network on the back end. So from the consensus client's perspective, you know that on the other side of this API some amount of work continues to be done right up until you call assemble block — someone's working real hard back there is what I recommend designing around.

Yeah, I think we should expect that. Regardless of exactly how it happens right now, that's the right model to have. Agreed.

Yeah, so that was the reason for the question of whether we might want a new block-updating strategy in proof of stake. Okay. So the consensus engine, if I understand correctly, doesn't actually care about any of those intermediate blocks, right? It only cares about the very last one. It doesn't need to poll in between, because it knows exactly when it wants the final result, or the best result. I would say that three seconds might be a good interval for proof of work with its block times, but here we have much less — like half that.

Right, and miners are potentially not even following the same strategy, so I think you should expect a lot of innovation at that layer, and expect potentially continuous and distributed work. And I think that's key to keep in mind: the execution client you're talking to may just validate the block at the end — it might not actually be building the block itself; it may have that work farmed out to third parties.
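The "always be building" model described above can be sketched with a toy loop: between prepare and get, the builder keeps replacing its pending payload whenever a more profitable transaction set appears. The fee-sorting "strategy" below is a deliberately naive stand-in for real packing and MEV search — the names and logic are illustrative assumptions, not any client's implementation.

```python
# Toy builder loop: start from an empty block, then improve it every round as
# new transactions arrive in the mempool.

def better_of(pending, mempool):
    """Return the higher-fee set; real builders run a packing/MEV algorithm."""
    candidate = sorted(mempool, key=lambda tx: tx["fee"], reverse=True)
    if sum(tx["fee"] for tx in candidate) > sum(tx["fee"] for tx in pending):
        return candidate  # found an improvement; replace the pending block
    return pending

mempool = [{"fee": 5}, {"fee": 2}]
pending = []                       # start building on an empty block at once
for _ in range(3):                 # a real loop runs until getPayload is called
    mempool.append({"fee": 7})     # new transactions keep arriving
    pending = better_of(pending, mempool)

total_fees = sum(tx["fee"] for tx in pending)
```

The loop terminates only when the payload is fetched, matching the point above that there is always some improvement left to make while transactions keep arriving.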
Okay, yep. On the coinbase argument: is it possible to have it only as a hint, so the validator can suggest the coinbase but the execution engine can decide to use a different one? There are some scenarios where we don't have a one-to-one match between the validator on the beacon chain and the eth1 operator — scenarios where you have a market in which the block builders are independent and can have a different strategy for the coinbase and for splitting the cash flows of transaction fees and rewards.

Yeah, that's a good question. I think it should be an option — overriding the coinbase, or not doing the override. I mean an option in the command-line interface of the execution client, expressing that the coinbase will be overridden with the address configured on this node, even though one is sent over this method. I don't know if it's even appropriate to override it, but probably in some cases it is necessary to have this kind of option.

I think his question is in the other direction. You're saying there may be a default coinbase in the execution engine, and this call will override it. He was saying that the execution engine could say "I'm not going to listen to you" and use its own — which I don't think should be designed in.

I would have thought that it was optional in both directions: it's reasonable for someone running an execution engine to say, actually no, I'm owning the coinbase, I want these rewards, because I'm running the engine and paying the costs for it. It's also reasonable for a beacon node to not know what the coinbase should be, so it doesn't supply one and expects the execution engine to provide a default. I'm not strong on that; it just seems to provide flexibility that makes sense when working through the use cases.
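The "optional in both directions" idea can be sketched as a small resolution function: the beacon node may or may not suggest a coinbase, and the execution engine operator may or may not force a local one. The flag name and the precedence order here are hypothetical — this is just one way the configuration discussed above could behave, not anything agreed on the call.

```python
from typing import Optional

def resolve_coinbase(suggested: Optional[str],
                     local_default: Optional[str],
                     force_local: bool = False) -> str:
    """Pick the coinbase from a (possibly absent) suggestion and local config."""
    if force_local and local_default is not None:
        return local_default      # operator insists on owning the rewards
    if suggested is not None:
        return suggested          # honour the consensus client's suggestion
    if local_default is not None:
        return local_default      # no suggestion; fall back to the node default
    raise ValueError("no coinbase available from either side")
```

For example, `resolve_coinbase("0xAA", "0xBB")` honours the suggestion, while `force_local=True` expresses the "I'm owning the coinbase" configuration.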
I think that in a world in which we have a proof of custody for the execution layer, the execution engine essentially cannot be outsourced. It needs to listen to what the directives are. And if a market dictates that the market makers can set their own coinbase, then that needs to be negotiated beforehand rather than simply not listened to. I don't think it's fully settled, but there's a lot of active research working on making sure that you can't outsource execution like this — that you, as a validator, actually need to execute, even if somebody else is providing the payloads. And so I think we need to consider that in the design.

I think that last thing you said is critical. We definitely want to prevent people from outsourcing execution, but I don't think that means we care about people outsourcing block production, and it may make total sense for the coinbase reward to go to the block producer.

I agree, but the consensus client needs to actually know that it's entering into a market like that, because it cannot validate this — if it thinks that it's providing a coinbase where fees are going to go, it won't be able to validate whether that was actually respected or not. So I think it's very strange for the consensus layer to think "I'm going to get these fees", pass it along, and then not actually get them, even though it's doing all the execution. Whereas if you were entering into a market where that would be bypassed, you should have configured your system that way.
So the problem there is that we'd have to negotiate the coinbase before the block is produced, whereas in a market for block production the block producers are coming from all over and you don't know what the coinbase is going to be until after the block is produced. We have an ordering there that's going to cause problems, I think.

Right, so maybe coinbase should be a configuration. Is it a requisite that builders use the coinbase to actually get paid? Is there something there? Because you want to be able to make your block generic.

I think there are some economic arguments: we want to encourage ether to actually be used for paying transaction fees, and the coinbase kind of nudges people in that direction — it doesn't enforce it. Also, because there's a COINBASE opcode, it gives people submitting transactions a way to pay the block builder, and so it gives an auditable trail of payments from the people sending transactions — you don't end up with some layer-2 or off-channel payment for this stuff. Again, these aren't super strong arguments, not deal breakers, but they help with transparency and such.

So let me see: "it's untenable for block builders to wait for prepare payload before knowing the coinbase address". Why is that the case?

If someone submits a transaction and they want to bribe the person who is constructing the block, they need to send money to the coinbase — either via gas fees or via a transaction that does a coinbase transfer — and they don't know who that's going to be if you have a market for block builders. You might submit your transaction out to a dozen block builders, all competing with each other to build the most profitable block for whoever's slot is up. They want the people who are sending transactions to pay them, because they're the ones who need to be incentivized to sort the transactions appropriately.
So it's possible to have other ways of paying those block builders; you just lose transparency if you move it off to another layer.

Can the consensus engine validate the coinbase as part of the block header?

Yeah, it seems like it could. It could validate it, but I think the more important question is: what is it going to do about it? It's got two choices: either give up all its rewards and ditch the block, or accept it, publish it, and get the rewards on the beacon chain while giving up the coinbase. It's going to accept it. It may publish an empty payload as well — a block with an empty payload. Yeah, I guess that's right.

I see markets where the consensus engine actually speaks to multiple execution engines, allows them to select the coinbase, and has its own execution engine that it relies on as a last resort, where it knows the coinbase will be agreed on. And then it can select whichever one is most favorable for the validator.

I think that's conflating this block builder separation with the validator actually executing things. In a world in which you have a proof of custody on the execution layer, the validator is going to have to execute things. So if and how you got a block from somebody, and how it's paid for, is an independent thing, and I don't think we should have this superposition where we don't know whether the coinbase is going to be set, even though you're going to have to execute.

Are we assuming the consensus engine and the execution engine are in a trusted relationship?

I'm arguing that that will certainly be the case, because it is a security flaw for it not to be. And there's going to be a push in R&D to make that the case over time: if you're running a beacon node and validator, you have to actually, literally do the execution.
Even if you outsourced block production or block building. In that case, I feel like we could just say that the coinbase is a recommendation — just a place to put the coinbase — and then it's up to individual operators to decide: do I want my validator node to decide the coinbase, or do I want the execution engine to decide it? This is basically a way to facilitate that communication, but we don't have to enforce it in any way. We can just say it's up to individual operators to decide which side sets the final coinbase; here's a mode for communicating that information between the two, in case you go with the model where your consensus engine is the one making the suggestion.

Right, I think that's reasonable. There are options either way. And it means that, for the standard, we would use not MUST wording for setting the coinbase, but SHOULD, or a note that it may be overridden.

And, Tomasz, what I'm arguing is that your execution engine might ask many market sources for a valuable payload, but then your execution engine ultimately is going to run it, and it needs to be configured to decide whether it is happy with the setting of the coinbase or not — rather than the consensus layer talking to ten different execution engines. I think you have the consensus layer, the execution engine, and then a market for execution; those are three different things, rather than conflating execution engines with the market providers.

Oh yeah, I just used it as a shortcut: you can have an execution engine which is an aggregator of execution engines, but in the end, architecturally, it will still be the one talking to the consensus layer. However, if you remove this ability to override the coinbase, then the aggregating execution engine cannot really rely on multiple ones.
It cannot even allow them to act independently upfront and try to construct different blocks. Actually, what's nice about it is that it's a bit friendlier for solo validators running their own execution engine. For solo validators it might be harder to extract MEV — to create very efficient execution engines — but they always want to make sure they don't publish incorrect blocks. So they'll run the execution engine that verifies everything and validates — call it the low-value one. But for the actual block construction, very often they'll just redirect to someone who can do that better, because they can find the MEV. Like I said, solo validators will have really limited ability to extract MEV, which we see may end up being like 70% additional revenue. So if we don't allow this coinbase to be overridden, then obviously the solo validators will for sure get the transaction fees; but on the other hand, the block builders may be less likely to provide any significant value, because the coinbase income from execution fees, after the base fee is burned, is not that relevant. And it limits the market slightly, makes it much more rigid.

Is the coinbase overridden? — Go on.

It might be healthier to allow this to be overridden, because it creates these multiple execution engines that compete but also validate each other, and creates a bit healthier market. So if you have liquid staking, you can have validators and small MEV runners all talking to different validators, saying: this is what I can propose to you, but I assume you're running something to verify that I'm producing the correct thing, because otherwise it might not be voted on, not attested, and that would be a big loss for you. I'll stop here on the coinbase discussion.

So do we have a legitimate case where the coinbase might be overridden by the execution client, or not?
I'm not seeing it yet, because I think that the way MEV is extracted in an open market doesn't need to override the coinbase. But I'm also probably speaking beyond my understanding at this point, and I don't know if we're going to solve it on this call. Anyway, the proof of custody takes out most of the cases where I can imagine that being useful.

I think there are still lots of cases, but I agree that we should probably move this to Discord. Yep.

Okay. If we have the coinbase here, it implies that it will need to be added to the validator client API as well, so this is for consensus client implementers to consider too. Let's move forward. We have these two methods — really quick, sorry, Mikhail: did we decide, before we got distracted with the coinbase stuff, did we decide on switching that first method to a prepare and a get? So the engine API will have prepare and — okay, sorry, I missed that. I think we have a rough consensus around this, unless anyone is opposed.

Okay, so: execute payload and consensus validated — the latter might get an alternative name; I personally don't like this one much. What's going on with these methods? We have the payload to be executed and verified by the execution client, and we have a beacon block to be verified by the beacon node, by the consensus client. The payload, even if it is a valid one, must be discarded if the beacon block is not valid. This is why the second method appears here. The other option would be to call execute payload only after the consensus client has validated the beacon block and has proof that it's valid. But that restricts the parallelization we want to leverage — the parallel processing of the execution payload and the beacon block — to save us some time. Hence, we need these two methods.
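The two-step flow described above can be sketched as follows: execute payload validates and caches the payload, and consensus validated later decides whether it is persisted or discarded. The class, method names, and status strings are illustrative assumptions, not the draft's exact wire format.

```python
# Sketch: payload validation runs in parallel with beacon-block validation;
# the payload is only persisted once the consensus verdict arrives.

class ExecutionClientStub:
    def __init__(self):
        self.cache = {}   # blockHash -> payload awaiting the consensus verdict
        self.chain = {}   # persisted (accepted) payloads

    def execute_payload(self, payload):
        # A real client would run the full state transition here; we only
        # cache the payload pending the consensus-side result.
        self.cache[payload["blockHash"]] = payload
        return "VALID"

    def consensus_validated(self, block_hash, status):
        payload = self.cache.pop(block_hash)
        if status == "VALID":
            self.chain[block_hash] = payload  # persist
        # on "INVALID" the cached payload is simply dropped

ec = ExecutionClientStub()
ec.execute_payload({"blockHash": "0x1", "transactions": []})
ec.consensus_validated("0x1", "VALID")
```

Note how the cache is exactly the storage the document mentions: a payload sits there until the consensus validated message arrives, then is either persisted or discarded.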
There is the block processing flow — the sequence diagrams that illustrate two different cases of how these two methods are combined. It also implies caching on the execution client side, which is mentioned near the bottom of this document: the execution payload will have to be cached, stored in some way, until the consensus validated message is received, at which point it can be either discarded or persisted. I don't think there is anything else specific to mention here, other than that this consensus validated method maps onto the corresponding consensus event on the beacon chain side. And there is an open issue: it just returns VALID or INVALID — I don't know if UNKNOWN is valuable here, probably not. Same here: it just propagates that the execution payload with this block hash has been validated with respect to the consensus rule set, and it is either valid or invalid. That's the meaning of these two methods. Does anybody have any questions here?

Just to make sure I understand: execute payload will come in when the consensus engine is saying, hey, here's a block?

Yes. When the block arrives, the execution payload is sent, and the consensus client starts to validate the beacon block in the meantime. When it's validated, it sends consensus validated. Then the execution client responds, and after all this is done, the block is either persisted or discarded. And there's another case, when consensus validated comes after the payload has already been validated — I don't think that's the frequent case; it might happen when the execution payload is completely empty, no transactions. Anyway, it should be considered.

What does an execution client need to do to recognize that the consensus validated will never come?

Good question. You mean, when it should drop the cache?
Yeah, at some point, so the execution client receives executePayload and starts validating a block. At some point, I'm assuming, it should throw that block away if it never receives a follow-up consensusValidated. Is that right, or should it hold on forever? Right, it may wait for the finalized block event and drop the cache. Actually, if the payload is behind the checkpoint that has been finalized, behind the block that has been finalized, it should clear the cache, it should prune the cache in this case. Okay, so hold blocks until finalization, and then once finalization occurs, clear everything that's not in the finalized history but is prior to it in terms of slot numbers. Right, and there is another possible case: the cache could also be cleared if the consensus client was out. But that's related to the recovery from failures, and there will be an explicit place where the execution client understands that the consensus client has been out, if we follow the proposal in this document. In that case, it can also release the cache, because whatever was cached before doesn't matter anymore: the consensus client has just restarted and will send fresh information. So those are the two possible cases for dropping this cache. Okay. Yeah, we have like 10 minutes. Yeah, the forkchoiceUpdated method: it unifies the two previous methods that we had for fork choice state updates, which are finalizeBlock and setHead. The reason behind this unification is that the fork choice information, namely the finalized block and the head, must be applied atomically to the block store, to avoid a corner case which is rare, but it can appear and it's legit.
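The pruning rule agreed above ("once finalization occurs, clear everything that's not in the finalized history but is prior to it in slot numbers") can be written down directly. The data shapes here are assumptions for illustration:

```python
def prune_cache(cache, finalized_slot, finalized_chain):
    """Keep a cached payload only if it is still ahead of the finalized
    slot, or if it belongs to the finalized history itself.
    `cache` maps block_hash -> slot; `finalized_chain` is the set of
    block hashes in the finalized history (both shapes hypothetical)."""
    return {
        block_hash: slot
        for block_hash, slot in cache.items()
        if slot > finalized_slot or block_hash in finalized_chain
    }

# Three cached payloads; finalization reaches slot 7 on the "0xaa" branch.
cache = {"0xaa": 5, "0xbb": 5, "0xcc": 9}
kept = prune_cache(cache, finalized_slot=7, finalized_chain={"0xaa"})
```

Here `"0xbb"` is dropped as a stale sibling fork behind the finalized checkpoint, while `"0xcc"` survives because it is still beyond the finalized slot. The other drop trigger discussed above, a consensus client restart, would simply clear the whole cache.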
The corner case is about the situation where the new finalized block is on a different fork than the previous finalized block. The message to finalize the block arrives at the execution client, and it updates the finalized block, but after this update the head and the finalized checkpoint are on different branches, which is a lack of consistency between the two. And yes, of course, the setHead that will update the head to this new fork will arrive a few milliseconds later, but anyway, there will be a short period of time when these two things are inconsistent. So that's why it needs to be an atomic update of the head and the finalized block. So, yeah, I just want to note that, given the discussions led by Dankrad, and maybe the definition of what is the unsafe head, the most eager head, then the safe head, which often would map to the same thing, maybe with a little bit of delay, and then also finalized: I think that this would be the method where we'd actually want to insert that information. Those three things would always be on the same chain. But if we do want to expose that additional information to the execution engine for the user API, as we discussed, I think this is where we would insert it. We would not propagate this information to the execution client; rather, it would request it from the source. This is just my opinion, we've been discussing it in this group. Just the head block hash in this current draft, that would be equivalent to what we've been talking about as the unsafe head, is that correct? Yes. Oh yeah. Yeah, yeah. You're saying you'd want to route it as a proxy through to the beacon node rather than giving it the information?
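The atomicity argument above can be made concrete: a single forkchoiceUpdated assigns head and finalized together, so no observer ever sees them on different branches, whereas two separate calls open a window of inconsistency. The class below is a sketch, not any client's real block store:

```python
class ForkChoiceStore:
    """Sketch of why a unified forkchoiceUpdated matters: both fields
    change in one step (hypothetical structure, for illustration)."""

    def __init__(self, head, finalized):
        self.head = head
        self.finalized = finalized

    def forkchoice_updated(self, head, finalized):
        # One atomic assignment of both fields. With separate
        # finalizeBlock / setHead calls, the store would briefly hold a
        # finalized block from the new fork and a head from the old one.
        self.head, self.finalized = head, finalized

# Chain reorgs onto a new fork: head and finalized move together.
store = ForkChoiceStore(head="0xB2", finalized="0xB0")
store.forkchoice_updated(head="0xC2", finalized="0xC1")
```

After the call, the pair `(head, finalized)` is consistent by construction; there is no intermediate state to observe.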
Because we might want to expose some other information from the consensus layer to the user-facing API in the future, so it should be more flexible. And if we propagate to the execution client all the information that the user may request from the consensus layer, then every time we add something new from the consensus layer to the user API, we will have to update both the consensus clients and the execution clients. I'd argue for not putting most of the consensus layer stuff into the user API, and for them to be separate, and if you actually do want to leverage stuff from the beacon chain, run web3.beacon and ask for it directly. I think this is an exceptional case, because this maps to the understanding of the head of the execution chain, essentially, and it's relatively limited information that the beacon chain is good at. I'm not really wedded to it, but it might as well hand it off to the execution engine because it's relevant to it; but obviously I think we could debate this. Yeah, yeah, I see. I think the important part here is backwards compatibility, unless we want the thing that everybody currently calls latest to be the unsafe head. So we need to make sure the execution engine is aware of what the safe head is. And I think it sounds like most of us agree that the safe head is the reasonable replacement for latest. And so the execution engine needs to know that in order to not break everything. What we don't really agree on yet is whether the API proxies through to the beacon chain, so that you can extend beacon chain functionality into the user API more easily, or whether this is kind of an isolated case and you just pass the additional information and don't proxy the API through. So even if we did proxy, I think we still need to tell the execution engine what the safe head is.
Whatever the decision, proxy or not proxy, the execution engine needs to know the safe head so it can return something when users ask for latest. Marius, the safe head is not finalized; that is, if you assume the network is synchronous and you see sufficient attestations come in, you can quickly know that it is very, very unlikely to be reorged. That's kind of what we're calling safe here. But you could also be in a position where the head of your fork choice has not had sufficient information come in; it's still the head, but you can't make as probabilistically good a decision about it. Yeah, so we currently have the model that everything not finalized is unsafe for us. I agree that it might be good to publish a safe head for the user-facing APIs, but internally we will not do anything with it. And I wouldn't suggest it; the only thing that I would do with it is potentially how you route it to the user APIs. I don't think that it has to do with how you handle your data model or anything like that. For context: this is when a user, through the JSON-RPC API, is talking to an execution client using the legacy APIs only, so they have not upgraded anything for the merge. They say, give me the latest block, and the execution engine needs to return them something. And we need to decide: is that the unsafe head, is that the safe head, or is that finalized? The safe head, I think, is the best option here, because it's very close to the tip, but it's also very safe. Whereas finalized is potentially pretty old, and we don't want to give the user a block that's, you know, several minutes old, and unsafe is unnecessarily risky to the user. Okay, so yeah, getting back to this: I do see value in it, if we say that we propagate all this information with the fork choice.
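The mapping being proposed above, that a legacy "latest" request should resolve to the safe head rather than the unsafe head or the finalized block, can be sketched as a simple lookup. The tag names and hashes here are illustrative, not a standardized mapping:

```python
def resolve_block_tag(tag, unsafe_head, safe_head, finalized):
    """Sketch of the proposal discussed above: legacy users asking for
    "latest" get the safe head, close to the tip but very unlikely to
    be reorged (tag names are assumptions, not final spec)."""
    mapping = {
        "latest": safe_head,     # proposed post-merge meaning of "latest"
        "unsafe": unsafe_head,   # most eager fork-choice head
        "finalized": finalized,  # irreversible, but potentially minutes old
    }
    return mapping[tag]

# Three candidate heads on the same chain, newest to oldest.
answer = resolve_block_tag("latest",
                           unsafe_head="0xH3",
                           safe_head="0xH2",
                           finalized="0xH0")
```

The trade-off stated in the transcript is encoded directly: `"0xH3"` would be unnecessarily risky for an unsuspecting legacy user, `"0xH0"` is too stale, so `answer` lands on the safe head `"0xH2"`.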
Like, if we propagate this confirmation information to the execution client, and we will not get back to this pattern for any other data, that might be valuable, that might be reasonable to do. And also, yeah, this unification requires a corresponding update in EIP-3675, because there are two separate events that this maps onto, so this will be done soon. Yep. Yeah, like four minutes. Yeah, and it would be great to talk about the sync, but I think we don't have enough time for this. So I propose to reach out to the geth team in Discord to talk about the sync and define the requirements of the sync process on this API. What do you think about making a call, and what is the better time for the next call, one week or two weeks from now? So we're currently working on our part of the spec, on what we think about the sync, the guarantees we can provide. And Felix is unfortunately not here today, but he's currently writing down a new document with basically everything about the sync. So I think in one week we should be finished, if you all have time then, I don't know. I would make a call one week from now, using the same time slot. So, any other opinions on that? Okay. I mean, we can use half of AllCoreDevs, or two thirds of AllCoreDevs, to discuss the sync for the merge. It's not exactly one week and it's not exactly the same time, but it might be a good enough place. If we can do that, that would be awesome. Sure. Yeah, it's hard to see what's higher priority. Yeah, so I guess, just in terms of next steps: as soon as the geth team has the sync write-up, just post it in the AllCoreDevs agenda, and we'll make sure to cover that first on the call next week. And whatever time we have during AllCoreDevs, we'll just try to do as much as we can, and follow this discussion there.
Okay, I'll stop sharing. Thanks everyone for coming. I was expecting not to reach the end of the document today. Thank you so much. See you later. Thanks everyone.