Hey, hello everybody, welcome to the merge community call number four. Can you hear me? Yep. Yes, cool. It's been pretty silent here. Okay, so we have the regular updates for today and a few discussions on top of that.

First item on the agenda is the devnets. We are currently running the Nocturne devnet, which started yesterday. It reached finality and looks stable so far. There have been a few edge cases which we saw on this devnet, and there is also an issue with deposits — in particular with eth1 deposit voting — but I guess we're near to solving this issue, and then we will see deposits. Eight teams are running their validators, and several community members are doing this as well, and they are helping with testing: depositing, trying to break it, trying to submit bad blocks, and so forth. So that's great.

I have a couple of questions regarding the Nocturne devnet. We've been planning to test transaction propagation there — is anybody from the go-ethereum team on the call now? I believe it's a holiday in Germany, so most of the team will be offline. Okay. Proto, you might know if this PR which enables transaction propagation is about to get merged, or already merged? Well, I think there is one PR by Gary that improves on some of these things, but I'm not sure about transaction propagation. The devnet will run for a few more days at least, so we can try it later. Yeah, okay, makes sense. The other question was about state sync, but I guess that was mostly addressed to the go-ethereum team again, so let's just skip it.

If anybody wants to join Nocturne, you're welcome to do this. You may reach out to Proto, or just drop a message in the Rayonism Discord channel and ask to get deposited.
Yeah, but first we need this deposit issue to be resolved. So that's the update for Nocturne. Proto, do you want to add anything about Nocturne? Or about Rayonism in general, maybe? I think that's the next item — let's talk about this then. Yeah, so you may just start right away.

So with Rayonism, I think that we should basically wrap up the hackathon phase and think of the merge more as something that we are going to work towards for production. This basically means that we want to do the rebase, as I'd like to call it if we're playing with git terminology. We have Altair and London first — this is the missing functionality that has been developed in parallel — but now it's time to try and layer the merge work on top of these updates, and then implement the new API. So we move away from JSON-RPC — I'd like to move away from that; it's up for discussion. From there, we have a better chance of a client that handles the fork transitions well, and we can write less code that we have to throw away later, so we can work on state sync and similarly difficult problems and make some progress there.

And just some context on the consensus side, on Altair: we do have another pre-release coming today, and a target for a freeze on that a week from tomorrow, so that after that we would begin to rebase the merge spec on Altair. We'll also be seeing spec-conformant Altair releases soon after that on the client side. Sounds great.
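As an aside on what "the new API" might cover: later in the call, operations like assembling a block, inserting a new block, and setting the head come up repeatedly. Here is a minimal in-memory sketch of that surface; the class, method, and payload shapes are illustrative assumptions only, not from any spec.

```python
from dataclasses import dataclass

@dataclass
class ExecutionPayload:
    parent_hash: str
    block_hash: str

class ExecutionEngine:
    """Toy in-memory stand-in for an execution engine, keyed by block hash."""

    def __init__(self, genesis_hash: str):
        self.known = {genesis_hash}   # block hashes we have fully imported
        self.head = genesis_hash

    def assemble_block(self, parent_hash: str) -> ExecutionPayload:
        """Build a payload on top of a known parent; fail if it is unknown."""
        if parent_hash not in self.known:
            raise KeyError(f"unknown parent {parent_hash}")
        return ExecutionPayload(parent_hash, f"child-of-{parent_hash}")

    def new_block(self, payload: ExecutionPayload) -> bool:
        """Validate and insert: valid only if the parent is already known."""
        if payload.parent_hash not in self.known:
            return False
        self.known.add(payload.block_hash)
        return True

    def set_head(self, block_hash: str) -> None:
        """Point the engine at a new canonical head."""
        if block_hash not in self.known:
            raise KeyError(f"unknown block {block_hash}")
        self.head = block_hash
```

A consensus client driving this toy engine would call `assemble_block` when proposing, `new_block` on import, and `set_head` after fork choice — which is roughly the interaction pattern discussed later in this call.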
Um, so the rough plan is to wrap up Rayonism, right, and then, while client implementers are focused on Altair and London, we'll keep doing some spec and research work, like figuring out the transition process. We'll do some proofs of concept on top of the infrastructure that we got out of Rayonism — thanks a lot to Proto for doing a tremendous amount of work on it. Then, once Altair and London are nearer to being finished, we get back and spawn another merge testnet, hopefully with state sync and with the new consensus API, which is going to be discussed and specced out during this period of time as well — so that's like a month or two. That's my understanding.

And I think this kind of plan makes a lot of sense to me. I think we'll also extend the consensus test vectors for the merge — there's a lot of work in that direction right now in the spec repo, and it'll certainly be ready for the next wave of development. Yeah, definitely.

Also, it's been planned to deploy the sharding devnet and to work on sharding during Rayonism; this work will keep going post-Rayonism, so it's not like it ends there. And, as I already said, we have all the resources — infrastructure, block explorer, scripts, Docker images — to spawn devnets and testnets easily. Anything else regarding Rayonism?

I want to echo what Mikhail said — huge props to Proto for the effort, and to all the contributors. It's really awesome to see the devnet up. We really have seven clients that implemented the initial merge spec, which is an amazing result. Which client is missing? Yeah, OpenEthereum is missing. And I guess TurboGeth has also missed this one, but they can likely catch up with the changes from go-ethereum — I don't know whether that's possible now.

There's a question from Micah.
I don't know how to answer it: will OpenEthereum be able to make the merge? If no one here knows the answer, that's fine — I'm just curious if someone has any clues. I think they're still thinking it through; I don't want to speak on their behalf. Okay, yeah. Probably somebody has heard something that would make sense to share here, but if not, let's skip this.

Okay, so I guess that's all for Rayonism, and we are moving to research updates. One update from my side: I was supposed to start work on the transition process, but unfortunately I didn't have enough time to come to a readable spec, or the kind of analysis that I had planned to do. I guess I'll start next week — I was actually a bit busy with Rayonism and other stuff. Any other research updates?

This is primarily to change the transition portion to be a dynamic total difficulty based off of the fork? Yeah, that's it. I was going to analyze how difficulty could change throughout the voting period, what value would make sense, and what would be the right way to extrapolate the total difficulty that we could expect. Gotcha. Let me know when you open that up. Thanks.

Sure, because it's reasonable to use the eth1 data voting to get the block hash which we will use for the extrapolation — otherwise we would need to come to consensus on this block hash first, which does not make much sense. But we'll see. Yeah, I might aim to use the first eth1 data voting block after the beacon chain fork as the function of total difficulty, rather than using the next one — but we can chat about it.

Okay. Other research updates — withdrawals, maybe? Dmitry, say a few words? I got good feedback from the last call.
Thanks for it. I made some improvements: I added a partial withdrawals section. It looks viable, but it will be restricted to BLS withdrawal credentials, so it's very limited for usage in shared pools. I think we cannot do something on-chain on the eth1 side, but something like Shamir's secret sharing with BLS could work for off-chain pools. You can check the updated doc with the reworked withdrawal section and give me some feedback on it. Thank you — here's the link. Will do, thanks. Thanks, Dmitry. Anything else before we move on? Okay, cool.

Let's move to the spec discussion, and the first item is the consensus API standard. I think it's a good time to open this can of worms and start the discussion. I'd like to share my off-the-top-of-my-head opinion on that. This is the discussion about how the consensus API will be provided by execution engines — which underlying protocol we will use for it. Once we make a decision on this protocol, we are free to design the particular endpoints and move forward.

So, what we have so far is the JSON-RPC API, which most people here are familiar with, I guess. The other one is the eth2 API, the beacon node API. JSON-RPC is based on HTTP as well, but the eth2 API is a REST API, and my opinion is that, in general, I'm leaning towards the REST API. It's convenient.
It has a lot of tooling, it can be secured, and so forth. The argument for using JSON-RPC is that it's already been implemented in all of the eth1 clients and we would just reuse the code. But one thing that we should keep in mind here is that this new API will need to be exposed on a separate port, and not exposed to the public, for security reasons — because this is a tight relationship between the consensus layer and the execution layer. So I think that implementing this from scratch with the REST approach makes sense from this point of view as well, to avoid bugs in the implementation that would damage security. So let's just discuss it — any opinions on whether we should use JSON-RPC for this consensus API? Lukasz?

I have a question: can you provide me with some more concrete examples of what we will gain if we reimplement it? Just saying that it's convenient doesn't really clear up much. We should focus on what it will bring us, and then we can decide — not before. Yeah, that's fair.

I'm actually going to pull up an old comment from Peter and Martin from when we were debating the API between beacon node and validator. Peter jumped in and gave a long argument for using RESTful HTTP instead of JSON-RPC, and regretted the choice of JSON-RPC on the current eth1 client. Here it is — I won't go through it all here, but if you're interested, take a look. I think it's very relevant when making these kinds of decisions. Obviously, as Lukasz has kind of implied, one of the main drawbacks of changing this kind of thing is adding support for another API type on the clients that already serve JSON-RPC.

You said just now — you said RESTful HTTP. Do you mean that, or is that a misspeak?
Are we talking specifically about HTTP — which means WebSockets are out — or is REST over WebSocket considered still on the table? Well, I mean, REST is a design pattern, right — representational state transfer. Right, yeah. The reason I'm asking is because I'm a big fan of REST, but I'm also a big fan of WebSockets, especially for what's essentially going to be a long-lived connection like this — WebSockets make more sense, in my opinion. So REST over WebSocket I would be a huge advocate for, whereas I'd be a much weaker advocate for doing all the work to do REST over HTTP, or doing both — REST over HTTP or WebSocket, like we do with JSON-RPC — that would be fine too. I won't respond, because I don't have enough of an opinion here.

Okay. Micah, could you elaborate on why WebSockets make more sense in this context? Correct me if I'm wrong — and maybe I am here — but there will be a reasonable amount of traffic over this channel, and we want to make sure that we're not getting inundated by HTTP overhead. With a WebSocket, you spin up the socket once at the beginning of the connection and you just leave it open, and the overhead per message is very, very low compared to HTTP — whereas with HTTP, you oftentimes end up with more overhead from the headers than you do from the actual payload.

I believe you might be overestimating the amount of communication and overhead there. Things aren't requested to be sent that often, and the payloads there are actually probably pretty small, so I don't feel like the overhead would matter much.

So I would actually vote for HTTP, because it's just so convenient that you can do quick things with it, whereas any other API tends to have much stronger blockers if you just want to experiment and do some quick stuff. Dankrad, you mentioned you were on the REST side?
I mean, you're in support of REST because JSON-RPC is also—? Yeah. If we want to do REST and we want to do WebSockets together, then we have to emulate some parts of REST in the WebSockets — things like getting things from the path, we'd need to somehow encode and decode it, etc. — because REST was designed mostly as an HTTP API, if I'm correct. Yes, and my concern — yeah, I'm going to back down on WebSockets if there's not much throughput. I'm not surprised if I'm overestimating the volume of traffic here.

So, one of the arguments for JSON-RPC is that it allows clients to reuse code — but they will specifically need to be opening a server on a different port. Does that change how much code they're able to reuse? Do we know whether clients are designed in a way where it's easier to spin up another copy of the same type of server within the client, or would it be just as easy to spin up a different type of server? Paul?

Oh, yeah — I wasn't trying to respond to that question; I put my hand up before. I could try to respond to it, though. I'd guess that making an HTTP API is probably trivial for, I'd say, all the languages involved, so I wouldn't imagine it would be that much work. But I'll wait for a proper answer before I change to a different topic.

If I can answer: it's very easy for us at Nethermind to spin up a second port — we're already doing it for WebSocket communication, so we'd just add another one. Would it be significantly easier for Nethermind to spin up another JSON-RPC server, or just as easy to spin up a REST server? With Nethermind, it would be a bit easier to just spin up the second port. I don't think that doing REST would be that hard, but with REST you should, for example, correctly use HTTP status codes for communication — that's part of REST — so it takes a little bit of carefully designing the responses, etc.
Error responses are more or less already defined in JSON-RPC, by comparison. Okay — Paul, do you want to share anything else?

Yeah. My understanding is that one of the things we have yet to figure out with the communications between consensus and execution clients is how we deal with them syncing with each other. For instance, if your consensus client is long-running and then you, say, wipe the DB of your execution client — how do we get them to sync with each other again? Has this been fleshed out somewhere? It seems like it might be an important factor in determining the communications that we use. I'm especially interested because I know that REST can be — I think it's great, for the record — restrictive at times, and I need to think it through, but I'm not sure if we'd start to run into problems with REST if we're trying to sync these two processes with each other — whether we'd start inferring state between requests and break REST. So I guess my primary question is: have we looked at how they're going to sync together, or have I just missed that?

I don't think that REST will give much more overhead in this syncing case than regular JSON-RPC — but that's just my opinion.

Is the connection between the two stateful at all? Could you hypothetically have three execution clients on the back end talking to one consensus client and everything be fine, or is there an assumed state?
It's stateful — though it depends, of course, on the design of the execution engine. If you have, like, three servers in front of an execution engine core which processes blocks, that's one design; and then there's the monolith architecture as we have it today. So yeah, there is a one-to-one relationship — or a many-beacon-nodes-to-one-execution-engine relationship — but not the opposite.

But does it need to be stateful? I mean, the only state would be whether the execution node has actually received the block, right? Once it has it, I think that should be the only state. — Execution engines rely on a notion of the current head for a lot of things. That could be changed: you could have a more dynamic representation of the block tree, with multiple different potential heads. But today they rely on this: when you set the head, certain things are optimized in terms of what state is available, which pending blocks are being created, and that kind of stuff.

Is it possible to design it — it would be really nice if this could be a stateless connection — so that when the consensus client makes a request of the execution engine, it gives the execution engine, at that point in time, all the state it needs to answer correctly? I mean, you certainly can, and I think inserting blocks has the state that it needs — you either have the previous block or not — and assemble block, I think, right now tells us the head you want to assemble on, so the information, again, is there. But there are still certainly likely some optimizations and reuse of how these things work today.
That's where that head becomes really useful. But you can certainly design an execution engine that doesn't really care about that head, and the other methods I think would work fine — it just doesn't reuse existing code quite the same way.

Okay, so if a consensus client asks an execution client to build it a block, it will give it enough information that it will either get a correct block back, or it will get an error saying "I can't build that, because I don't know about this head you're talking about" — but it won't give back an incorrect block, right? It has enough information to validate and give back the right block. Yeah, right.

And when you were saying — suppose there are three execution engines and one consensus client in front of them. In order to stay in sync, in order to maintain the full state of the execution chain, this consensus client will have to feed all three with new blocks and with any other information required to get synced and stay synced. Whatever routing you have there would essentially have to do a broadcast: it receives a new block or a set head from the consensus client, and then it broadcasts that down to all of its connected execution clients, hypothetically, so that they all update themselves, right? Yeah — set head and new block.

I think one of the things Paul is concerned about is, for example: if consensus says "insert block" and the execution engine doesn't actually have the parent — what is the communication protocol to recover from that? Does the consensus client just walk backwards through parents until the execution engine has what it's supposed to have, and then insert from there, or is there some other, more dynamic recovery? I don't think we've quite worked that out, and that's the "how are these two things in sync" question. You know, what happens if one
shuts down and then comes back up without a database — that kind of stuff we haven't worked through. And I think, to answer your question, Paul: we haven't worked through it.

Yeah — actually, before the assemble block with some parent hash is sent, we have the new block call with this parent hash, right? So if that was not the case, then there is an inconsistency between the beacon chain and the execution chain — if we're talking about one consensus engine and one execution engine. If it's an infrastructure where you have multiple beacon chain clients using a few execution engines, or something like that, then yeah, probably that could be the case.

Yeah, it seems to me that a test would be: if you have, say, one consensus client, then a proxy, and then three execution clients behind that proxy — that wouldn't really make sense, because that walking-back procedure that we were talking about just doesn't make sense if you start bouncing off random execution clients behind the proxy. So it gives me the idea that maybe REST isn't the best thing we're chasing. I mean, naturally I would prefer REST over JSON-RPC, just because I prefer it, but this just feels to me a little bit more like a one-to-one RPC.

I get it. What I don't really like about JSON-RPC is the custom error codes and custom error messages, but, as has been said, we're all familiar with that. And one thing worth considering here is that all eth2 clients currently support JSON-RPC and have a JSON-RPC client to fetch deposits and get eth1 data, so we don't have the overhead of implementing a JSON-RPC client either. Very good, thanks.
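To make the "custom error codes" point concrete, here is a toy contrast between how a JSON-RPC response and a REST response might report the same failure. This is a sketch only: the `-32001` code and both shapes are illustrative assumptions, not from any agreed spec.

```python
import json

def jsonrpc_error(request_id: int, code: int, message: str) -> dict:
    """JSON-RPC 2.0 carries errors in the response body (HTTP status is
    typically 200), using application-defined numeric codes."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {"code": code, "message": message},
    }

def rest_error(status: int, message: str) -> tuple:
    """A REST API would lean on HTTP status codes instead,
    e.g. 404 Not Found for an unknown parent block."""
    return status, json.dumps({"message": message})

# Hypothetical "unknown parent block" failure expressed both ways.
jr = jsonrpc_error(1, -32001, "unknown parent block")
status, body = rest_error(404, "unknown parent block")
```

The trade-off discussed above is visible here: JSON-RPC requires every client to interpret ad-hoc numeric codes, while REST reuses HTTP semantics that generic tooling already understands.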
I was just going to say that the current design — having these two separate consensus and execution processes — is, I think, the way we're going for now, just because it kind of makes sense. But a world where they're wrapped in one process — not necessarily maintained by the same team, but presenting as a single binary — seems appealing to me. And perhaps using something like JSON-RPC is nice, because we could start to use an IPC socket as the comms transport between them. And if, instead of having two processes, we're importing one as a library, that works very well for them to talk to each other; whereas having an HTTP client and HTTP server between these two, inside the same process, is a little odd.

By binary, you mean the binary protocol? No, I mean binary as in, like, a DLL on Windows kind of thing.

Correct me if I'm wrong, but let's say the consensus client says "hey, assemble me a block with this parent" and the execution client doesn't have that parent. Is it correct that the execution client then goes to its own gossip network to get that block? It doesn't talk back to the consensus client — it's still one-way communication, or request/response rather?

There are two options. One is to respond back with an error — which would be like a database inconsistency error, because these two parts are actually one client and their data should be consistent. The other option is, yes, to try to go download and pull this block — not from gossip, but from the eth network protocol — and get those blocks. But what I had in mind is that it just responds with an error: there is no such parent block.

So I thought we're not responding to eth requests anymore — did we cut that part of the protocol out? Sorry, could you repeat the question? Can you still request the block
through eth — did we cut that out as part of the networking, to make the execution engine more silent? No, we didn't cut this out; we just cut out the block gossip. Yeah, it's not cut out because of how initial sync, and especially state sync, might be performed — it can still be utilized.

And there's certainly an interesting design decision here: if the execution engine detects some sort of inconsistency — because requests are being made for things it doesn't know about — it can use that endpoint to go and fill in the unknown things; it can use the p2p network to get back into consistency with the consensus node. Which is interesting — it probably works out of the box, but it's also kind of a strange design decision.

I like it, because basically the flow here would be: the consensus engine says "hey, do this thing for me," the execution engine says "I can't do that," and then it basically goes and fixes itself on its own — it's essentially a self-healing system. And so, if you had an edge server, a proxy server, between the two, you could notice: "oh, we got a consistency error — take that execution engine out of rotation because it's down for a bit, and we'll try it again later," and meanwhile fall back to a backup execution engine. So the whole system ends up being fairly self-healing, if the execution engines can heal themselves when they get a request that indicates they're out of sync.

Right. I kind of like it as well. It also gets to leverage exactly what the execution engine does today to heal itself when it learns about things it doesn't know about. But, especially in the one-to-one communication,
it kind of complicates things. If you talk to an execution engine locally and it doesn't know something, then you just kind of sit there and wait and hope that it learns about it in the future — because presumably it's self-healing — and the consensus client can't really be as proactive as it might want to be. I see — it would basically have to poll, since it's a one-way communication channel, until it gets back a success, right? Yeah.

The case we're discussing now is: if the consensus client asks for assembling a block on top of some parent that is not present in the execution chain, it would mean that the earlier import of the parent of this block failed, or something bad happened — because if the execution engine responded "this block is valid," then we assume it's been inserted into its database and its chain. So I would say this is a weird, odd case, rather than something usual.

I think the reason I keep harping on this one-to-many is because of what I think is pragmatic.
I suspect, pragmatically, what we're going to see is a bunch of people running validator clients and leaning on third-party providers for the execution client, because the execution client is so expensive to run. I run a few, and they are not cheap and they're not easy — you basically have to operate an operations center to run an eth1 client, an execution client, right now. And that isn't going to change in the immediate future — we're working on that, but it's a ways off. So I think realistically we'll probably see people going to places like Infura and QuickNode and Alchemy for their execution client, while they run their own consensus client. And in that scenario we have exactly this: you hit some proxy server, and the proxy server routes you to one of a hundred execution clients. So I suspect, at least for the time being, that's going to be the common scenario, not the uncommon one — which is unfortunate, but I suspect that's reality.

I think the "operations center" has been exaggerated. I agree it's a big problem — it's like 10x more, not two times. But one comment from the research perspective:
we are thinking about how to change that — like using proof of custody, where we make it necessary for people to run their own execution. We want to make it really hard to do exactly that — the outsourcing. We will actively break that.

My observation: we at Nimbus have users that run both geth and Nimbus on a Raspberry Pi — I've heard rumor of such people; I don't know how they do it. I've got a server that I rent that I struggle to keep up — we struggle to keep geth up on, you know, a box with eight gigs and four cores, and sometimes it runs fine. So, perhaps.

Just real quick: perhaps the right question is this. If everybody is in agreement that we are going to brick people that are not running both an execution and a consensus client, then yes, I think we can design towards the one-to-one connection and focus on making that good and smooth. If we think that, at least for the time being, we're going to allow for and enable people to use Alchemy and Infura, then I think we should design for that, because I do think that's going to be more common. So maybe the first question is: which one are we actually designing for?

Well, there are two types of users — sorry, I'm getting a lot of feedback. There are validators, for which there's an explicit desire to put a proof of custody on execution so that it's not outsourceable. But for users in general there are all sorts of design considerations: running a beacon chain and getting proofs about execution-layer state, or running a light beacon chain and not running execution at all, or many different versions of that.
So there's not just the validator that we're designing for here, I'm not sure.

Also, maybe to add: the one-to-one design might exclude things like secret shared validators and such. We should consider that, because it might make sense, for example, to run a secret shared validator where you have four separate beacon nodes but only one execution node — designs like that may be possible. So I wouldn't necessarily say that because we don't want Infura, we should optimize for one-to-one.

Yeah — sorry, go ahead. I was going to say: if we do look at one-to-many, and the idea is that if a consensus client requests a block from the execution client and — excuse me — it doesn't know the parent, it goes and tries to find the block itself: don't we have the problem that the execution client can't rely on blocks being valid unless it can verify them with a consensus client? And you might say, okay, if it gets a request for a block then it should assume that it's canonical — but that kind of breaks when you get to Infura, where you'll just have people spraying anything at it.

Wait, wait — validity is independent of consensus. Right, so anyone could tell it to follow an execution chain, and it could be valid with respect to execution parameters — you know, the EVM and transactions — but any consensus outer layer on top of that is not going to pick that chain if there weren't a valid set of — Wouldn't that be a DoS, though?
I mean, I could just fill it up. Absolutely — if you open it up to anyone being able to trigger whatever, I think that's certainly a DoS vector. Yeah, and that's what I was going to say: it must be a consensus block first; it can't self-heal from just getting the execution block hash. It can have a trusted relationship — I mean, the idea of Infura running execution-layer clients and not having any view into the consensus layer: they would need to design their own trust model here, on these endpoints. I don't think that you can open up any of this stuff to arbitrary requests.

Okay, so getting back to JSON-RPC versus REST API — any arguments for or against JSON-RPC? Does anybody have anything to add here?

My feeling is that we're not quite at that question yet — we still don't understand the nature of the communications between the two components — and I think when Micah went down the road of talking about one-to-one or one-to-many, that's probably what we need to be thinking about in abstract terms before we start to pick protocols. But perhaps I'm missing something.

Yeah, I would tend to agree that these one-to-one / one-to-many / many-to-many type questions, and the staying-in-sync question, need to be poked at for at least a week or two — to even see if the current communication protocol is sufficient — and then, whether it is or is not, that might tell us what we want to do here. I mean, my guess — I have a
My guess: I have a slight preference for RESTful HTTP based on what I currently know, but I think there are unknowns.

Yeah, I was just going to say: if I could choose how it ends up, I'd love to see it be one-to-many RESTful HTTP, because I think that's super flexible. That'd be nice to aim for, I reckon.

Proto, you had a slight preference for RESTful HTTP because of the authentication model. Is that something you want to share before we move on?

Well, I think separation of the two different RPCs is really important, for security as well as stability. In the current design there are a lot of assumptions on the existing eth1 connection, for deposit data fetching and for sync, and it's been a struggle to work around these assumptions to make things stable. Just starting with a fresh connection that's focused on consensus, isolated and secured, is a much better approach.

If I understand correctly, are you basically arguing that by using a different protocol we guarantee that we're not going to have clients with bugs that cause a bleed between the two?

Yeah. Within JSON-RPC, the protocol itself is fine; it's the client that exposes it and the client that fetches from it that have these existing assumptions around deposit data. At the same time we mix it up with the pre-existing code, and that just increases the surface for bugs in the consensus API.

So you're kind of making an argument for dedicated deposit endpoints on execution clients.

Honestly, I think that's a better idea as well. We have seen various bugs in the receipt logs and whatnot, and they break something that's critical.

Yeah, I wouldn't mind something separate there. Cool. Lucas, do you want to add something?
So, a little bit on the side, because we heard some talk about one-to-many client connections, is that right? We were even thinking about making it many-to-many, so an arbitrary number of eth2 clients could talk to one eth1 node and vice versa. This would need some additional work on our side to enable it: we would have to differentiate the clients and keep some state for them, some block tree info about the current state, and the transaction pool would need to be separate, but the rest could probably be shared. That might be a good way to reduce resource usage, because if each eth2 validator node required its own eth1 node, that would be quite a big requirement. If we can share each eth1 node across, say, 10 or 100 eth2 nodes, that makes it less of a pain and easier for infrastructure providers.

Okay, unless there are any other arguments, I think we should wrap up. We're already using JSON-RPC for the consensus API, and we'll keep doing that for the proof of concept and for the development phase. That being said, we have yet to figure out the requirements for this communication protocol with regard to the sync process, so we'll collect more input on this question and get back to it later on. Anything else here? Let's move to the next item.
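The many-to-many idea described above, one execution node serving several consensus clients, can be sketched roughly as follows. All names here are hypothetical; the point is only the split between shared chain data and per-client state (head pointer, transaction pool) that the speaker mentions.

```python
# Hypothetical sketch: one execution node shared by many consensus clients.
# Shared: the block store. Per-client: head pointer and transaction pool.
class SharedExecutionNode:
    def __init__(self):
        self.blocks = {}          # block_hash -> block, shared by all clients
        self.client_heads = {}    # client_id -> that client's head block hash
        self.client_txpools = {}  # client_id -> separate transaction pool

    def register(self, client_id):
        self.client_heads[client_id] = None
        self.client_txpools[client_id] = []

    def insert_block(self, block_hash, block):
        # Chain data is stored once, however many clients follow it.
        self.blocks[block_hash] = block

    def set_head(self, client_id, block_hash):
        # Each consensus client drives its own view of the head.
        assert block_hash in self.blocks, "unknown block"
        self.client_heads[client_id] = block_hash

node = SharedExecutionNode()
node.register("cl-1")
node.register("cl-2")
node.insert_block("0xabc", {"number": 1})
node.set_head("cl-1", "0xabc")   # cl-2 may follow a different fork
```

The resource saving comes from `blocks` being stored once, while the small per-client dictionaries carry only the fork-choice and mempool differences.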
Okay, spec discussions: execution. First of all, we met an interesting edge case with Catalyst on the Nocturne devnet. The case is the following. Suppose we have a block with two children, and both children have the same state root, which is legal because we don't have miner rewards anymore, and both blocks can have empty transaction lists. What Catalyst does is reject the second block it observes, with an error. This mechanism is part of the protection against state mirroring attacks. I guess nobody from go-ethereum is here to discuss this particular behavior, but probably Proto can add something.

So the state mirroring attack really only applies to long-range attacks, I think, beyond 100 or so blocks. On a testnet with few transactions it's very common to have the same state roots: rewards are issued in the consensus protocol and not in the execution protocol, so you end up with the exact same state. Maybe we should redesign this so that we have a unique state root per block; this would be a change on the eth1 side.

Yeah, I have a related question. Do any other eth1 clients have issues with, or some protection against, two blocks with the same state root? Is this a problem? There's a warning coming from Geth, but does it actually hurt the functionality currently?

So when it inserts a sidechain and reorganizes the blocks, it will just reject the block, right?

At least for Besu, it's on my to-do list to read Martin's write-up and see how it affects us, so I don't know what the behavior is right now.

Okay. I mean, essentially, if the consensus side tries to insert what the execution side sees as a block
it already has, and it just returns and says, okay, I have that already; then you do a set-head call, assuming that's now the head. Would there be much issue here? Essentially, and we talked about this a little bit, two different beacon chain forks could point to the same underlying execution-layer chain. If you reorg from one to the other, you'd call set-head and it would point to the same place, and the execution layer probably doesn't care. I think there are just some minor things to work through here, but I don't suspect we would need to enforce that every execution-layer root is unique across beacon chain forks.

So the reason we protect against this attack is to optimize the way we sync these execution payloads in the eth1 client. If you can trust the state root, then you can basically skip ahead. When there's this kind of long-range mirror attack, and I'm not familiar with the details, you may skip this validation. So even though the state root is the same, the block contents could be different, and then you could get into this dangerous kind of sync scenario.

Yeah, and what do you mean by the block contents being different?

If you optimize to trust the state root, you can get this kind of problem where, if you reorganize and then accept the state root because it's the same, your block contents may not be verified correctly.

I have a question. We are talking about the mirrored-state attacks, yes? I think it's related to pruning, right? In the eth1 execution engine we will prune on the finalized block from the eth2 consensus, because before that we cannot really prune.

Yeah, I agree. Also, I think this is not related to all potential state trie pruning implementations.
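A minimal illustration of the point above, that identical state roots do not mean identical blocks: two sibling blocks with empty transaction lists (no miner reward after the merge) leave the state untouched and so share a state root, yet their block hashes differ because other header fields differ. The hashing below is a toy stand-in for the real header encoding, purely for illustration.

```python
import hashlib
import json

def block_hash(header: dict) -> str:
    # Toy stand-in for the real RLP header hash: hash all header fields.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

parent_state_root = "0xstate"  # hypothetical state root of the parent block

# Two children of the same parent, both with empty transaction lists.
# With no miner reward, neither block changes the state, so the state
# roots are identical even though the blocks are distinct.
child_a = {"parent": "0xparent", "state_root": parent_state_root,
           "transactions": [], "timestamp": 1000}
child_b = {"parent": "0xparent", "state_root": parent_state_root,
           "transactions": [], "timestamp": 1012}

same_state = child_a["state_root"] == child_b["state_root"]   # True
same_block = block_hash(child_a) == block_hash(child_b)       # False
```

This is exactly the shape of the Nocturne edge case: a client that keys any check on the state root alone will conflate `child_a` and `child_b`.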
I might be wrong here, and it's probably better to ask go-ethereum, but from what I understand it's related to how Geth does state trie pruning, so this kind of attack is specific to Geth and to this particular pruning algorithm. They have a sidechain which is not executed until it reaches a greater total difficulty than the canonical chain; then they switch to the sidechain, and if there is a gap, they can't retrieve the state because it's been pruned. They trust this portion of the chain that they can't execute, and that's where the state mirroring appears.

Yeah, so is it based on reorganizations?

Yeah, on reorganizations, and because of pruning.

So if we don't prune before the block gets finalized, as we discussed, we don't have this issue.

Right. So I don't think we need to make the state root unique for each block in the context of execution on the beacon chain. But this edge case appeared on the Nocturne devnet as a signal to consider that the state root is not unique per block, and to keep that in mind for further design and testing.

In Nimbus we support sidechains, and we expect the state roots sometimes not to differ because of the low volume of traffic there, so generally we are working fine with that.

You said there was a write-up from Martin on this?

Yeah, he posted it recently in the private Keybase that the eth1 devs have. I think it's ultimately just a link to a GitHub gist. I've got to read that.

Okay, so anything else on the execution side, anything that eth1 implementers would like to ask or discuss? The next item is consensus discussions. I don't think there's much to discuss here, but just in case: does anybody want to discuss anything or ask a question?
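The conclusion of the pruning discussion, prune state only below the block the consensus layer has finalized, since no reorg can reach below finality, can be sketched as follows. The class and method names are illustrative only, not any client's actual pruning code.

```python
class StateStore:
    """Toy state store that prunes only below the finalized height."""

    def __init__(self):
        self.states = {}    # block_number -> state snapshot
        self.finalized = 0  # latest finalized block number (from consensus)

    def put(self, number, state):
        self.states[number] = state

    def on_finalized(self, number):
        # The consensus client reports finality; only then is pruning safe,
        # because no reorg can revert below the finalized block, so the
        # mirrored-state scenario (a reorg into pruned state) cannot occur.
        self.finalized = max(self.finalized, number)
        for n in [n for n in self.states if n < self.finalized]:
            del self.states[n]

store = StateStore()
for n in range(5):
    store.put(n, f"state-{n}")
store.on_finalized(3)
# States 0..2 are pruned; 3 and 4 (still above finality) are kept.
```

Gating pruning on finality rather than on total difficulty is what removes the Geth-style gap: every block a reorg could still switch to retains executable state.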
Okay, cool. So let's go to open discussion. There has been a proposal to move this call to the same day as the eth2 implementers' call, in the same time slot, so it would be one hour of merge implementers' call and then the eth2 implementers' call. Just wondering what people think about it. Paul raised this question, and probably Paul would like to share his opinion.

Yeah, sure, thanks for raising it. It was just my suggestion. This call is at 11 p.m. for me now, and midnight when daylight savings is on, so it's pretty disruptive to sleep schedules. Stacking the calls together is appealing to me. I'm not sure if anyone has reasons why that's not a good idea.

I think meeting fragmentation is also something that I'm interested in; I like to pack my meetings together.

Yeah, I just wonder if we can sit through two and a half hours if we have a lot to discuss on the merge and other eth2 stuff.

The one problem I see is call exhaustion in hour two, but the eth2 calls are usually fairly light. That might change a little as we move towards Altair production, but those calls are often only 30 or 40 minutes.

Any objections to trying it out and seeing how it goes? Okay, so let's just try it. We have two planned calls to go. I guess we might try the new time for the merge call three weeks from today, right?

I think that's good: a little extra time now as Rayonism winds down, and there's a lot of work on Altair and London that will happen. That's a fine break.

Yep, thanks for the kind consideration, everyone. It probably means more than you think.

I have a guest room, you can just move in, in Colorado. Time zones are pretty good over here.
Yeah, I'll ask my government if I'm allowed to leave.

Yeah, I guess you probably can't even get into this country.

I've lived in the U.S. Don't believe anyone who tells you their time zones are anywhere near sane.

Oh, it's great. I wake up, I have calls at six in the morning, it's wonderful.

Yeah, sometimes they're at seven.

Okay, any closing remarks? Thanks, everyone. Thanks for this great month of Rayonism work that I've been lucky to be a part of. See you in three weeks.

Thank you. Thanks, everyone. Thanks, all. Bye, guys. Bye. Bye.