Okay, so let's get started. Welcome everyone to Merge Implementers' Call number seven. First, congrats on the Görli fork — this is yet one more step towards the merge, which is great. We have a fairly light agenda today, so this should be relatively fast: we'll go through a few updates, then discuss plans for Q3, and finish with some open discussion, probably spec discussion. I'll start with implementation updates, beginning with myself as usual. I've recently been working on a prototype of the transition process. Part of the spec for it was merged a couple of weeks ago, and it's now implemented in Teku. I played with it on a local network, and there are a few things I'd like to share about that testing. I tried a positive test scenario and a negative one, where the block proposer tried to produce blocks before the computed terminal total difficulty had been reached, and it went well. But obviously we need more thorough testing with more mess on the network side — for example, some nodes withholding proof-of-work blocks and then releasing them, and so forth. One thing to bear in mind is that I used a local miner, which resulted in high fluctuations in block time intervals. One of the goals of the terminal total difficulty computation is a predictable merge time, but that wasn't checked well locally because of those fluctuations, so it should be checked with a real miner that can produce more hash power. That's the update from my side. In any case, the prototype showed that the algorithm works in general, which is great. Any questions here? Very nice. How do you think it's best to test some of the more complex scenarios — for example, a partition in the network for two epochs after hitting the transition difficulty, and things like that?
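The transition check described above can be sketched as a toy example. This is an illustration only, not the Teku implementation; the TTD value is made up, and the function names are modeled loosely on the consensus spec's `is_valid_terminal_pow_block` but simplified:

```python
from dataclasses import dataclass

@dataclass
class PowBlock:
    block_hash: str
    parent_hash: str
    total_difficulty: int

# Illustrative value only; the real terminal total difficulty is set per network.
TERMINAL_TOTAL_DIFFICULTY = 1_000

def is_valid_terminal_pow_block(block: PowBlock, parent: PowBlock) -> bool:
    """The terminal block is the first PoW block whose total difficulty
    reaches TTD while its parent's total difficulty is still below it."""
    return (block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
            and parent.total_difficulty < TERMINAL_TOTAL_DIFFICULTY)

def may_produce_merge_block(candidate: PowBlock, parent: PowBlock) -> bool:
    # The negative scenario from the call: a proposer that tries to build
    # on a PoW block before TTD is reached must be rejected here.
    return is_valid_terminal_pow_block(candidate, parent)

# Positive scenario: TTD crossed exactly at this block.
terminal = PowBlock("0xb", "0xa", total_difficulty=1_050)
parent = PowBlock("0xa", "0x0", total_difficulty=900)
print(may_produce_merge_block(terminal, parent))
```

In the negative scenario, a candidate whose total difficulty is still below the threshold fails the same check, so the proposer produces no merge block.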
Do we have anything in our toolkit to test that kind of stuff, or do we need to build it out? That's a good question. I was thinking about simulation — simulation of the network stack — if we want a predictable scenario, so we can parameterize the mess we want to have on the network layer. No, I don't think we have any tools for that, so we'd need to build them. On the consensus side, we could write fork choice tests where essentially a chain is being built and then another chain is revealed with different difficulty and so on. So there's a little we can do there in an isolated fashion, and we should. But yeah, simulation probably makes sense for the rest. Right — so I was probably correct in saying it's simulation of the network part. What you've just said is what I was thinking about: stub out the network layer with a predictable one that can be managed by some process, wiring messages according to some time intervals or something like that. So that needs to be done. Also, about these fluctuations: do we have stable block time intervals on, say, Ropsten? Is it stable in terms of the difference between the mean time and the variance? My understanding is it kind of depends on the day and who's mining on it, but I would suppose it's more stable than what you were doing locally. Yeah, we'll see. Okay, I guess we can move on. Given that Clique still uses total difficulty, could what you've written be anchored on a Görli or Clique network relatively easily? Interesting thought — because if so, that would give you good block times. Right, it's probably worth considering whether it can be ported easily.
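The stubbed, predictable network layer being discussed could be sketched as a small deterministic event queue. This is a hypothetical sketch, not an existing tool — messages are delivered at scheduled times, so scenarios like withholding and late release of PoW blocks can be replayed exactly:

```python
import heapq

class SimNetwork:
    """Deterministic stand-in for the network layer: every message has a
    scheduled delivery time, so a test scenario replays identically."""
    def __init__(self):
        self.queue = []   # (deliver_at, seq, node, message)
        self.seq = 0      # tiebreaker: preserves send order at equal times
        self.now = 0

    def send(self, node, message, delay):
        heapq.heappush(self.queue, (self.now + delay, self.seq, node, message))
        self.seq += 1

    def run(self):
        delivered = []
        while self.queue:
            t, _, node, msg = heapq.heappop(self.queue)
            self.now = t
            delivered.append((t, node, msg))
        return delivered

net = SimNetwork()
net.send("node-a", "pow-block-1", delay=1)
# Withholding scenario: this block is only released much later.
net.send("node-b", "pow-block-2-withheld", delay=10)
print(net.run())
```

A real harness would plug node logic in at the delivery points, but even this shape is enough to script "build a chain, then reveal a competing chain" deterministically.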
I think because it uses total difficulty it should be portable, and we do have some testnets we'll want to run off Görli anyway. The one catch with predictability on Görli is that an in-turn block gets a difficulty of two and an out-of-turn one gets a difficulty of one. There are a lot of out-of-turn blocks, so you're kind of halving or doubling your difficulty a lot. Okay, gotcha. Yeah, thanks, Thomas, for the information on Ropsten. I'll bear it in mind and get back to this question a bit later — how to check the predictability. And this historical data on difficulties could be pretty valuable. Cool. Okay. Any other implementation updates that anyone wants to share? Yeah, I think primarily London and Altair. Yeah, makes sense — London and Altair. Okay, let's go to research updates. A couple of PRs announced on the previous call — the cleanups in the beacon chain spec by Justin, and adding the RANDAO field to the execution payload — have been merged, so that's cool. Also, I've been looking a bit into the current implementation of the consensus JSON-RPC, and I shared a doc. It's more of a problem statement than a concrete proposal for how this consensus API should look. I've got a 403 Forbidden on that doc, if you can open it up. Oh, sorry, really? Let me open it up right now. Yeah, now it should work. Yeah, I'm in. So let me give you a bit of context on that. We have the consensus JSON-RPC implementation, which we used for Rayonism; it worked well for that purpose and could probably work well in production too. But some of us were suspicious of treating it as a production-ready thing, and here are a few arguments contributing to that. Okay, so, yeah.
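The Clique difficulty rule just mentioned (in-turn blocks count two, out-of-turn blocks count one) directly affects how predictably a Görli testnet would hit a terminal total difficulty. A rough back-of-the-envelope estimate, with an assumed out-of-turn ratio:

```python
# Clique difficulty constants: a signer proposing in its designated slot
# produces an in-turn block (difficulty 2); any other signer, out-of-turn
# (difficulty 1).
IN_TURN_DIFFICULTY = 2
OUT_OF_TURN_DIFFICULTY = 1

def blocks_until_ttd(current_td: int, ttd: int, out_of_turn_ratio: float) -> float:
    """Rough estimate of blocks remaining until TTD on a Clique chain,
    assuming a fixed share of out-of-turn blocks (the assumed parameter)."""
    avg_difficulty = (out_of_turn_ratio * OUT_OF_TURN_DIFFICULTY
                      + (1 - out_of_turn_ratio) * IN_TURN_DIFFICULTY)
    return (ttd - current_td) / avg_difficulty

# With many out-of-turn blocks the estimate stretches out noticeably:
print(blocks_until_ttd(0, 1_000, out_of_turn_ratio=0.0))  # all in-turn
print(blocks_until_ttd(0, 1_000, out_of_turn_ratio=0.5))  # half out-of-turn
```

The spread between the two prints is exactly the predictability problem raised on the call: the same TTD can land hundreds of blocks apart depending on the in-turn/out-of-turn mix.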
The main question here is: if we go with the JSON-RPC-based consensus API, which carries some restrictions and imposes restrictions on the use cases, we might want to replace it at some point in the future. So the main question this document raises is whether we are ready to develop some new communication protocol before the merge, or whether we should take the easy path now, think about it, and do it later. I can go briefly through the problems I found in the current implementation and design. Before that I would say: whatever actually gets implemented is likely to be very sticky, in the sense of difficult to replace once it's in production. I haven't read through this document, but if there are real problems, I suggest we fix them soon. Micah has his hand up. Yeah, just a quick question. Did we answer the question from a few meetings ago as to whether this needs bi-directional or unidirectional communication? That is, is it always a request from one end with the response going back the other way, or does the other end sometimes need to initiate? I see — and this document notes some cases where bi-directional communication is needed or highly desirable. So yeah, that's one of the design considerations too. Right — sorry, go on. Yes, please go ahead. Yeah, another big question to answer is whether we want to go with REST instead, since that's what the new clients use. Right, and there's a subtopic on that in this document as well. Okay, let me just go through the problems. I don't think we'll come to any conclusion or solution on this call — this is just food for thought for the next calls and meetings.
Okay, so the first problem: the existing protocol seems to lack a message that tells the execution client that the consensus of a beacon block is valid. That's obviously required, because if the execution payload of an invalid beacon block were stored and served through the user-facing JSON-RPC, users would believe there are bugs in the services and software. We need this explicit message because we also have a set-head message that says what the head of the chain is, but not every block becomes the head of the chain after it gets imported. So we need a separate message to signify that consensus is valid — essentially a commit after initial processing. Right. It would look like this: the new payload is sent to the execution client and processed, and while it's being processed the client receives this new type of message saying that the beacon block carrying this payload is valid or invalid; the payload is then either persisted or discarded after processing. The next thing: we have several messages that are causally dependent, like new payload, set head, and this new consensus-processed message. The current protocol relies on the assumption that the order of all causally dependent messages will be preserved on both the consensus client side and the execution client side — that they will be pipelined — which is just bug-prone. So we might want to relax this assumption and get rid of it.
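The payload lifecycle described here can be sketched as a small state machine. This is a hypothetical illustration of the proposal, not an implemented API; the message names (`on_new_payload`, `on_consensus_validated`) are made up for the sketch:

```python
from enum import Enum, auto

class PayloadState(Enum):
    PROCESSING = auto()  # new payload received, execution under way
    PERSISTED = auto()   # consensus reported the beacon block valid
    DISCARDED = auto()   # consensus reported the beacon block invalid

class ExecutionClient:
    """Toy execution-client view of the proposed message flow."""
    def __init__(self):
        self.payloads: dict[str, PayloadState] = {}

    def on_new_payload(self, block_hash: str) -> None:
        self.payloads[block_hash] = PayloadState.PROCESSING

    def on_consensus_validated(self, block_hash: str, valid: bool) -> None:
        # Without this explicit message, the payload of an *invalid* beacon
        # block could end up persisted and served over user-facing JSON-RPC.
        self.payloads[block_hash] = (
            PayloadState.PERSISTED if valid else PayloadState.DISCARDED)

ec = ExecutionClient()
ec.on_new_payload("0xaa")
ec.on_consensus_validated("0xaa", valid=True)
print(ec.payloads["0xaa"])
```

The key point of the sketch: persistence is gated on the consensus verdict, not on execution finishing, which is exactly the "commit after initial processing" framing from the call.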
And in order to do this, the execution client will have to store some state for the messages received from the consensus client. If it receives the new payload, it can then receive the set-head or consensus-processed messages; if the order of those messages is not preserved, it can gather all this information in a kind of cache and then decide what to do with the payload. So that's one of the things. The next one is HTTP overhead, which requires a new connection each time a request is sent. Also, we can't do asynchronous communication with plain HTTP — only with certain techniques that allow for it. So we might want to use something like WebSockets, which opens the way to bi-directional communication. And the last use case is failure recovery. Assume the execution client crashed, and the consensus client persisted some block while the execution client doesn't have the payload for it — the payload hasn't been persisted. The execution client starts up, and the consensus client sends the next block, for which the execution client doesn't have a parent. According to the current state of the art, the execution client would have to go to the network to pull the state and continue execution, which is suboptimal. We might want to look at more optimized scenarios: the execution client starts up and sends a status message with the head of its chain to the consensus client, and the consensus client decides what to do. If the gap is only one or two blocks, it can replay those blocks without making the execution client go to the network, and so forth. That was the last use case. The overall thought is that it would be great for this protocol to be extensible in the future.
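The message-gathering cache described above can be sketched as follows — a hypothetical illustration (message-kind names are made up) of acting on a payload only once all prerequisite messages have arrived, in whatever order they were delivered:

```python
class PayloadTracker:
    """Buffers causally related messages (new payload, set head,
    consensus validated) so the protocol need not assume delivery order."""
    REQUIRED = {"new_payload", "consensus_validated"}  # illustrative set

    def __init__(self):
        self.seen: dict[str, set[str]] = {}  # block_hash -> kinds received

    def receive(self, kind: str, block_hash: str) -> bool:
        """Record a message; return True once the payload is actionable."""
        self.seen.setdefault(block_hash, set()).add(kind)
        return self.ready(block_hash)

    def ready(self, block_hash: str) -> bool:
        return self.REQUIRED <= self.seen.get(block_hash, set())

tracker = PayloadTracker()
# Out-of-order delivery: the consensus verdict arrives first.
print(tracker.receive("consensus_validated", "0x1"))
print(tracker.receive("new_payload", "0x1"))
```

With this shape, pipelining is no longer a correctness requirement: reordering the two `receive` calls changes nothing about when the payload becomes actionable.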
So yeah, it would be great to design a protocol that can be extended with new messages and new use cases without needing to design a whole new protocol, because today we have some restrictions. That's it. You said that for failure recovery the execution client can go to the consensus client and ask for the last two blocks — does the consensus client keep them? It can send a message saying "here is my head," and the consensus client can decide what to do. It can ignore the message, in which case the execution client will have to go to the network to pull the state or those two blocks. Yeah — the consensus client might store the last few blocks in memory or something, so you can opportunistically ask: "Hey, do you have this? You're local, so if you do, give it to me; if not, I'll go find it myself." Right, right — that kind of behavior could be more optimal than going straight to the network. Okay. Yeah, even if these blocks are not stored in memory, they are stored in the database, so they can be replayed. But not the bodies, right? You still have to go to the network for the bodies? No, the bodies are also stored, if I'm not mistaken. No, I mean, the bodies are certainly in the beacon block, but you could imagine using the execution engine to store the bodies locally so you don't duplicate them there. But we already have a state bloat problem — this feels like we're doubling it, or is that incorrect? If the consensus client and the execution client are both storing full blocks, full bodies, then the only difference is the state, which is only like a quarter or a third or whatever of our total state bloat problem. I guess at some point beacon chain clients will not store blocks beyond the weak subjectivity checkpoint. Okay, that makes sense. Yeah, the failure recovery case is one of the cases that depends on bi-directional communication. It could also apply to the sync process.
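The failure-recovery decision discussed here boils down to a small rule on the block-height gap. A hypothetical sketch of the consensus client's choice (the threshold and action names are illustrative, not specified anywhere):

```python
# Illustrative threshold from the discussion: a one- or two-block gap can
# be replayed locally instead of sending the execution client to the network.
MAX_LOCAL_REPLAY = 2

def recovery_action(execution_head: int, consensus_head: int) -> str:
    """On restart the execution client reports its head; the consensus
    client then decides how to close the gap."""
    gap = consensus_head - execution_head
    if gap <= 0:
        return "in_sync"
    if gap <= MAX_LOCAL_REPLAY:
        # Blocks are in the consensus client's database, so they can be
        # replayed to the execution client directly.
        return "replay_from_consensus_store"
    # Large gap: fall back to pulling state/blocks from the network.
    return "sync_from_network"

print(recovery_action(execution_head=10, consensus_head=12))
```

This is the scenario that depends on bi-directional communication: the execution client initiates with its status message, and the consensus client answers with one of these actions.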
If we need some rich scenarios for state sync, those could also rely on bi-directional communication. Right — I think the one thing missing from this document is potentially the messages required to communicate during state sync. Yeah, it's mentioned in the last section, but just briefly. Okay, gotcha. Just one small comment from me: the issue of each HTTP request requiring its own connection is avoidable — there are ways to have persistent HTTP connections. I'm not sure every library we use supports it, but... I see, I see. Okay. Yeah, so we could use WebSockets, which are already supported by... Yeah, I see. But even so, there's a keep-alive header in HTTP/1 that lets you keep the connection open until it times out. Oh, I see — cool, good point. I was thinking about HTTP/2, but... Yeah, but why not use WebSockets, since they're already supported by clients? My gut tells me that, given the problems you laid out, WebSockets seem like the way to go just because they're easy. Yes, you can do HTTP keep-alive — it's not too hard — but people aren't familiar with it, and libraries often don't support it out of the box; you have to fiddle some bits. WebSockets just do exactly what you need, out of the box. They keep the connection alive, they let you know when the connection dies, and the connection won't die randomly the way a timeout can in theory with HTTP keep-alive. And they give you bi-directional communication, so you don't have to run an HTTP server on both clients — one of them is a server, one of them is a client, that's how you establish the connection, and then it just runs from there. So maybe you... One last comment from me.
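The HTTP keep-alive point can be demonstrated concretely. A minimal sketch using only the Python standard library: with `protocol_version = "HTTP/1.1"` on the server and a single `http.client.HTTPConnection` on the client, several requests travel over one TCP connection instead of opening a new one each time:

```python
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection object == one persistent TCP connection.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
responses = []
for i in range(3):  # three requests, zero reconnects
    conn.request("POST", "/", body=f"req-{i}".encode())
    responses.append(conn.getresponse().read().decode())
conn.close()
server.shutdown()
server.server_close()
print(responses)
```

So per-request connection overhead is indeed avoidable with plain HTTP; the WebSocket argument on the call is less about raw capability and more about bi-directionality and out-of-the-box ergonomics.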
HTTP/2 connections are also persistent, but I agree that WebSockets are probably better for our use case. Maybe you mentioned this — another reason you might want bi-directional here is async processing of block insertion, so that the execution layer can tell you once it's done rather than you waiting, right? Yeah, that could be... it can be done with HTTP, with servers and advanced techniques, but anyway, yep. How long is that expected to take in the worst case? Like, how async is that — how long does it take to process one block? Okay, so like 250 milliseconds, order of magnitude? Yeah, maybe — the worst-case block is sometimes... Yeah, sure. I was wondering whether we have to worry about HTTP timeouts kicking in at two minutes or something. Yeah, no, I don't think so. So maybe that's not actually a design use case. So, is Gary's comment correct that the failure recovery case is the primary reason we want bi-directional? Yeah, but potentially other cases too that we haven't identified so far — and, yep, async responses and communication would be implemented over these bi-directional channels as well. Also, I don't want to get into this today, but if we're looking into redesigning the protocol, we might also want to look into encoding: JSON also adds overhead, so it might be better to use some binary encoding, and then we can ask whether to use SSZ or RLP or whatever else. Yeah, I think an important consideration is that we already have, like, two protocols that do basically the same thing — talking between components. The more we have, the more we increase the attack surface, and the trickier it becomes. And it's just annoying to write a client: if we have WebSockets, JSON-RPC, HTTP REST, and maybe gRPC, as somebody will soon mention — that's a burden for developers, right?
Am I understanding correctly that this conversation — WebSocket JSON-RPC or whatever — would replace the HTTP REST in the consensus client, or no? Those are the user-facing APIs, which are defined as RESTful HTTP; this is independent of that and should be discussed independently, but the fact remains that it already exists in the stack. Yeah. One question from me about the payload size — it's an open question, I don't know — wouldn't it be even better to keep it as JSON but just enable some compression, rather than going binary? I'm not sure which would produce smaller payloads. Yeah, also a good point. Because we deal with a lot of numbers, binary will almost certainly be smaller, just because JSON numbers are gigantic — they're strings. That being said, JSON does compress a lot, and you do gain a lot by compressing it. Do WebSockets support something like that? Okay, so it could be done on top of WebSockets. Yeah, WebSockets are just bytes on a wire — you can compress the JSON in WebSocket text messages. The support in servers was a bit hit and miss, but that was a few years ago. And it's surprising what you can get away with when you do JSON — very surprising. I used to do market data over JSON. Yeah, let's be careful about modifying the payload format into something opaque if we don't necessarily need to, so I'd want to see a few numbers before we swap that. Yeah, and the other thing is that in the standard API we've started using the Accept header to negotiate content types, so you can get an SSZ-formatted block or state, for example, with JSON as the default. That's really useful — being able to upgrade and say, hey, I support SSZ and I want to save some bandwidth or whatever. Does anyone have an argument against WebSockets here? I see a lot of people agreeing on JSON over WebSockets. Anybody disagree with that?
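The "few numbers" requested above are easy to produce. A quick standard-library comparison of a payload-like record of hex-encoded numbers as raw JSON, zlib-compressed JSON, and a packed fixed-width binary encoding (a rough stand-in for an SSZ-style layout, not actual SSZ):

```python
import json
import struct
import zlib

# A payload-like record: many 64-bit numbers, as in an execution payload.
values = list(range(10_000, 10_128))

as_json = json.dumps({"values": [hex(v) for v in values]}).encode()
as_json_zlib = zlib.compress(as_json)
as_binary = struct.pack(f"<{len(values)}Q", *values)  # fixed 8-byte ints

print("json:", len(as_json), "bytes")
print("json+zlib:", len(as_json_zlib), "bytes")
print("binary:", len(as_binary), "bytes")
```

As claimed on the call, both routes beat raw JSON on this kind of data; which of the two wins overall depends on the real payload mix, which is why measuring before changing the format is the right call.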
I think it's probably a good fit here. The only concern I've had with WebSockets in the past is that they don't always go through: I've always done WebSockets with a plain HTTP fallback, because inevitably you find firewalls and things that just don't do the WebSocket upgrade, or that kill off the connection regularly. But I don't think that's really a design consideration for us — that's more public-website-type stuff. I'd just echo what Jacek said and add one thing: I think we should work through the design considerations that Mikhail has laid out here and at least fully validate that we really need bi-directional before committing to taking on another protocol. So, WebSockets, I believe, are implemented in all major clients right now — for the JSON-RPC endpoint you can do WebSocket or HTTP. So for just the WebSocket part, I don't think we'd be adding any new technology. If we did REST over WebSocket or something else new on top of WebSockets, that might be new, but JSON-RPC over WebSockets is already there. On the execution engine. Yeah, that's true — on the execution engine only. Yeah, but the libraries used to implement JSON-RPC clients support WebSockets, I guess — Web3j, for example. Jacek, do y'all have WebSocket foundations in on the Nimbus eth1 side? Yeah, we do. I mean, it's not really a problem that way. It's more that it becomes an incredible zoo. Right — you want to use this beast? Then you have to have a WebSocket server running, and an HTTP REST server running, and an HTTP JSON-RPC server running, and a devp2p port and a libp2p port and a Discovery v4 port and a Discovery v5 port and... Well, it shouldn't have v4. Well, you know. So, Simon, that's six already, and I just rattled those off the top of my head. And that's where the complexity lies.
It's just a lot for even the user to manage and set up — imagine the firewall rules for everybody, and so on. So it's not really a question of which libraries are available, because there are a ton of them. But each library also brings in its own dependencies, its own configuration complexity, and the overhead of learning those frameworks — the ins and outs and details of WebSockets versus plain HTTP versus REST over HTTP, which has different framing, and so on. That's more what I'm talking about: the complexity overhead in general, not whether a library is available. I agree with the latter point you made. On the first point, though, I feel like even if we did HTTP here, you should still expose this on a different port than the public-facing JSON-RPC stuff — so at least for the firewall setup you will need a new port anyway, I believe. But I agree with the latter half of what you said: adding another frame does increase complexity. Yep — thanks, everyone, for your valuable input. I guess we should think more about it before making any decision, and look at the potential use cases we might see in the future before binding ourselves to one or another solution. One thing that can reduce the complexity of designing and implementing this protocol is that the two parties communicating via it trust each other — but I don't think it reduces it significantly. My two cents on Danny's question as to whether we need bi-directional or not: my gut feeling, from the conversations I've been overhearing, is that there are enough situations where we think it would be valuable that it feels like eventually we're going to need it.
It's one of those things where, sure, you could argue for any individual example that we could get away without bi-directional communication, but each would be a little better with it. And I feel there are enough of those that my background tells me it's just going to continue to pile up, and you'll end up making sacrifice after sacrifice if you don't have that bi-directional communication. Whether that's long-lived HTTP or WebSockets or whatever matters less, but bi-directional — again, just from what I've been overhearing — feels like the right way to go. I guess the question here is: isn't the future that the execution client becomes more and more minimal in its feature set? That it will more and more just be there for verifying blocks and maybe producing blocks? So I wonder whether that's really true — if the long term is different, should users really rely long-term on the eth1 REST API, for example, to get their data? No — I mean, they should use one API, and that's probably the eth2 API, because that's the only one that can give you consensus. Yeah, but there's a bunch of APIs exposed by execution clients, and I guess a lot of services are using them. It won't be easy to replace one with another, or to move to another endpoint and another protocol. I mean, I think we'll just have to do it long-term, because if we don't, it's always going to be pretty weird. Do you really think that in the long term users will want to install two separate pieces of software, configure them, and so on? One of them should be really, really minimal, in my opinion, and be more like a library. What I've envisioned — which I know not everybody agrees with — is that over time, over many years, we will probably move to more pieces, not fewer, but they will be packaged better.
And so from an end user's perspective, you double-click the installer and it installs three pieces of software — three services on your host. You don't know that, because you're a user and you just double-click the thing. But there are three pieces of software, and one of them is basically a reverse proxy: the thing you connect to as a user, which then connects to the two backend pieces. This would be the more traditional architecture, and I think packaging matters a lot there. We do want it to package into a single double-click for users, but more enterprise-focused customers will benefit from having individual services they can talk to separately. Yeah, I agree. I guess I disagree in that I don't see why, except for compatibility reasons, you'd want to talk to the eth1 client directly in the future — because you don't care about some random state, you always care about the relevant state, the consensus state, when you ask questions about Ethereum. So it doesn't make much sense to me to ask the eth1 client in the future, except if that's the only thing you can do because your software was written before this existed. I think the primary reason I would want to do that is that each client has feature sets added on beyond the base feature set. Your execution client may have a particular feature that you want, like tracing, that other clients don't. And because it's not a standard feature, you can't access it through the consensus client — you need to go directly to your execution client, because it added a special feature just for you. Nethermind, for example, has plugins, so I can write a plugin for Nethermind; the consensus client knows nothing about that, and if I want to talk to it, I have to go directly to Nethermind. Yeah — and the execution engine does know what the head is.
So for many of the things you do on a query, it does have an idea of consensus in that sense. But again, I want to package this up nicely and have a standard proxy to get to it all, so the common end user doesn't really have to think about it. Yeah, right. So every execution client will be accompanied by a consensus client, and vice versa. Yeah — and for the use cases where we need the consensus data as well, there could be a unified frontend that requests data from the consensus client, combines it with data from the execution client — potentially, this is one of the possible design solutions — and returns the combined data to the user. So it would be one interface that does all the things. Okay, let's stop here. Thanks, everyone, for this discussion. Do we have any other research updates? Let's move to the plans for Q3. This is the first day of Q3, so I think it's worth speaking about plans a little. We're expecting London in a month, and Altair in a couple of months, or something like that. With all this in mind, we're expecting more focus on the merge during this quarter. Regarding the plans: so far, the beacon chain consensus specs are in a feature-complete state, so there will definitely be some refinements, bug fixes, and some additions on the networking spec side, and the API may change as well, but in general we have the design and we have the transition process. For Q3 it makes sense to focus on the execution client specs — the EIP — and on the consensus API. That's what we're going to do this quarter. Also, I think we will have more testnets coming in the second half of the quarter. So that's the high-level overview of the plans. Yeah, I think.
Yeah — the consensus specs will also rebase on Altair relatively soon, and integrate the London changes into the execution payload, which I think will include something related to 1559. Also, figuring out how we're going to be testing: I think we already have consensus-side test vectors being generated; we'll be extending that, and figuring out how the execution layer leverages the existing tests and extends them in this new context. I think that's something important to figure out in Q3. Yeah, agreed — we need that. By default, all the EVM stuff should just continue and operate independently, but I think we need to touch it and make sure we're happy with the way things are structured. Also, at some point we're going to stop holding the separate merge calls — probably one or two more after this one — and then we'll keep discussing the merge during the All Core Devs and the proof-of-stake consensus calls, depending on which part is being discussed. So those are the plans. Micah and I have been working on a very high-level checklist of all the things, which we'll share soon and probably put in the PM repo. Okay, any questions or suggestions about the plans? Any spec discussions? Okay, any other discussions? Does anybody want to say anything else before we wrap up? Okay, thanks, everyone. See you in the various places soon. I will not make the next call, the proof-of-stake call, so I'll see you after that. Thanks, everyone, for coming. Thank you. Bye-bye. Okay, thanks, everyone. Thank you. Hey Danny — is it possible you already left? I was wondering if it's possible to have the same Zoom server for both calls, since they're back to back — just to be a little more convenient. Okay — actually, this is a link provided by us, whereas the call that Danny manages uses one from the Ethereum Foundation, so they may have different links.
It's not a huge deal if it's complicated or hard — it would just be convenient. Right — this call will be deprecated soon anyway, after one or two more. Okay, I do have the new link, which I believe Danny sent in the morning email today. So — thank you, Mikhail. Okay, thanks. Bye-bye. Thank you. See you later.