Let's just get started. Welcome to the Merge implementers' call, number six. Today should be pretty straightforward. We'll start with implementation updates as usual. There is an update from my side: I'm working on the transition process implementation in Teku, since the PR has been merged into the main repo. I expect to finish the implementation at the end of this week or early next week. The next challenge will be to set up, locally, a proof-of-work chain alongside the beacon chain, do all the steps that we already did on mainnet, and finish with the merge transition process. I'll likely reuse the scripts that we have from Rayonism, which is great, and use the Geth PR, which already contains the transition logic. So we'll see; that's the update from my side. Does anybody else have implementation updates? It would make sense, since everybody's working on the corresponding hard forks. OK, cool. Any questions here? Great. So let's move to the research updates. I have a couple of things to mention and probably discuss. We have Justin on the call. Thanks for joining. I can give quick context: PR 2472. Justin is known for going through, polishing, and merging specs, making sure they conform in terms of naming conventions, structure, and all that kind of stuff. He's done that on the recent merge specs. There has been a bunch of review, which I just did; it's now passing CI and has incorporated the feedback, so I gave it a plus one this morning. If anyone wants to take a look at it, please do; otherwise I will probably merge it tomorrow. Yeah, cool. Yep, I'll drop the PR link just in case. There is some renaming and reordering of the fields in the execution payload; that's the biggest change, I guess, though not a substantial one.
And some renaming of the methods. OK, cool. The next one is the RANDAO PR. There is a PR which adds RANDAO to the execution payload, as discussed on the previous call. It passes the RANDAO mix into the difficulty field of the execution block. Since difficulty is already present in the EVM context, it can then be read via the DIFFICULTY opcode, and this is how RANDAO will be exposed in the EVM for applications. With this change we will support, or better to say, will not break, existing applications that use difficulty as a source of randomness. After the merge, as cleanup work, we will likely want to do it the following way: the RANDAO value will be passed directly into the EVM context and will not be embedded in the execution payload. A reasonable question here: what if we do this at the point of the merge, in the minimal version? The counter-argument is that we don't want any changes in the EVM in this first merge iteration. But anyway, how much of a big deal would it be to embed RANDAO directly into the EVM context now? I mean, my argument would be that we have to change difficulty anyway; it has to go, because it needs a new context. So defining it as a constant, or taking it as a value off of the RPC, has probably a negligible complexity difference, and we might as well do it at this point. Is that the question? Yeah, that's the question: is it a big deal to send it directly to the EVM? Yeah, I mean, the execution layer certainly gets context and directives, so I don't think it's a big deal, but I'm not maintaining the software on that side. Yeah, this is the question, mostly to the execution client implementers: how difficult is it to change? Well, again, it will have to be changed anyway.
It might be like: if post-merge, difficulty equals one; or, if post-merge, difficulty equals this value that's been passed in. It would be good to hear from Tomasz or Ryan. Because we're changing it already, I think it's fine to do it at the same time. Yeah, the difference is that you don't need to change the EVM at all with this route, by just embedding the RANDAO into the difficulty field. Why? I don't think it's a big amount of work either; probably some testing to cover this case. And Tomasz says in the chat that it sounds OK. So, I don't know if I mentioned this to you, Mikhail, but you said at some point you wanted to change it to something else. Or is embedding it in the difficulty field kind of the final place you expect it to rest? No, it's not the final place. This is just to avoid dealing with the EVM at the beginning; it's more a workaround than the final solution. Why is this a workaround? I'm confused. Yeah, why is this not final? You mean using difficulty? I mean, in my mind it was RANDAO in the place of difficulty in the payload. Right. Is there some reason we would want to expose the RANDAO in the EVM through some other mechanism besides that opcode? I mean, does it make sense? Yeah, is there any reason we would ever have to return to this? Let's say we set, I forget which opcode DIFFICULTY is, but that opcode now just returns the RANDAO value. Are we done forever? We never have to come back to this; the EVM now has a random number generator. Yeah, I mean, if that value were hardened, say with a VDF or something else, then it would be swapped for that new hardened value. But yeah, it's 256 bits; it's not clipped. It's the full RANDAO mix. Yeah, we're just talking about the way the RANDAO mix is exposed by the EVM, the source of it: is it going to be part of the execution payload, or is it going to be a side value that is used by the EVM without being put into the block?
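The two options under discussion can be sketched side by side. This is an illustrative Python sketch, not actual client code; the names `BlockContext`, `op_difficulty`, and the transition height are assumptions made for the example. Option (a) keeps the EVM untouched by carrying the RANDAO mix in the payload's difficulty field; option (b), the eventual cleanup, would pass it as a separate context value.

```python
from dataclasses import dataclass


@dataclass
class BlockContext:
    """Illustrative EVM block context; field names are assumptions."""
    number: int
    difficulty: int      # option (a): post-merge this field carries the RANDAO mix
    randao_mix: int = 0  # option (b): RANDAO supplied as a separate context slot


MERGE_BLOCK = 100        # placeholder transition height for the example


def op_difficulty(ctx: BlockContext) -> int:
    """Value the DIFFICULTY opcode would push on the stack."""
    if ctx.number >= MERGE_BLOCK:
        # Option (b) reads the dedicated slot; under option (a) the
        # difficulty field itself already holds the RANDAO mix, so the
        # opcode logic is unchanged.
        return ctx.randao_mix if ctx.randao_mix else ctx.difficulty
    return ctx.difficulty  # pre-merge: real proof-of-work difficulty
```

Either way the contract-visible behavior is the same, which is why the complexity difference between the two routes is argued to be negligible.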
I think it makes sense to put it in the payload. I don't see an issue with that, even as a final destination. I mean, the issue is that people start depending on it being the RANDAO and not just a random value, because the RANDAO is a specified source of randomness. Is that a problem? Can we use a different source later on? No, I don't think that's a problem. We can call it RANDAO anyway and use another source of randomness later; RANDAO is kind of an abstract thing here. Because if we just want a random value, we could even take the hash of the RANDAO or whatever, right? But if we say that it is the RANDAO, then people will start using it for other correlations as well, not just as a random value. Wait, you can't do anything else with RANDAO, can you? It's already the RANDAO mix, so it's already a hash and XOR. It's not actually the signature, so it couldn't be used for, say, signature verification or some other thing. I don't see that danger. Even if it were the signature, there's nothing in it, in my opinion; it's just the signature over the slot. Well, people might do some sort of validation, like on-chain validation of stuff, and then you take that away and they're like: oh, the RANDAO is now this other random number, and my contract broke because I was using that to verify blocks. Yeah, I mean, obviously that's really stupid. I'd be surprised if you could do any on-chain validation, because it doesn't contain anything relevant. If it were the signature, you could gate your entire contract on the RANDAO verifying and then run all the logic after that, as a really stupid mechanism. But it's not a signature, so... Right, right. It's the XOR of the hash with the previous RANDAO mix, so there's not a weird... Well, someone can always try to figure out something weird to do, but...
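The "hash and XOR" point refers to how the beacon chain updates its RANDAO accumulator: each proposer's reveal (a BLS signature) is hashed and XORed into the running mix, so the mix itself carries no verifiable signature structure. A minimal Python sketch of that update:

```python
import hashlib


def update_randao_mix(mix: bytes, randao_reveal: bytes) -> bytes:
    """new_mix = old_mix XOR hash(reveal), as in the beacon chain's
    process_randao. The reveal is a BLS signature, but the resulting
    mix is just 32 opaque bytes with no structure left to verify."""
    digest = hashlib.sha256(randao_reveal).digest()
    return bytes(a ^ b for a, b in zip(mix, digest))
```

Because XOR is an involution, folding the same reveal in twice returns the previous mix, which is another way of seeing that the mix is a pure accumulator rather than a signature over anything.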
I just don't see why you would do it, because there's nothing useful you can do with it. It's not a signature over a block; it doesn't sign any meaningful data. And the value today is not even a signature. Right, but you can trace it back to a signature if you want. You could make a proof back to the signatures, but they sign something completely useless. So it's not like... We could call it random. Well, I think that's the point a little bit. The fact that we can't figure out in five minutes on a call what people will use it for doesn't mean that it won't get used that way if we say that it is the RANDAO. That's all I'm saying. To explain: what if I made a contract whose purpose was just to upload little bits of beacon blocks and states, just to get that information on-chain, and I said: oh, you know what, don't worry about uploading the RANDAO, because we already have it; it's exactly what the opcode returns. It's a contrived example, but it's an example. Yeah, no, that's fair. That's definitely fair. I would call it RANDOM. You mean the opcode, or... Like, if DIFFICULTY is going to get renamed in any chain, I would call it RANDOM. Okay. Okay, so what I'm a bit worried about here is that we're changing difficulty, which was much less than 32 bytes, and it will be 32 bytes after the merge. So it will need to be checked that there are no overflows in the execution clients. I believe... I think the execution clients, after the last call, all got back and said it's 256 bits. I can go verify that, but no one said it was not. But if they're doing a total difficulty calculation and summing it for some reason, they could hit weird overflows, even though they shouldn't be using total difficulty anywhere meaningful after that point.
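The overflow concern can be made concrete with a sketch. This is illustrative Python, not client code: if the difficulty field carries a full 256-bit RANDAO mix and a client keeps summing it into a fixed-width total-difficulty accumulator, it can overflow within a couple of blocks, which is part of the argument for freezing difficulty at zero post-merge.

```python
MAX_U256 = 2**256 - 1


def add_total_difficulty(total: int, block_difficulty: int) -> int:
    """Accumulate total difficulty into a hypothetical 256-bit counter,
    surfacing the overflow instead of silently wrapping."""
    new_total = total + block_difficulty
    if new_total > MAX_U256:
        raise OverflowError("total difficulty exceeds 256 bits")
    return new_total
```

With post-merge difficulty fixed at 0, the total stays frozen at its terminal value, so neither the accumulator nor a heaviest-chain comparison can misbehave.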
I see, you're saying there's a potential source of bugs during the merge that we need to watch out for. Right, yeah, this is a potential source of bugs. If we set difficulty to zero or to one, there would be no such source of bugs, but then we would have to pass the RANDAO outside of the execution payload. That's the only thing. Also, if you set difficulty to one, you might accidentally still kind of use the longest-chain rule, and it would be correct some of the time. And you don't want to accidentally be correct some of the time, because then you have an attack vector. All right, you can set it to zero anyway. Okay, so this is the PR that substitutes difficulty with the RANDAO value, and we can rename the opcode after the merge, right? I mean, renaming the opcode is a social thing that has nothing to do with the code; it's just a variable name in code somewhere. So I would. Yeah, and then Solidity. I just want to highlight that this probably needs to make its way into an EIP, but that's maybe a whole separate conversation about how it shows up in that process. Right, this is the next item on the agenda: how do we document all these changes? Great. Okay, so if we're finished with the RANDAO topic: if anybody wants to look into the PR, it will be very much appreciated. I think if there are no blockers we can merge it next week, so please take a look if you have the time. Yep, I think we can move on to the specs for the execution layer. Tim Beiko has raised a very reasonable question. Yeah, I guess one thing is figuring out where we want to have the specs for this. On ETH1 currently, EIPs is the best place. There's some work happening on an ETH2-style spec for ETH1, but it's not ready yet, and I wouldn't necessarily want to block on that.
Even more basic than that, though: we probably need just a sort of list of changes or open questions that we need to answer for ETH1. I think that would be useful, obviously for us, to have a broad picture of all the stuff we need to do. And it's also something that will become increasingly useful as the community asks "when merge?", to have some list of: well, these are the things we need to solve. So I'm happy to help put that together, but I'm curious what people feel is the best format for this. Does that generally make sense? Yeah, I mean, right now the Ethereum specs are kind of the summation of the Yellow Paper and EIPs and so on. And that's attempted to be captured in the ETH1.0-specs repo. So even if there's no executable spec there, maybe that should still be where the execution layer specs ultimately go. Assuming we don't have some sort of executable spec on that side, I would argue for a selection of EIPs that dictate this change and anything that creeps into the EVM, and then putting them into a fork in ETH1.0-specs. It's probably easier said than done, though; it could be quite messy. And we did do an informational EIP for the beacon chain launch. It might make sense to have an informational EIP that just explains and locks down versions of stuff. But I'm speaking outside of my domain at that point. Yeah, I think we can have an informational or meta EIP that's kind of a description. And the one thing that makes this different from a regular hard fork is that there are a lot of non-consensus changes that I think are important to document. All the stuff around syncing, for example: it's obviously a massive part of the merge, but it's not actually a hard fork change. So those are also the type of things we want to make sure we have a list for.
And those can all be EIPs as well, right? We have EIPs for some of the syncing protocols, definitely not everything, but we can open networking EIPs for this stuff. So, if that's the format people want to use until we have something better, we can just have it in the ETH1 specs repo and use EIPs as the templates for the various changes. I'll keep this brief, since I think everybody here has already heard my arguments and we don't need to spend too much time on it. EIPs is a specifications repository; it is a place to keep technical specifications, not a place to document things. It is not a documentation repository; there are far better tools for documentation that we should be using. I'm a huge fan of documenting all this and getting everything written down. I'm not arguing that we should not document it; I'm just suggesting the EIPs repository is not the right place for non-technical documentation. If you want to write a spec, just a technical spec, EIPs is a great place. Everything else: HackMD, ethereum.org, a wiki, GitHub, anything. So I think we can use the ETH1 specs repo for this broad documentation. If you think it doesn't make sense to have a meta or informational EIP, fine, we can put that in the ETH1 specs repo. But this kind of documentation is a wrapper around several technical changes, right? Like what we do about difficulty, what we do about syncing, and so on. And then, do we want each of those technical changes to have an associated EIP? If people want to do that, I think that's fine. But it's good to know already, because maybe we can start drafting some of these EIPs and putting together something in the ETH1 specs repo that says: hey, this is the merge, here are the various EIPs, here are the things we still need to figure out but haven't gotten around to writing an EIP for.
As a first step, Tim, maybe you and others should black-box the functionality from the beacon chain and then enumerate everything that we know is changing, both what we've already specified as changing and what we know we will be specifying as changing, even on sync, opcodes, that kind of stuff. And then, once we've enumerated it all, figure out the home for the different things. Okay, that sounds good. I can follow up with you and Mikhail and others to try to get a first draft of that. Great. Yep. I think we need a kind of timeline with checkboxes, right? Yeah, I'm arguing for the checkboxes to avoid the timeline. I think we did this fairly well with 1559, where we had this checklist. There will be increasing pressure at the worst time, when stuff is like 50% ready; people will start asking "when merge?", and being able to say, look, here are ten things we still need to figure out, has value both for us, first and foremost, and for the community. There's value in seeing: oh, the consensus changes are done, but sync is broken, or JSON-RPC is broken, or whatever, right? Yeah, I definitely would not put dates on that document, and would use it as a shield against having to provide dates. Okay, I see. As for the EIP process, I don't think it makes much sense to put every different part of the execution client changes into a separate EIP. Probably we can use the approach taken in the Clique EIP, which just describes all the things in one document. What do you think? I mean, once we figure out the sync process... Yeah, the sync process will definitely be a separate spec. But all the consensus changes, maybe in one EIP? Yeah. Yeah, I don't have a strong opinion for or against that. Well, the consensus changes should be in one place, I guess.
The argument for having lots of small EIPs is that they tend to go a lot smoother, because each change is small. What happens when you have a large, monolithic EIP is that you get a bunch of bikeshedding on some minor piece of it, and then the entire EIP gets stuck in the mud. Also, because conversations tend to be centralized around a single discussions-to link, you get this massive thread that everybody unsubscribes from, because there's just too much talk about that one bikeshedding piece, and it's very hard to find the actual discussion. So my recommendation is to split up into as many smaller EIPs as possible, because it really does make the process go a lot smoother. EIPs that are a page long go through almost instantly, whereas EIPs that are ten pages take way more than ten times as long to get through. Yeah. I think I would probably agree, but I want to see which items actually need to make it into an EIP before we decide. For example, this difficulty change being its own EIP makes sense; it's probably a one-pager and it's pretty easy. But it's unclear to me yet which things are going to make it into an EIP. Yeah, figuring out where to draw the line is definitely an art. Again, I just recommend caution; I've seen a lot of people try to do monolithic EIPs, and I don't think I've ever seen it go well. We'll call on the artistry of Micah to give us a hand. The artistry of Micah will be: make this shorter. Okay. Yeah. And as for the moment when it makes sense to start writing those EIPs, I think we should figure out the transition process first, to make sure that nothing substantial will change. Also, it would be great to get the sync process figured out too.
Then we can put together all the changes in the consensus on the execution side and see what can be decomposed into separate EIPs and what really cannot. These are just my thoughts on when and how. Probably we can start without waiting for the sync process to be figured out, and change things in the EIP drafts later on. Yeah, I think so. Like Danny said, figuring out what all the big themes are will give us a good picture of the ordering and when it's the right time to actually formalize different parts. So we can get these checkboxes now, right? Yep. So we get the document with checkboxes in the short term, and then add links beside the checkboxes, right? Yeah, and we already have some links beside them, actually. Okay, and then once the transition process is prototyped, we can start thinking about EIPs, right? Yeah. I had this research doc that I kept up to date with the list of leftovers; this difficulty thing was one of the last leftovers. I can double-check, but I think now the transition and the sync seem to be all that's left; I'll double-check. Check it out, then; we're going to use it as a source for this document with checkboxes. Okay. And don't forget the API. Right, you're right, the API too. Okay. Yeah, I think another big item is probably testing: how test generation looks on this unified front, whether everything is separated into these layers for more of the unit testing, and then what things like Hive and other integration-type tests look like. But let's maybe not solve that one today. Yeah, definitely. When I was saying that this difficulty thing is the last one, or probably the last one, I was referring to the research open questions. Right, right.
So it seems we don't have any more of those. Okay, if we're finished here, we can move to open discussions. By the way, I forgot to ask: does anybody have any other research updates? If not, I have a small one. I've started to write the consensus API improvement proposal; this document is about improving the communication protocol. I'm planning to finish it next week and share it with everyone, though it will probably take more time, so we'll see. This is in progress too. Okay, any other discussions? Anything else? Any other announcements? Okay. Thanks, everyone, for coming. It was pretty short. Oh, that's nice. Yep. So we have Altair upcoming, and I'd like to mention that at some point we need to rebase onto Altair. Not everyone here is affected, but it will affect those that are implementing the Merge, so we should try to time that so that we can move in sync. Yeah, sure. Thanks for that. There will be some other changes, like cleanups and probably a new consensus API, to catch up with after Altair. Yeah, also regarding testing: my plan is to finish the transition process and then get back to the work on the spec and tests in particular. We'll need some kind of test for the transition process as well, which will involve both the consensus and execution sides. It's going to be an interesting thing to do. We have a bit of these fork integration tests for Altair, from Phase 0 to Altair, so we can at least use some of that as a basis, but how exactly we integrate or stub the execution side in those tests, we'll have to figure out. Okay. Thanks, everyone. We'll see you in 25 minutes. Thanks, everyone. Thank you. Was this the call where, last week or two weeks ago, you were arguing about the API, or was that the ETH2 call? The ETH2 call. Okay.