Hi. Okay, one moment and we're starting. Let me just send the agenda through the chat. Danny was on the way — he was in a separate Zoom room, but he's moving over. Oh, cool. That's my fault — I forgot that there is a room shared in the invitation, so I thought there was no one there. Okay, hi everybody, let's start. Welcome to the merge implementers' call number two. There are apparently some problems with Berlin, right? So probably some Ethereum devs can't attend this call, but let's just go through the agenda and discuss the items; some we can't cover without them. So let's start with the first one. We have this new terminology. The key replacement here is that we replaced the term "application" with "execution", so there is the execution layer instead of the application layer. This is to not confuse people with smart contracts and the applications using them — the applications built on top of mainnet. That's the purpose of it. Also, the term "layer" is arguably not the best one for execution and consensus, because they're not actually layered. We can think more about it here — I don't want to spend much time discussing this, but it's probably better to call them subsystems or engines or whatever. If people have any ideas, just drop them in Discord and let's discuss it offline. So, anything on the terminology? Any questions here? Let's just move on. So, the execution discussion. My initial idea was just to go through the key parts of the execution stuff and ask for updates, or for understanding, or questions, particularly if there are mainnet developers here. So, any questions on the communication protocol? Where's the most up-to-date link to the communication protocol — is that the one, Ryan, that you're maintaining? Yeah, right. That's the one.
I also put the link in the previous one — the new link that Ryan put at the top of the previous document. That's the latest one. Anyway, this is the JSON-RPC for Rayonism, but it's probably not for production, so we'll get to that discussion later anyway. Okay, so who has reviewed it, or who has any thoughts, suggestions, or questions regarding that? By communication protocol, do you mean the eth1-to-eth2 communication by RPC, or anything else? Yeah, that's it — any questions on any of it? I have a question. I'm not entirely sure how to handle potential issues and errors in this protocol. For example, if assemble block fails, or new block fails, or set head fails — because either the payload is wrong or some internal error happened — I'm not sure how to handle it. The spec doesn't specify it. Anything about error handling? Yeah, right. So there are statuses for finalize block and set head, and obviously for new block — I mean that you can return false if it hasn't been done correctly. Yeah, you're right, but assemble block doesn't have this kind of status, right? Yeah, good question. We should probably add some kind of status there as well, so the result will be an object with a status inside it. Right. Especially because you can currently specify the parent hash, you can point to something that's just bad, so there's definitely a failure case there. Or something that doesn't exist. Also, the other option is to use the errors in JSON-RPC as we have them today, right? You mean the result and error fields as in the spec? Yeah. Okay, I see the problem. One thing, because I'm not sure if it's specified — it's not explicitly specified; assemble block doesn't include it: after assemble block, will new block be called, or do we expect that new block won't be called and the block can be set just by set head?
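The JSON-RPC error option raised above could look like the following minimal sketch. The method name `consensus_assembleBlock` and the error code are illustrative assumptions, not taken from the spec:

```python
import json

# Hypothetical JSON-RPC 2.0 error response an execution engine might
# return when assemble block is given an unknown or bad parent hash.
# -32001 is an assumed application-defined code (the -32000..-32099
# range is reserved for implementation-defined server errors).
UNKNOWN_PARENT = -32001

def assemble_block_error(request_id, parent_hash):
    """Build a JSON-RPC 2.0 error object for a failed block assembly."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "error": {
            "code": UNKNOWN_PARENT,
            "message": f"unknown parent hash {parent_hash}",
        },
    }

resp = assemble_block_error(1, "0xdeadbeef")
print(json.dumps(resp))
```

The alternative discussed — returning a result object with a status field — would instead put `{"status": false, ...}` in the `result` member; either way the caller gets an explicit failure signal.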
We generally expect that new block will be called, because there is a state transition happening on the consensus side when the block is assembled and proposed — the state transition is triggered, and it will trigger the call to the new block method. Okay, so it's assemble block, then new block. Right. Go ahead, Danny. I would presume getWork does not add anything into the block tree today, right? Only if they find a solution does something get added into the block tree. So it's kind of similar logic — that's fine. Yeah, but in proof-of-authority chains, you would generate the block and add it immediately. But I understand your point here, because there is a difference in time: when you keep preparing blocks with assemble block, you can potentially even call it many times, right, with the same parent. Can you ever? You could, presumably. There's not an immediately obvious use case for that. You might pre-cache a block, to avoid executing the same transactions once again. Right — you could imagine doing it slightly early to have something ready to broadcast and then doing it again very close to the time of broadcast to see if you've got a better coinbase output on the MEV side. But even then, that's not a very clearly good strategy. Just a potential strategy. I have a question. Is the parent hash actually valuable — is it worth the complexity of being able to point to anything arbitrary to build on, rather than just building on the head? I mean, presumably the beacon node keeps the execution engine in sync with what it thinks is the current head, and so if there were a reorg, you would trigger that and then call assemble block.
I'm just asking primarily: you definitely know the head and you can tell it the parent hash to build on, but then you're opening up a functionality requirement on the execution engine to be able to build on arbitrary heads, which I don't know is worth the complexity. Um, that's a good question. It could be the case that an arbitrary block becomes the head afterwards. I can imagine this kind of stuff — a bit of racing between the new head and assemble block. So what if the head has changed while the block was being assembled? What should happen here? You could even imagine the head being changed while the block is being proposed. How will this be handled by the beacon node? I mean, at some point the beacon node has to make a decision about what it thinks the head is and assemble the block based on that. But the idea would be: I began assembling a block, some other subsystem signals that there's a new head halfway through me assembling the block. I asked the execution engine for the transaction payload, but it's gotten a trigger from somewhere else that there's a new head. Now I'm out of sync on that, and this protects against that case. Yes, something like that. Yeah, consistency cases. Oh, go ahead. No, please. Consistency cases are in general really important. Also consider the case where there are multiple beacon nodes talking to the same eth1 node. Okay. Yeah, so I would say we need to actually think about any concurrent calls to those RPCs. What happens if we have one set head and then a second set head? We should probably queue them — or the last one wins, for example, or something like that in the implementations. That's important. Finalize block is probably not that important; new block is probably not that important either. But the relationship between set head and some of the other calls may be important. Yep. So my intuition is that all these messages should be processed sequentially.
But yeah, set head and new block are causally dependent, so they must be processed sequentially; others could be processed concurrently. Not sure if in all cases it could be done concurrently. Okay. Yeah, I definitely see how assemble block and set head — depending on whether there are different subsystems in the beacon chain — could get out of sync, and thus the parent hash definitely helps immediately. It's a nice quick fix without having to think about things deeper, but it might open up complexities on the execution engine side. But I'm not sure. Okay. Okay, so assemble block should have some status — we started from the error message. Okay, so let's say... Let me think about it and continue refining it. Okay, anything else here before we move to fork choice and chain management? Yeah, I just want to highlight again: with that parent hash in there, and there not really being any bounds on it, an assemble block call could trigger an arbitrary — not reorg, because it wouldn't be changing the head — but you have to go and put yourself into this arbitrary different state to build a block. And so there might be complexity there, and it's worth people investigating that over the next week or two, so we can talk about it again next time. For now, I'll just raise an error whenever the consistency check fails, and from there we can change implementations to actually handle the case. I have a question. If we have finalize block, this probably affects what parent hashes can be supplied to new block, doesn't it? We cannot reorg finalized blocks, so we do have some constraints on this parent hash. Right, right — so arbitrary within the block sub-tree since finality. Yeah, but I would not enforce these checks on the execution engine, because this is the responsibility of consensus.
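The "process sequentially" intuition above can be sketched as a single FIFO queue in front of the engine, so that a set head can never be applied before the new block it depends on. This is an editorial sketch, not from the spec; method names mirror the RPC calls discussed:

```python
from queue import Queue
from threading import Thread

class EngineDispatcher:
    """Serialize causally dependent engine calls through one FIFO queue."""

    def __init__(self):
        self.queue = Queue()
        self.log = []  # operations applied to chain state, in order
        self.worker = Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        while True:
            op, arg = self.queue.get()
            if op is None:          # shutdown sentinel
                break
            self.log.append((op, arg))  # a real engine would mutate state here
            self.queue.task_done()

    def new_block(self, block_hash):
        self.queue.put(("new_block", block_hash))

    def set_head(self, block_hash):
        self.queue.put(("set_head", block_hash))

    def stop(self):
        self.queue.put((None, None))
        self.worker.join()

d = EngineDispatcher()
d.new_block("0xaa")   # arrives first, applied first
d.set_head("0xaa")    # cannot overtake the new_block above
d.stop()
```

The "last one wins" alternative for repeated set head calls would instead coalesce queued entries, but the ordering guarantee between new block and its dependent set head is the important part.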
And in some cases there will probably be the case where consensus switches from one finalized checkpoint to a conflicting one — a case of some forks or whatever. So locally, you'd never revert finality — it can't happen locally. Well, it can happen with manual intervention, right? You might end up on the wrong fork and then you change it. Yeah, but the node would never do it on its own. Yep. So I have a question about finalize block: how much of the chain might not be finalized yet? I'm not aware of that, and it's important for state management, pruning, implementations, things like that. That's the problem raised here, as far as I'm aware, from the eth1 side: practically, how big could this un-finalized chain be that we could reorganize? Right — so in standard operation, two epochs' worth, which is 64 blocks of depth. That's normal operation, so in the happy case you get pruning at reasonable depths, but you cannot aggressively prune if you're in a time of non-finality, and you could go days without finality. So there's definitely a variance that has to be handled in pruning. The risk regarding non-finalized state is what happened during Medalla: for a couple of days the chain didn't finalize and we had many, many forks. In that case, technically, you can store all the forks in your client because maybe they will become valid, but if one fork has only a few votes, that might not be worth it. The problem is what you do if there's a new block that builds on one of those forks. You kind of have to validate that block, because you also later need to see if attestations to that block are valid. That, I think, is the problem. But I would say that on mainnet we should definitely be prepared for longer non-finality periods — hopefully not days — so maybe we can find a more reasonable compromise there.
Days would be pretty extreme — that would be a pretty crazy failure if we ran into it. Okay. So, anything else with regard to the communication protocol between eth1 and eth2? Okay, cool. Let's just move to fork choice and chain management. I know that people started to investigate how hard it would be to make the fork choice pluggable and how big an impact it has on modifying the chain management of their clients. I just wanted to ask about any updates and thoughts here. Maybe I will start from the Nethermind side. It's actually pretty easy — we had it fairly well broken down already. The problem, which I haven't investigated much, might be later syncing the network up to the head and then starting it. So integrating the syncing with the fork choice management itself might be harder than just starting from the head. For what we want for the hackathon, it's fairly easy. That's Nethermind. Here we also have this total-difficulty rule for the beginning, and there's a part of the graph for it. Yeah. I guess I can give an update if there are no folks from Geth on the call. Actually, Peter and Guillaume — oh, there is Peter. Peter unmuted and muted a couple of times; we can't hear you, Peter, if you've been speaking. Yeah, sorry about that. So, if the question was what Geth's progress on these things is: yesterday, or two days ago, we had a meeting with Proto and kind of ran through various stuff. I guess the conclusion was that if you need something for Monday, then probably the closest we can give you is Guillaume's PR, which just kind of hacks things in — it essentially directly does an insert into the blockchain and hacks around all the internals. We started working on essentially a new consensus engine, which does the whole new fork choice rule, but we haven't merged that in yet, and as far as I know it's not finalized yet either.
And I've also started working on the synchronization, but I'm kind of sidetracked a bit, because in order to make the synchronization work I also need to change some other parts of production Geth. And I'm not super keen on hacking stuff together in production parts, so I want to do it properly, which means it's going to take a bit. Those are the updates. Great, thanks, Peter. Peter, if I may ask — the PR you're talking about, if this is the PR I've seen, is following the old spec. It's not a big deal, but the JSON-RPC interface is different from the new spec. It's not following the old spec, no. Yeah, it needs some updates, but that will be done after this call. Okay. So, any questions on the chain management and the fork choice? Okay, so the sync process — there has been a kind of high-level proposal in the high-level design doc we discussed on the previous call for how to download the state and do the block processing. If people have any assessment of whether it's viable or not, or any kind of further input and things to discuss, we can do it right now. Okay, cool. Let's just assume that it will work and talk about it later. I think we'll assume that people haven't quite gotten there yet, so we should probably bring that up again in a while. Right, agreed — let's just assume for now that it may work. Yeah, there was a question in the chat, in the Discord — I don't remember where exactly, probably Discord: which part would decide on the gas limit and target voting? How will this happen after the merge? My basic thought is that it just doesn't change: the execution engine has this voting mechanism, and every proposer will be able to use it as miners do now. Any other opinions and thoughts on that?
Yeah, I'd say by default it remains exactly the same, which is that the block producer — regardless of whether it's a miner, proposer, or validator — does that. Similarly to how, with EIP-1559 post-merge, the block producer would be responsible for handling the base fee for transactions and figuring that out in a similar manner. I don't know, on eth1 clients today, how does one access that? It's not in the getWork function call, right? Is it some sort of configuration setting on the client? From what I remember, Geth has a flag with just a number, which is the target for the gas limit, and the limit will be adjusted according to the gas limit formula each block. So that functionality should remain, and it will then be fine. Yeah. I mean, we can always add methods to change it, because it's a relatively small thing, but I don't think people generally want to keep tweaking it at runtime. But if there's a reason to be able to tweak the limit at runtime, it's more than trivial — we should be able to just add it. Right. And currently, if a miner wants to increase the gas limit, it just restarts the node with the new parameter, right? Yeah. But essentially, if you look at mainnet, miners generally always run with the maximum gas limit that was deemed safe for the network, and it's not really changed — maybe once every half a year or so. So it's not like you need to constantly tweak it. Right. And after the consensus upgrade, there is no reason to change this part. So, one thing for the next item: slot clock ticks. This was missed on the previous call and in the doc, but I think it could be important, because the consensus part has the slot clock, and these ticks should be propagated to the execution side, I guess, because the timestamp of these ticks goes into the next block.
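The per-block gas limit adjustment mentioned above can be sketched as follows. This is a simplified version of the eth1 rule — the limit moves toward the operator's configured target by at most `parent_limit // 1024` each block, with a protocol minimum; real clients apply a slightly different off-by-one bound:

```python
GAS_LIMIT_BOUND_DIVISOR = 1024  # max per-block adjustment divisor
MIN_GAS_LIMIT = 5000            # protocol minimum gas limit

def next_gas_limit(parent_limit: int, target: int) -> int:
    """Nudge the gas limit toward `target`, bounded per block
    by parent_limit // 1024 (simplified sketch)."""
    max_step = parent_limit // GAS_LIMIT_BOUND_DIVISOR
    if target > parent_limit:
        return max(MIN_GAS_LIMIT, min(target, parent_limit + max_step))
    return max(MIN_GAS_LIMIT, max(target, parent_limit - max_step))

# A producer targeting 15M from a 12.5M parent moves up by 12,207 gas.
print(next_gas_limit(12_500_000, 15_000_000))
```

This is why restarting the node with a new target flag is sufficient: the adjustment then happens automatically, one bounded step per produced block, and nothing about it needs to change post-merge.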
And it's probably important for transactions that use the TIMESTAMP opcode to be up to date with this kind of information. So it might require some additional message or command. Okay. So you mean transactions that are sitting in the mempool? They might be validated differently, or they're not as valuable, because there's logic that's conditioned upon the timestamp? Yeah, they might change their execution flow inside a smart contract method call. And it should also be important for the pending block functionality, because you have to rebuild the pending block each time a new timestamp is observed. Could you expand on this a bit? What is this notion? I feel like I'm missing something. For proof-of-stake blocks, the timestamp is dictated by the slot, and the slot is only every 12 seconds. So there's not that granularity of time — you would never see transactions hitting the timestamp opcode at times that are not on those 12-second boundaries. So the execution engine can know about time and decide what slot it is and use that, or it can be told about time and use that. Okay, but then essentially this would mean that the eth1 blocks should also hit the same 12-second boundaries. So I guess when you call produce block, or whatever it's called, you would specify the timestamp to produce it at, wouldn't you? Correct, correct. Okay. But this is more of — I think Mikhail's concern is: when you call produce block, you give the timestamp, and that's fine; it gives you a deterministic result. I think Mikhail is worried about systems that are dependent on the timestamp but aren't right at the granularity of produce block, like managing the mempool. Yeah, okay.
So I guess the only thing — the reason I kind of got hung up on this and want to emphasize it — is that anything that transaction execution depends on needs to be crammed into the block header, because otherwise we cannot synchronize past blocks. Right. So, we discussed with Guillaume a couple of days ago that the original RPC APIs also had this random value plus some second field, which in the past API were just passed along as two more fields independent of the block. And I just wanted to note that if we ever want to add those fields back, then we probably need to get them integrated into the header. And since, with this minimal merge spec, we nuked out three or four header fields — for example the mixDigest and others — we can always repurpose them if we want, with minimal changes to the eth1 clients. Right. You don't really need that, because the timestamp field's consistency with the slot can be checked on the consensus side, in the beacon node — so I don't think you really need to get another field in there. I think Mikhail is more concerned about the execution engine knowing what slot it is without the context of a new block being called, so it can make decisions about things like the mempool. Yeah. So my question was: mempool transactions are executed against which block? Is it the pending block, which is created and rebuilt each time after a new block is received from the wire and imported? I'm not following — what's the question? The question is: you have to validate a transaction before propagating it to the wire, right? No, not really. When you get a transaction, you only check whether the sender has enough balance and the nonce is correct. Okay. And yeah, okay, I get it.
And for the pending block, it matters which timestamp is used, right? For the pending block, yes. I guess the question is whether — if you want to enforce this 12-second thing — it would possibly make sense to somehow introduce it via the consensus engine, so that everyone would agree the timestamps for the pending block are again on this 12-second boundary. But I guess that's an important spec question. Yeah, I mean, calls to assemble block are only ever going to be on that 12-second boundary, and so anything opportunistic like the pending block should respect that. And then there's the question of: can the execution engine just use local time mod these 12-second boundaries, or should it be told explicitly, on like a tick from the beacon node, so it doesn't have to worry about time-sync issues? No, I think it's safer to just let it — you don't really care what the real-world time is; you only care that it's in sync with your 12-second clock. Right. And the pending block is either way just something opportunistic: let's try to execute the batch of transactions and see what happens. So the worry would be: if I had the beacon node and the execution engine on separate machines, and the pending block becomes, like, one second off — and so it's for a slightly different slot — then when I actually call assemble block, the pending block is not as useful to me. That would be the reason for the beacon node ticking on that boundary, so that they would be in sync. Yeah. So honestly, in Geth, if you are not mining, then you are creating these pending blocks, and if you are mining, then you are not creating pending blocks — rather, you are specifically creating mining blocks, which are a bit different and handled differently. So for validators — for average nodes — they would just try to guess the next time slot, and they won't ever get calls to finalize something.
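The "local time mod 12-second boundaries" option above reduces to simple slot arithmetic. A sketch — the genesis time here is a placeholder, not the real network value:

```python
SECONDS_PER_SLOT = 12

def slot_timestamp(genesis_time: int, slot: int) -> int:
    """Timestamp of a given slot: slots tick every 12 seconds from genesis."""
    return genesis_time + slot * SECONDS_PER_SLOT

def next_slot_timestamp(genesis_time: int, now: int) -> int:
    """Next 12-second boundary strictly after `now` — the timestamp an
    execution engine would use when opportunistically preparing a
    pending block for the upcoming slot."""
    elapsed = now - genesis_time
    next_slot = elapsed // SECONDS_PER_SLOT + 1
    return genesis_time + next_slot * SECONDS_PER_SLOT

# With a placeholder genesis of 1000: slot 2 lands at t=1024,
# and at t=1025 the next boundary to prepare for is t=1036.
print(slot_timestamp(1000, 2))
print(next_slot_timestamp(1000, 1025))
```

This is why the pending block only stays useful if both sides agree on the boundary: an engine that rounds from a clock one second off would prepare for the wrong slot timestamp.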
And for miners — I guess miners won't really bother with the pending block, because you just wait for the next thing. Right, so what is the pending block used for when it's for non-mining nodes? Honestly, I think it's useless. Yeah, I was thinking that the pending block is used by miners. The reason I would say it's useless is because you have 4,000 transactions in the pool — maybe even more if you count other, bigger pools — and miners will pick a few. So your local node sees 4,000 transactions, picks 200 to execute, and then you can check the result. But even if you swap two of them which are doing some Uniswap things, you will get wildly different results. So I don't really think you can trust the results in the pending block. How is this exposed to users today — the pending block thing? You can just query the pending state: instead of getting the balance from the current state of the network, you can query the balance in the pending state. But as I said, it's not really useful. Right. Okay. Sorry, just one more thing. The only reason we didn't really push for getting rid of the pending block is that it acts as a nice little caching layer, meaning: I'm maintaining the list of transactions that I think will get included in the network, I pick the 200 best, I run them as a pending block, and there's a fairly high chance that out of those 200, maybe 150 will actually land in the next block. So by executing those 150, at least all the storage slots they touch are already hot in memory. Okay, so it acts like a pre-cache. Yeah, it keeps your cache a little hotter. Got it. So if you want to keep that functionality, we pretty much just need to have the execution engine respect mod-12-second timestamps, and then I think you get most of the functionality of today with no problem.
And even then, even if you didn't, you'd probably keep most of the functionality, because most things probably aren't calling the timestamp opcode. Yeah. So I guess the only request I would have is: if there's this specific behavior that every block will be on a 12-second mark, perhaps just add to the spec that this is to be expected, plus that pending blocks are expected to behave accordingly. Okay, cool. Also, I was thinking that the pending block might be useful for applications that send transactions and just read them back from there — from the nodes they use to send transactions — through the pending state. Okay, anyway. By the way, then, what is that functionality used for by miners? Is it just creating a block from scratch, or anything else? Well, Geth is a bit — that's a good question. Geth currently, during a single mining cycle, recreates a block multiple times. First it creates an empty block, then it fills it, then it tries to create better blocks with different transactions, and all of them can be mined. That's with the proof-of-work method. With Clique, you just create the block whenever you get a request for the block. So I guess from the eth2 merge's perspective, one option we could do is to just wait for the eth2 client to ask for a block, and then we just run the transactions. The only issue is that it will then take half a second, or however long it takes, to create a block from scratch. The other alternative is to try to prepare a few blocks in advance by guessing the timestamp, and then when you request it, we just give you the best one and return instantaneously. Right. Yeah, I guess these slot time ticks would be an input for this kind of optimization as well.
Yeah, so either works. And if there were that known half-a-second delay to be expected, then the proposer would essentially call it early — before they're supposed to broadcast right at that slot boundary — to be able to pack the block. But if it's doing the pre-packing, then it can call it later. Yeah, I was thinking about sending not only the current timestamp but also the timestamp of the next slot, to fit this kind of functionality which prepares the block in advance. Okay, cool — I'll think about it more. I'd probably add these to the specification as a separate message. But wait, why would you need a separate message? You are sending us new blocks anyway, and the new blocks are supposedly on the correct time slot, so I can just add twelve seconds to that. No, it could be that the new block is from the past — it's not always current. Yeah, but if the eth2 chain accurately tracks the 12-second mark, so the blocks are on a 12-second mark, then I can just calculate which will be the next 12-second mark based on my chain head or the current time. So I don't think that's a problem. And again, if you give me a produce-block request with a timestamp I didn't guess, I have to remake the block. If you make sure you also account for skipped slots, then that would probably work, yeah? Yeah, just based on time, that would work. I mean, I would still add a separate message which just sends this time update. So in the first round you would probably not even try to be smart — rather, whenever the eth2 client says it wants a block, I can just make a block and allow waiting. Is 500 milliseconds acceptable? Actually, is it acceptable? If the eth2 client wants me to make a block, what's the procedure? What's the timeout? What is the expected propagation time, creation time, et cetera? The expectation is to begin propagation at that slot boundary.
And so sometimes there's a little bit of pre-work done, because you know that you're about to propose, and then propagation should happen in that sub-second range in normal operation. So if there were expected latencies in producing a block, you would just start your work a little bit early. Yeah, but let's say it takes me half a second — let's say it takes me one second — to produce a block. How does that influence the eth2 consensus? Does it matter if it takes one second or not? If I wait until the slot boundary and it takes one second, as long as I still have one to two seconds of propagation for the full network, it's still fine. You're looking for sub-4 seconds between when I begin my job and when you get full propagation. But if there were delays in getting the block that took a second, then I, as a block producer, would just start my job early, such that at the beginning of the slot I have the block prepared, rather than waiting until the beginning of the slot and then not having the block prepared until one second later. So, I don't think it's a good idea to make the eth1 client the smart one. What I meant is that whether it takes one second depends on how many transactions I cram in, and it might take less or more. So I'm just asking about the worst-case scenario: if I take one second, what happens? Does that break consensus? Does that break block production, or is it just a bit unpleasant? It's likely fine. If you're taking two or three seconds, it starts to not be fine. Why would you not take — I mean, I guess my assumption is that what a miner does is continuously make new blocks, and whenever they have a block available, they start mining on that.
Can't you do a similar approach, where you start making blocks maybe four seconds before your slot time, and whenever you're done, you start making the next block with the latest information and send the current one to the beacon node, so that it can immediately make a block? Yeah, that's fine. And of course that works — I was just referring to the case where I make a block with a certain timestamp and it turns out that the actual timestamp the validator requests from me is different; then I have to remake the block. Oh no, the validator would always request the block with the timestamp of the time when it actually has a slot — at that time it's deterministic. You already know the exact timestamp. You could imagine the time sync between the beacon node and the execution engine being three seconds off or something like that. Right, the beacon node just says what exact timestamp it wants. Well, yes. But if the execution engine was opportunistically creating blocks for a slightly wrong timestamp, and thus the wrong slot, then once you ask — no, it shouldn't do that. I mean, my assumption would be that the beacon node knows a block is coming up, tells the execution engine, say, six seconds before, and then the execution engine starts making blocks with that timestamp, which would then still be a few seconds in the future, but that doesn't matter. Sure. So in the current functionality, you could just make the assemble block call multiple times leading up to it and just take the best one. Right, yeah — the last one you get. The last one, yeah. That would hopefully give you more fees. But regardless, I don't think we need an additional message that says, hey, this is the slot, hey, this is the slot. Yeah, right, I get it. Okay, cool. And Lukasz, there are definitely things to optimize here and think about regarding the timing indication.
Do you have a second? So what I wanted to say is that I wouldn't put too many constraints in the spec on how long it should take to produce the block. Of course we can put some max value that we expect, but I would consider this an implementation detail that can also vary, for example, with hardware. So depending on your hardware, it can take longer or shorter to produce a block. If I were implementing it, I would do like you suggested: ask for a block as soon as I know we can ask for one, and then re-ask for a block if possible. And that's what I would advise. Well, instead of having a single method saying "give me a block", where the eth1 client has to scramble to make a block, you can split it into two methods. The first, call it prepare block, says: I'm going to ask for a block with a specific timestamp in the next whatever time. Then the eth1 client can try to make the best block possible, and when you actually request the block, it will give you back whatever the best one it has is. That way, instead of having to poll on that one method, you just say: start working, I'll ask you in a bit. Yeah, so the problem with the polling is that you ask for a block, but I don't know: should I make better ones? Should I stop when you request once, or twice, or 300 times, or what happens? Polls are a bit unpredictable. If there are two calls, then at least I know that, okay, I gave you my best block, I can throw away all that scratch work because it won't be used anymore. Yeah, that's interesting reasoning. I think you can reproduce that with polling as well. The eth2 node just calls, and then whenever it gets a block, it just immediately starts the next request and uses the last one it got from that sequence. Right, but then the execution engine doesn't know when to stop the optimization. Well, it would, because you stop asking. I guess it produces potentially one more block than necessary, like that would be.
The only downside, but that doesn't seem huge. I don't think that's the case. It's a constant extra, yeah. So currently what Geth does is: when I start mining on a proof-of-work network, I create a block and I give it to the miners to start crunching on it, but then some more transactions arrive and I assemble a new block that's better. So you give that new block to the miners, and then some new transactions arrive, and then I create a third block, and I will keep doing this until something comes in from the network. And if nothing comes in from the network, I will create a gazillion work packages until something gets mined. All right, so it's a continuous optimization, it's not just a discrete make-the-next-block. Yeah, so essentially every time a transaction arrives, there's a possibility that I can make a better block. All right. So you need a signal to stop making new blocks. Maybe that signal would be a set head. That would also work. Well, one signal could also be if the execution engine is more than a slot past the last call for that slot: you know that no one can be asking it for the block anymore, even if there's some sort of time discrepancy. But then you're starting to make assumptions about time and the relationship between the two, which is probably not great. Yeah, so I guess this is kind of an open question for us. For the first spec, I would say just ask for it once, and if I have a block, I will give it to you, and if not, then I will make one and give it to you. As long as it's fast enough, it shouldn't be a problem. All right. Yeah, okay, cool. Yeah, great. So it's now much more clear to me. Okay. So I guess we can move on to the consensus engine, to the consensus layer. I have a few things to discuss here, and then let's see if there are any updates.
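The two-call pattern discussed above, prepare then get, combined with continuous re-assembly and an explicit stop signal, could be sketched like this. The method names `prepare_block` and `get_block` are hypothetical, not the actual Rayonism RPC:

```python
import threading

class ExecutionEngine:
    """Two-call sketch: prepare_block starts a background optimization
    loop for a known (parent, timestamp); get_block stops it and
    returns the best payload built so far."""

    def __init__(self):
        self._best = None
        self._stop = threading.Event()
        self._worker = None

    def prepare_block(self, parent_hash: str, timestamp: int) -> None:
        self._stop.clear()

        def optimize():
            version = 0
            while True:
                # Each pass could repack newly arrived transactions
                # into a better payload for the same slot.
                version += 1
                self._best = {"parent": parent_hash,
                              "timestamp": timestamp,
                              "version": version}
                if self._stop.wait(0.01):  # stop signal received
                    break

        self._worker = threading.Thread(target=optimize, daemon=True)
        self._worker.start()

    def get_block(self) -> dict:
        self._stop.set()   # the engine can discard scratch work now
        self._worker.join()
        return self._best
```

As noted on the call, a set head message could serve as the stop signal instead, and the first spec can simply use a single synchronous request.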
Okay, so the first thing for consensus is that there is an idea of an improved transition process, which is basically: we have a transition epoch, and when the epoch happens, the consensus decides on what will be the total difficulty of the transition, the transition total difficulty. It could be done like: take the difficulty of the most recent block, multiply it by 10 or so, and set this as the offset to compute the total difficulty at which the transition will happen in the future. What is great about it is... Take the latest eth1 data, because that's known to be the same across the clients. Right, that's what I was going to ask, which one to use, because if we take the most recent block, it will have to be somehow agreed on by everybody, and that requires some additional agreement procedure, but we already have this eth1 data voting. So that could be... Use the first new eth1 data after the transition epoch. Right, so when the transition epoch happens, the first, yeah, the eth1 data that are in the state, right? We can use this block hash, get the difficulty, and add the offset to the total difficulty of the most recent block, probably. So why is this a good idea: because we have an exact point in time with no regard to what the difficulty will be on the network, and we have this kind of total difficulty mechanism preserved, which has its benefits. And the transition epoch is essentially a beacon chain fork, because that's the point at which you change the data structures to support the execution payload, even though they're empty. So essentially there is a lead time: the fork, the actual change updating the consensus code, happens with a lead time before the actual transition, puts the new code in place, and then the transition happens.
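The dynamic computation described above could be sketched as simple arithmetic. The multiply-by-10 offset is the example floated on the call, not a spec constant, and the real derivation was left as an open question:

```python
DIFFICULTY_OFFSET_MULTIPLIER = 10  # example from the call, not a spec constant

def transition_total_difficulty(eth1_total_difficulty: int,
                                eth1_block_difficulty: int) -> int:
    """At the transition epoch, fix the total difficulty at which the
    merge will happen: the current total difficulty plus an offset
    proportional to the current per-block difficulty."""
    return eth1_total_difficulty + eth1_block_difficulty * DIFFICULTY_OFFSET_MULTIPLIER

def transition_reached(current_total_difficulty: int,
                       transition_td: int) -> bool:
    """The only arithmetic the beacon node then needs is a comparison."""
    return current_total_difficulty >= transition_td
```

The inputs would come from the eth1 data already in the beacon state (via eth1 data voting), so no extra agreement procedure is needed.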
And so doing it as a function of that dynamically, I think, makes sense, because it also just removes another thing miners can potentially play with. Like if 75% of the miners go offline, they can't delay the transition timing and everything like that, that much. Yeah, so the open question here is how to compute this transition total difficulty, what to use. So we can think about it and get back to this discussion. I will also think about what potential ways of doing it we have, in relation to the inputs that we already have in the beacon state and the beacon block, and those that we can get from the execution engine. Yeah, I guess the actual worst case in hard-coding it, rather than doing it as a function at the transition epoch, involves the beacon chain fork that adds the new functionality: if you set the total difficulty, say, three months ahead, and miners actually sped things up, which is obviously difficult and unlikely, but if they sped things up, the transition total difficulty could happen prior to the actual forking of the code. Doing it dynamically prevents that kind of crazy case from happening. Any questions on this transition process? Okay, cool. The other thing to discuss is the execution payload size. The biggest field here is transactions, which has a max size of up to 16 gigabytes at the moment. This is because we have to handle two different cases: a few transactions with huge transaction data, and a lot of transactions with, you know, small transaction data. That's why there are two limits, basically on the number of bytes per transaction and on the number of transactions, and that's why these 16 gigabytes are theoretically possible, and the potential... Can I have some context?
The SSZ lists have a max size, because this comes into play in the structure of the Merkleization rules and the structure of the tree, so all these things have to have a max size. And when you take the max byte size per transaction and the max number of transactions currently, you get some ridiculous numbers, as was said. So this might be a bit unrelated, but maybe not. The peer-to-peer layer itself also has a cap on the message size; that cap is, as far as I know, 16 megabytes, but at least Geth limits the eth protocol packets to 10 megabytes. This means that if somebody mines an 11-megabyte block, then Geth will not be able to propagate it, but if somebody mines a 20-megabyte block, Ethereum 1 clients will not be able to propagate it with the current specs. That doesn't mean we cannot update it, fix it, extend it. It's just a mental note. Got you. Yeah, I think this is the way to limit this kind of stuff on the network, by just limiting the gossip message size. Yeah, I was gonna say, on big blocks you can definitely handle it there with gossip limits, or a gossip validation condition, maybe based off of a function of gas and that stuff. And we already have these kinds of limits in gossip, right? I mean, we do have validation conditions there, and you could add this very easily. One more thing to keep in mind is that, at least on the eth1 network, we've kind of seen that unless you have a very, very beefy connection, AKA Amazon: for snap sync, we are using half-a-megabyte packets, and we can request packets from quite a lot of peers simultaneously. And we've actually managed to overload the local node: we've managed to have timeouts not because the remote node isn't sending us the data fast enough, but rather because we just overload our own inbound bandwidth with data, and it just takes that much time to get it through.
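The 16-gigabyte figure is just the product of the two SSZ list bounds, and the gossip-side cap is an independent check on the raw message before decoding. The constants below are illustrative values chosen to reproduce the numbers from the discussion, not the actual spec constants:

```python
# Illustrative SSZ list bounds -- not the actual spec constants.
MAX_BYTES_PER_TRANSACTION = 2**20      # 1 MiB of data per opaque transaction
MAX_TRANSACTIONS_PER_PAYLOAD = 2**14   # 16384 transactions per payload

# Worst case when both bounds are hit at once: 2**34 bytes = 16 GiB.
THEORETICAL_MAX_PAYLOAD = MAX_BYTES_PER_TRANSACTION * MAX_TRANSACTIONS_PER_PAYLOAD

GOSSIP_MAX_SIZE = 10 * 2**20  # e.g. a 10 MiB wire cap, like Geth's packet limit

def accept_gossip_message(raw: bytes) -> bool:
    """Refuse oversized messages on the gossip stack before decoding."""
    return len(raw) <= GOSSIP_MAX_SIZE
```

The point made on the call is that the SSZ bound only constrains Merkleization, while the practical limit has to be enforced at the network layer, where a message can be rejected by its raw size alone.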
So in essence, what I was saying is that once you get past this half-a-megabyte message size, things get funky. So again, I don't know what the long-term goals are on how to scale things, but we also probably need to keep in mind that network messages should be somewhat meaningful in size. Okay, so the option here is to limit it on gossip. The gas limit should also work, but the gas limit will anyway be checked only after the message is received, and if there is a 16-gigabyte message, nobody wants to download it. So it makes a lot of sense to just refuse this kind of thing on the gossip network stack. Agreed. Okay, cool. The next thing is specific to the structures, to the execution payload: we're going to have multiple transaction types, right? On the mainnet we already have them since Berlin. So the default option for the consensus side is to not cope with these different transaction types and just use this OpaqueTransaction approach, which represents a transaction as an RLP string, which, from the consensus standpoint, is just a string of bytes. This is what's already done, but we can also introduce the union type with an opaque selector, which will for now allow only one type, this string of bytes, but will give us some forward compatibility with the next upgrades, when we decide to step away from opaque transactions and have them explicitly in the execution payload. That was the idea, right? Yeah, that's the idea. The idea being that you can have the transaction type structured in the SSZ payload and get a little bit nicer proof structure, rather than having just the opaque RLP payload. But for simplicity, we can do the opaque selector for now and then in the future deprecate it in favor of specific selectors. I think this was an idea from Proto. Proto, do you have anything to add?
All right, so the current SSZ spec defines a union type. We do not use the union type yet, but we can still introduce it. And what we would basically do is define it as a single prefix byte to the transaction. Then we define a single selector for the opaque transaction, for all the existing types in their encoded form. I'm talking about the envelope here, including the inner selector that applies to the eth1 data. But then, outside of that, you would have this structured data for nice Merkle proofs, and for this we would like to define other options in the union that are more structured with SSZ. And then we get this second byte that's also kind of like a selector, which applies to all the new types of transactions after the merge. So I think we just should do this at some point in time. So I don't think there's anything to discuss in this regard here. If anyone has any opinion, let's discuss it offline. And the last item is the uint256 in the beacon chain spec, which is used for total difficulty. The total difficulty is now about 72 bits, I don't remember the exact value, which just exceeds uint64, so we have to use something bigger. So what options do we have here? The first is to not do any arithmetic on it at all, because it's not used in any arithmetic except for comparison: the spec just compares whether the transition total difficulty has already been reached or not. And yeah, that could be handled. The other option would be to, I don't know, denominate it somehow, but that would probably require some denomination happening on the execution engine side, because it returns the total difficulty. It would probably work, but it just requires additional work. Not sure which way is better. Sorry, I think I missed that. You're looking for an encoding for a big integer in eth2? Essentially, we've avoided big-int arithmetic in eth2 on the node side so far.
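The selector-prefix idea Proto describes above could be sketched as follows; the selector value and helper names are illustrative, and the body for selector 0 would be the existing RLP-encoded envelope, inner eth1 type byte included:

```python
OPAQUE_SELECTOR = 0  # selector 0: the existing RLP-encoded envelope

def encode_union_transaction(selector: int, payload: bytes) -> bytes:
    """Prefix the transaction body with a one-byte union selector."""
    if not 0 <= selector <= 0x7F:
        raise ValueError("invalid union selector")
    return bytes([selector]) + payload

def decode_union_transaction(data: bytes) -> tuple:
    """Split a union-encoded transaction back into (selector, body)."""
    if not data:
        raise ValueError("empty union value")
    return data[0], data[1:]
```

Future selectors could then carry SSZ-structured post-merge transaction types, giving nicer Merkle proofs while keeping the opaque variant backward compatible.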
And right now with total difficulty, there is a big int all of a sudden. Right, but it's not gonna be encoded. We just get it from the execution engine and compare it to the constant. Yeah, I mean, it's not gonna be encoded in SSZ structures. Sorry, I don't understand. The execution engine returns the total difficulty to the consensus engine somewhere? Right, this is required for the transition procedure. The transition happens once a certain total difficulty is reached. And right now, the beacon node literally just does not have big-int arithmetic. So the total difficulty could be denominated in a uint64 and take off a bunch of the precision, but you'd also have to have a function that returns it with that lesser precision. Yeah, so the question is how difficult it will be to implement uint256 on the beacon chain side. If it's not too difficult, I would like to leave it there, if the clients want to pick it up. Terence from Prysm here. It's not too difficult for us to change, and we do use big ints in some places. Yeah, for Teku, I don't see it being difficult. We already have a big int for eth1, so we can change it. Let's ask the Lighthouse folks too, but let's just operate as though we can do a big-int comparison for this one little thing, unless we hear otherwise. Yeah, great. So let's just keep it as it is, and if there actually is a problem, we can change it. Okay. Well, with the proper encoding, you can do a byte-by-byte comparison, if it's just for ordering. Oh, yeah, right. Yeah, but you will receive it in JSON format, I guess. Yeah, but you can, I see. If it's hex encoded, yep. You can even compare it lexicographically, right? In hexadecimal form maybe, but otherwise it's more tricky. Okay. We have like 15 minutes. Let's go to Rayonism updates. So Proto, do you want to, sorry? Sure.
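The byte-comparison trick mentioned above could look like this: with fixed-length big-endian encodings, lexicographic byte comparison matches numeric comparison, so the beacon node needs no big-int arithmetic at all. Helper names are illustrative:

```python
def td_from_hex(hex_quantity: str) -> bytes:
    """Parse a JSON-RPC hex quantity (e.g. "0x1f4") into a
    fixed-length 32-byte big-endian value."""
    return int(hex_quantity, 16).to_bytes(32, "big")

def transition_td_reached(current_td: bytes, transition_td: bytes) -> bool:
    """With equal-length big-endian encodings, Python's lexicographic
    bytes comparison equals the numeric comparison."""
    assert len(current_td) == len(transition_td) == 32
    return current_td >= transition_td
```

The fixed length is what makes the trick work; variable-length or decimal string forms would need normalization first, which is the "more tricky" case noted on the call.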
So in the past week or so, we have had a few of these office-hours type of calls, which are more casual calls where you can stay in sync with the very bleeding edge of Rayonism. I'll give a summary of what we have done so far. We looked at the first devnet and how to prepare the genesis, and then also chatted with a few clients on how we move forward with the RPC. So we have this one genesis tool ready to go to prepare a devnet with. We have a guide for everyone who would like to set up their own test nodes and use this kind of thing. I think we should basically try and focus on the RPC, on updating to the latest spec, and then we're ready for the first prototype devnet. Thanks, Proto. I would just go through client updates on where everybody is with regard to Rayonism. So maybe we can start from Geth. Well, I kind of gave an update at the beginning of the call. Essentially, the decision was that we're keeping Guillaume's API, I mean, it would be changed and updated to conform to whatever the current spec of the API is, but otherwise it will still be based on directly injecting data into the chain. Anything else? Yeah, great. Nethermind? So we have an initial implementation that I am currently testing. I hope I will finish testing and stabilizing it by tomorrow, and if any of the eth2 clients would like to participate in testing integration with the RPC, please contact me. I would be very happy to work on something like that, for example, tomorrow. So if anyone is available for that, that's great. Do you have any guide on how to run Nethermind in Rayonism mode? Yes, I can write something tomorrow, but I would first like to just check that I didn't miss something in the spec and that I can communicate with an eth2 node, if anyone has this kind of test setup or something. Cool. Yeah, great. I'm actually working on Teku, and it's gonna be ready tomorrow.
So I guess I can experiment with Catalyst and with Nethermind as well. So just reach out. Yeah, cool, thanks. Anybody from OpenEthereum and TurboGeth? Besu, I know Besu is starting to work on the spec as well. Okay, cool. So yeah, let's just go to the consensus clients, the eth2 clients. As I said, I'm working on Teku, it should be ready by tomorrow, I guess. We'll test with Catalyst first, then try Nethermind, probably anybody else. TurboGeth, I don't know what their status is. So, what about Prysmatic? Yeah, still not much progress on the API side from my end, I'm still reviewing the changes. So I think once the API becomes more formalized on the eth1 side, it will probably take me a day to catch up. So that's not too bad. Other than that, we built a faucet for Rayonism; it's fully configurable. It comes with a ready React and Angular project as a reference, and it's also open source. So hopefully that could be useful. And we will also create a guide on getting started for Rayonism, so yeah. Yeah, thank you so much for this faucet. Is it integrated? I guess in the first step, not. I'm just dropping the guide on how to run Prysm, yeah, the one you have mentioned. Okay, Nimbus, do you have any updates with regard to Rayonism? We've just started working on Rayonism. We have a PR, and at this point we are experimenting with Catalyst, but it's not clear we'll be ready for the first test. That's our goal. We still have a little bit more work to do on the RPC interface between Nimbus and Catalyst. Okay, great. Anybody from Lighthouse? Nobody's here. Anyone else want to give an update? Okay, great. Thanks everybody. I have a question, not an update, if possible. Can you give a rough estimate on the dates and plan for the devnet? Sure, so the original idea was to start the devnet somewhere in the first week of the hackathon.
It's experimental, short-lived, and it's okay if you join later as well. However, this is the kind of opportunity where we just look at: can we try the RPC in something more of a shared devnet? And so I'd just like to try and spin up whatever kind of prototype we have in the next week or so. And I have this example configuration for the first devnet up in the Rayonism repository, I'll share the link again in the chat. There I specify Monday as the Ethereum 1 genesis, and this can be skipped, and then Wednesday as the actual genesis. So there's this delay for knowing the exact genesis state of Ethereum 1, and from there you can compute the one for Ethereum 2. And then on Wednesday, the actual chain event, where the first slot starts ticking. But this is purely an example right now. I would like to confirm this, and I'll probably wait for one or two more office-hours calls to learn about the readiness of clients. Thank you very much. Any questions? Any more questions with regard to Rayonism? Cool, any other discussions, questions, or announcements, anything else before we wrap up? Great. I'm sorry for screwing up the call with this Zoom link, we'll fix that. Okay, thank you so much for coming. See you tomorrow, next week, next month, any time. So. Bye. Thank you. Bye-bye. Bye-bye.