Welcome to AllCoreDevs 125. A couple of things on the agenda today: merge updates, updates on Arrow Glacier, and then Ansgar has an EIP which would modify the EIP-1559 mechanism to better account for missed slots after the merge. And then finally, Dankrad and Guillaume are here to chat about Verkle tries and the general statelessness roadmap. We can get through the first three things and then give Dankrad and Guillaume the rest of the call to go over their stuff. So first, does any client team want to share updates about the merge — where things are at, any issues they've encountered?

Yeah, I can start. We're currently looking into merging all the code from Amphora, from the interop. Nothing major came up. I've also started setting up a server for the merge fuzzer and am fuzzing Geth right now, but I'll add other execution layer clients to it. For those who don't know, the merge fuzzer is a differential fuzzer that basically just calls the engine API of two different clients and sees if they do exactly the same thing. So that's our update.

Did you find anything with the fuzzer so far?

We found something in the synchronization code, which was not final yet, but nothing really major.

And Marek, I think you were about to start speaking.

Yeah, we continue cleaning up our merge code. We set up all the consensus clients in our infrastructure with Nethermind as execution, and Geth of course. We're in contact with the Nimbus team because something is not working between Nimbus and Nethermind. It looks like we are passing Marius's tests, and that is all, I think, from my side.

Thanks for sharing. I see Andrew, you're here — has Erigon started on the merge yet?

No, not yet.

Okay, I thought I saw someone on the Discord — I don't think they had their real name — who was starting to implement the merge and asking a bunch of questions. Oh, Danny, you were going to say something?

Yeah, I can give an update. Primarily Mikhail and myself and some others are quickly closing in on the final update to the consensus layer specs, execution layer specs, and the engine API that came out of Amphora. I might do a pre-release later today and point to a commit on each that shows pretty much what's happening, but we will likely finalize more on Monday. There's a little bit more work to patch up between now and then, but we're very close. That would be breaking with respect to Pithos, and we would start kind of a new wave of testnets that target it in November.

Got it.

For context, if you don't have it: it's largely similarly structured. There are probably a few edge cases that are patched up, and generally simplifications and reductions in the engine API. So it's not throwing a ton of shit out or anything like that.

Got it. Is anyone from Besu on the call?

Yeah, Besu's here.

Awesome.

Yeah, we don't really have any updates right now. We're cruising along; we're merging in all the code from the interop exercise. That's still in progress, so nothing new or major to report. This is Gary, by the way, so I don't have any hard details on it.

No worries. Cool. Anything else anyone wanted to discuss about the merge? Okay. Next up is Arrow Glacier. On the last call we decided on a block number and a delay for the difficulty bomb.
I'm curious how far along clients are in implementing this, and how realistic it is to have a release in the next week from client teams, just so we could announce it about a month in advance. Does any client team want to go first?

We haven't done it yet, but of course we can release it in the next week; it should be very, very easy.

Okay, so next week is possible. Marius, you're first on my screen, so Geth?

Martin implemented it and it's merged into the codebase, and our next release should contain it. I'm not sure if we're going to do a release next week, but I think so.

Erigon?

Yeah, we haven't done it yet, but we should be able to do it next week, and we'll probably release next week.

Okay, cool. And Besu?

Same story here: we haven't implemented it, but we have a quarterly release scheduled for next week and will likely have it included there.

Yeah, go ahead.

As far as I know, there are new difficulty tests in the tests repo, so when you implement it you can check out the new tests as well.

Anything else on Arrow Glacier? For people listening who are looking to plan their upgrade: the fork is going to happen at block 13,773,000. That's expected to hit around December 8, so a bit more than a month from now.

One thing that would also be useful — I guess only Geth has an implementation now — is if somebody can share the fork hash, the EIP-2124 hash, so we can add it to the spec. That would be great.

Sorry, what hash?

The fork identifier, the fork ID.

Yes, it's in my PR.

Thanks. Cool. Anything else on Arrow Glacier? Okay, if not: Ansgar has been working on an EIP which would modify the EIP-1559 mechanism around the merge so that we can better account for missed slots in how we update the base fee. I think he literally finished a draft yesterday. Do you want to take a couple of minutes and walk us through how this changes things and why it might be important to do for the merge?

Should I share my screen for that or just talk about it?

Yeah, if you have anything to share, real quick.

Maybe I'll just give the two-to-three-sentence summary and then share my screen if needed. So basically the situation is that with 1559 right now, the base fee looks at the gas used in the parent block to determine whether the base fee should go up or down. With the merge, we'll have these usually regular slots every 12 seconds, but if there's a missed slot, that means there's a 24-second window in which transactions continue to accumulate. You'd basically expect a 24-second block to have on average twice as many transactions, so missed slots would usually result in little base fee spikes. So the questions were: (a) is this a big problem or just a small annoyance, and (b) is there something small we could do to mitigate it? This is just one proposal; the actions we could take would be to do this, do nothing at all, do something else, or just wait for Shanghai.

The approach here is relatively simple. The core problem is that we don't really account for block times, so it adds a simple block-time adjustment to the base fee calculation.
It's important to note that under proof of stake there's no way for block proposers to manipulate the block time, because it's always enforced on the beacon chain side that every slot has exactly one valid timestamp it could use. So there's no way to game it. The timestamp has the same properties as the slot number, but it's easily accessible from the execution side.

So first, why could it be important to do this with the merge? For one, these base fee spikes are of course a little bit annoying. The second thing, which I think is a bit more important, is that every missed slot is permanently lost throughput for the chain: if a slot is missed, we just lose the 15 million or whatever gas we have per slot of overall throughput. I would argue this is not desirable, because we set these throughput targets to reach them, not to stay below them. But more importantly, it gives a clearer incentive for attacks against block proposers. Because we don't yet have secret leader election, there are some concerns that block proposers could be de-anonymized and targeted in denial-of-service attacks, and under the current rules, every block proposer that you can prevent from producing a block means the throughput of the chain goes down by that amount, which increases the incentive. If we could mitigate that, it would make denial-of-service attacks less useful for attackers, which would be really nice.

And the last concern is what I'd describe as degradation during consensus issues. Hopefully this will never become relevant, but consider situations where a large number of validators go offline at the same time. With proof of work right now, this is really quickly self-healing: we have the difficulty adjustment, so while block times go up for a little while, they come back down quite soon, and throughput is only impacted for a very short time. With proof of stake we still have a self-healing mechanism, but it could take much longer. Especially if less than a third of validators are offline, the chain still finalizes, so there's no inactivity leak and recovery could take months. Even if we are below the finalization threshold and have inactivity leaks, it could still take weeks. Of course we could then start to manually intervene and increase the gas limit, but that manual intervention would also take quite some time. So it's a much more lasting impact on the throughput of the chain, and that is exactly during times when the stress on the chain is already at its highest — not only will there be more activity as people react to the consensus issue, but throughput is also reduced, which is just not ideal for the stability of the chain. For these reasons I think it would be important to do this at the point of the merge already, so that we don't have a period of proof of stake without this adjustment.

The specification that I proposed was really motivated to be as minimal as possible, so it's basically taking the EIP-1559 spec as-is.
If you look at it, it changes like five lines of code. It adds two constants: a block time target, and a maximum on the gas target — basically we allow up to 95% of the block gas limit to be used as the gas target. With the elasticity of two, this means the gas target would be allowed to go up to 1.9 times the normal target. Then the only change is this extra line that does the adjusted gas target calculation, which is then used further down. So it's basically four lines of change; it's really minimal, so it shouldn't have much impact. What it does do, though, is that the base fee calculation now depends on the grandparent of a block as well, not only on the parent, because we need to access its timestamp. So that is one more block you need to have available to validate a header, which is a significant change.

One important thing is to briefly talk about limitations. With this minimal change, all you can really do is account for more or less one missed slot. As soon as you have two or three missed slots in a row, because the target can at most roughly double, there's just no way of recovering all those transactions. As you can see, I put a little graph in here: depending on the percentage of proposers offline, the throughput of course does go down. The blue line is what we have today, and what I propose is the brown line here. We get a much more gradual decline initially, which is exactly what we want for DoS protection — specifically, there's almost no decline initially. Even in a situation with 20, 30, 40% of proposers offline, we still have much less degradation than we would otherwise. But it's not perfect, so the last thing, where I'd want some input, is possible extensions. You could make this much better — much less degradation until you go down way further — but those would require slightly more involved changes. One would add a header element. Alternatively, you could do it by accessing not only the parent and grandparent of a block but the last 10, 15, 20, 30 ancestors, but that also seems like a more substantive change. The other option is to increase the elasticity bound from two to 2.5 or three or something; that would also help quite a bit. This might actually be feasible just because under proof of stake we have these 12-second block times as a minimum, whereas under proof of work we could have block times of three seconds if the randomness turns against us. So the strain is already much reduced compared to proof of work, and I think there might be a case for doing this as well. But again, the objective was to keep the changes as minimal as possible, so these are optional additions.

Right, I think that's basically all on the specification.
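For readers following along, here is a minimal sketch of the kind of change being described, in the style of the EIP-1559 pseudocode. The constant names and values (a 12-second block time target, a 95% cap) mirror what was said on the call, but treat the exact identifiers as illustrative rather than quoted from the draft:

```python
# Sketch of a time-aware gas target, assuming constants along the lines
# of the draft EIP; names and values here are illustrative.
BLOCK_TIME_TARGET = 12          # seconds per slot under proof of stake
MAX_GAS_TARGET_PERCENT = 95     # gas target may not exceed 95% of the limit
ELASTICITY_MULTIPLIER = 2
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def expected_base_fee(parent_gas_limit, parent_gas_used, parent_base_fee,
                      parent_timestamp, grandparent_timestamp):
    parent_gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    # The parent's block time requires the grandparent's timestamp --
    # this is the new grandparent dependency discussed above.
    parent_block_time = parent_timestamp - grandparent_timestamp
    # Scale the target with block time, capped at 95% of the gas limit,
    # so one missed slot roughly doubles the target.
    adjusted_gas_target = min(
        parent_gas_target * parent_block_time // BLOCK_TIME_TARGET,
        parent_gas_limit * MAX_GAS_TARGET_PERCENT // 100,
    )
    # The rest is the unmodified EIP-1559 update rule, using the adjusted target.
    if parent_gas_used == adjusted_gas_target:
        return parent_base_fee
    delta = (parent_base_fee * abs(parent_gas_used - adjusted_gas_target)
             // adjusted_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if parent_gas_used > adjusted_gas_target:
        return parent_base_fee + max(delta, 1)
    return max(0, parent_base_fee - delta)
```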
And just for context: as Danny was saying, the specs for both the execution side and the consensus side are supposed to be more or less final very soon. So if we really want to consider this for the merge, that's a call we'd have to make very soon. Any input would be appreciated. That's all.

I have a question. I had a bit of a hard time understanding what you meant about denial of service — how this improves the situation against a denial-of-service attack. As I understand it, and please correct me if I'm wrong: what if there are some transactions that for whatever reason cause a large majority of the nodes to process blocks very slowly, so the block time increases? It can be seen as if 50% of the validators go offline. If I understood you correctly, what would happen is that the base fee would go down and the actual number of transactions included in the blocks would go up — basically, if block times doubled, you would have double the amount of transactions in the block. So it feels like that would make a denial-of-service attack worse. Did I misunderstand something?

Ah, I think that's a different scenario. The attack you're describing — transactions that take a long time to execute — is one thing, but the one I mean is targeting specific block proposers. There are some worries that, based on message relay patterns, you can de-anonymize the IP addresses behind specific validators, and because the proposer schedule is known in advance, if someone is about to produce a block you could specifically target them just before that time and make them go offline for a short while, so that they're not able to produce a block.

Right, so the context you are discussing is the Eth2 world, the post-merge world. In the context of the pre-merge world, does what I said make sense, or did I still misunderstand something?

Well, this is a smaller thing: the proposal would not actually decrease the base fee, it would just stop the base fee from increasing when blocks are only coming, say, every other slot. If you're talking about proof of work: if block times start going up, that would basically mean that blocks were allowed to be more than half full on average. Indeed, if we have a proof-of-work attack similar to the Shanghai attacks, where blocks are very slow to process, it wouldn't actually change much — it would just mean that the new equilibrium would be a little bit different. Say the block times double: then every block would be twice as full on average, and it would still reach a new equilibrium.

It would reach a new equilibrium, though? I mean, if the throughput — gas per second, well, gas per block — should be constant, and the nodes cannot keep up... wouldn't this form some kind of cycle where the block gets slower, so more transactions go into it, which makes it even slower — some kind of self-reinforcing cycle?
The important thing is that that can't happen, because we have this strict bound on elasticity. Right now we have a 2x maximum size, and we just can't go beyond that. Even if we end up with 2x the block size, that's the limit. So there's no room for that kind of cycle. If you were to completely remove the elasticity bound, you'd be right.

Alright, so it would do that up to a point, and then at some point there would be a much steeper drop.

Right, but again, I would say that's what we have the gas limit adjustment mechanism for — similar to what happened during the Shanghai attacks. In that scenario you just advise miners to reduce the gas limit.

Yeah, I wasn't really sure how your proposal interacted with the gas limit, if it did.

It does not change the gas limit in any way, and I think that's important, because the gas limit is indeed what's relevant for security considerations. It just acts within the gas limit: it sets the gas target, instead of always targeting half the gas limit, and allows more than half a block to be targeted if blocks come in slower than expected. So it lets things balance out, but it never changes the gas limit.

Right. Okay, I think that alleviates my concern. That was the part I had missed.

Okay. I have a simple question: why can't we iterate over the missed slots, treat each as an empty block, and just use the already-existing formula — with each iteration applying the formula to compute the base fee of the block that finally exists?

So, if we just insert artificial empty blocks into missed slots, the problem is that that sends an incorrect signal, because an empty block signals that there's no demand. That would lower the base fee. It would end up in more or less the same situation, because you'd lower the base fee before the next block, and then in the next block — which would on average be twice as full — you'd increase it back up. But that means we'd have an incorrect base fee in between. It would end up in almost the same situation as my proposal, except that the slot after the missed slot would have an artificially low base fee, and so too much fee extraction by the miner.

It just feels more complex doing it that way, because you still need some scheme for what happens if several blocks are missed, and so on; breaking this one-to-one block correspondence seems wrong. And I would also say these artificial blocks would be more complex.

I mean, otherwise the comment makes sense, because the effect would more or less be the same. My concern is just that we'd be changing the logic, and from an implementation standpoint that's more complex than using the already-existing logic applied iteratively.

But inserting empty blocks is also changing the logic.

No — we don't actually need to insert those blocks at all; we just need to apply the existing formula, say, ten times if ten slots were missed before this block.

Right, but the problem is that then the base fee goes down in situations where it should not go down.
But that's the main problem, especially if you do it iteratively.

I don't think that's true — I think it's more or less equivalent. So I don't think that's right; it's purely about complexity, which one we consider more complex. Both are of pretty similar complexity: you either take the timestamp difference mod the slot time and run the existing formula that number of times, or you run a slightly different formula once. I'm not arguing for one way or the other.

The execution client doesn't have the slot numbers, right? Just the timestamp.

That's why I said mod slot time.

Right — BLOCK_TIME_TARGET is a proxy here for slot time.

Right, okay, that makes sense. The two formulas are kind of equivalent. Cool. The mechanism you have is just a bit different from what I'd been thinking about. Why parent block time? Why are we using the grandparent instead of just the diff between the current block and its parent?

Because the base fee equation is evaluated when we validate the base fee of a block, which means we look at the situation in the parent. Within that we need the block time of the parent, and the block time of the parent is the difference between the grandparent's and the parent's timestamps. The base fee is never adjusted for the block itself; it's always adjusted for the next block.

So what would break if we just based it on the time delta between the block and its parent instead? The way I loosely think about this is that the base fee update is really two separate updates: an always-positive update as a result of gas being consumed in the block, and an always-negative update as a result of time passing. So in theory it shouldn't matter if the order of the two gets flipped.

That's a good point. I'd have to think about it.

There are also the missed slots before the parent block — missed slots between the grandparent and the parent. But no matter how you do it, every span between two blocks gets counted exactly once, so it should be fine either way.

Yeah, that generally sounds plausible. I'd just have to think about it briefly, to make sure there isn't even a small window where the base fee in a block is not the one it should be — even if it returns to the correct one in the next block. Ideally you never want individual blocks where the base fee is too low or too high.

Does the execution client know the slot time currently?

No. It could — I mean, it knows the timestamp, and the timestamp is ensured to be congruent with the slot time by the consensus layer, but it does not know the beacon chain genesis time and the seconds per slot. If those were baked into the configuration of the execution client, it could calculate the slot time from the timestamp.
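As a side note on the "timestamp mod slot time" point: here is a hedged sketch of how an execution client could count missed slots purely from timestamps, assuming it were given the seconds-per-slot value — exactly the configuration dependency being debated here. The function name is invented for illustration:

```python
# Illustrative only: counting missed slots from execution-layer timestamps,
# assuming the client is configured with the consensus layer's slot length.
SECONDS_PER_SLOT = 12

def missed_slots_between(parent_timestamp: int, block_timestamp: int) -> int:
    """Number of empty slots between two consecutive blocks.

    The consensus layer guarantees timestamps land on slot boundaries,
    so the delta is always a multiple of SECONDS_PER_SLOT.
    """
    delta = block_timestamp - parent_timestamp
    return delta // SECONDS_PER_SLOT - 1

# Example: a 24-second gap means exactly one slot was missed.
assert missed_slots_between(1_700_000_000, 1_700_000_024) == 1
```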
I would be a little hesitant to bake that into the configuration of the execution client, because one can imagine a point in the future where we want to change slot times and that kind of thing. It doesn't seem super valuable to have in there — although if you look at this PR, the EIP's BLOCK_TIME_TARGET is essentially a proxy for seconds per slot. So if it makes it into the EIP, then it is making its way into the configuration there.

Yeah, you basically do need this if you want to do anything about missed slots on the execution side; otherwise there's no way of telling.

I agree that it's necessary. It just changes this from "oh, this is a nice clean simple solution" to "now we're bleeding consensus-layer logic into the execution client", and that's what makes me feel less positive about it.

Yeah, fair. But it doesn't have to be that way. Can't we simply say the gas target is per second rather than per block, and then this constant would disappear? Then it wouldn't need an update if the slot time changed in the future.

You could argue that right now it doesn't need that update anyway; it's just a question of how it's written. One nice benefit of making it entirely timestamp-based is that if we do change the slot time in the future, we don't need to do anything else to ensure that capacity stays the same across the change.

But if you do it per second, the slot time is still baked in — it's just hidden a little bit more, because then it's in the elasticity multiplier: right now it would be two, and then it would be 24, and the 24 would just come from 12 seconds times two.

But the point is that fundamentally the constant should be per second, right? If we for some reason decided that proof of stake has to be two times slower, we would change it by a factor of two. So I would argue that the fundamental dimension of the constant is gas per second.

It's the elasticity that you want on a per-second basis.

I don't see what it has to do with elasticity, because the missed blocks are in the past and you still have future blocks coming. Imagine you miss three blocks and you get a three-times-bigger block, because your elasticity allows it — you still need to be able to process that three-times-bigger block before the next block, which you assume won't be missed, arrives. So you can't just accumulate all the past misses.

I think that if we ignore what we have implemented today and just think conceptually about what we want: we want the chain to have a certain amount of gas per second. How many blocks per second is unrelated to that — fundamentally we want gas per second. And the execution client does know when the last block was, how much gas that last block used, and how much time has passed since then. So — again, purely theoretically, ignoring the current implementation — we should have enough information to do gas per second without knowing what the intended slot time is. We should have enough data to answer that question.
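To make the "gas per second" framing concrete, here is a hedged sketch of what a purely rate-based target could look like — a thought experiment under the assumptions voiced above (a configured gas-per-second rate replacing the elasticity multiplier), not anything from the draft EIP:

```python
# Thought experiment: a per-second gas target, as floated in the discussion.
# GAS_PER_SECOND is a hypothetical protocol constant; with a 30M gas limit,
# a 12s slot, and elasticity 2, the equivalent rate would be 15M / 12 = 1.25M.
GAS_PER_SECOND = 1_250_000

def gas_target_for_block(parent_timestamp: int, block_timestamp: int) -> int:
    # The target scales with elapsed wall-clock time since the parent block,
    # with no reference to an intended slot time. A longer gap (missed slots)
    # automatically yields a proportionally larger target, though the block
    # gas limit would still cap actual usage, per the elasticity discussion.
    elapsed = block_timestamp - parent_timestamp
    return GAS_PER_SECOND * elapsed

# 12s gap -> 15M target; 24s gap (one missed slot) -> 30M target.
assert gas_target_for_block(0, 12) == 15_000_000
assert gas_target_for_block(0, 24) == 30_000_000
```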
I don't know how we'd convert to that. In that scenario it would just replace the elasticity multiplier, because right now we don't actually explicitly set the gas target — we only set the gas limit and use the elasticity multiplier to calculate the gas target. If we set the gas target on a per-second basis and separately set the block gas limit — which we still need, to know what the maximum block size is — then we would no longer need the elasticity; it would just be implied.

Yeah, so I guess my argument here is, weakly, that I would rather see us come up with a larger change that removes the need for the execution client to know the intended block time. With the merge, I would rather not leak information about block time into the execution client if we can avoid it. If that means making a larger change to this formula, I think I would personally prefer that over a simpler change that results in a leak of information.

So why is that a problem? I don't get that. Fundamentally, why is the information leak bad? It feels like it's about purity.

No, no — any time you leak information, you more tightly couple specifications and upgrades.

Right, but I think they simply are that tightly coupled. It happens to look easy because the current block times are very close to each other, 12 and roughly 13 seconds, but if we had very different slot times we would have to make other changes. I would just say: they are coupled.

Łukasz, you have your hand up.

I will probably throw a wrench into the works, but it feels like we are fixing a consensus problem in the execution engine. The question for me would be: why can't the consensus side potentially handle this — more like Clique does, where you can produce out-of-order blocks if someone misses their slot, or something like that?

You could. I mean, the way a proposer is selected is just fundamentally different here. You could do some sort of backup model where, if somebody doesn't show up within a second, somebody else produces a backup block. But even if you did that, you could still have missed slots resulting in reduced capacity. So it's kind of natural for the execution layer to be aware of it. Ansgar?

Yeah, I just wanted to briefly respond to the conversation about leaking slot-time information into the execution client: I don't actually think this is avoidable, because even if we set the gas throughput per second and the block gas limit separately, we'd still want them to go up and down in lockstep. Otherwise we don't have a mechanism for that, so we would still need to hard-code the elasticity — and if we hard-code an elasticity of 24 between the two, then we are already hard-coding the slot time again, just in a hidden way.

Why do we want them to go up in lockstep? How else would you ever move the gas limit and the gas target up and down, if not separately?

Sorry — I thought you were saying that when the block time increases, you would want the gas limit to go up at the same time. I misunderstood.
Actually, just as right now — and I don't think we want an EIP at this point that would remove this control from the block proposer — block proposers can slightly notch the gas limit up and down, right? If they continue to be able to do so, we would want the gas target per second to go up or down by the same fraction.

We could make it so. This is where it gets into a bigger change to avoid the information leak, but one can imagine a change where what you increase is the gas-per-second rate instead of the gas-per-block rate. The limit that moves is the per-second rate, and every proposer can do a 1/1024 increase or decrease on the gas-per-second rate.

How would the block limit go up or down?

It's just the rate — if our target is, you know, 10 gas per second or 1,000 gas per second or whatever, then each proposer can increase or decrease that by up to 1/1024, which would be functionally the same as them currently moving the block gas limit by 1/1024.

But what would the block limit be in that world? Would it be set separately, or automatically be part of that?

The block limit would be automatically calculated.

But if it's automatically calculated as, say, 24 times that number — then if the rate stays the same and the slot time goes up, the block limit would also go up, and that isn't necessarily what we want; we may want blocks to come every... I don't know. I'm going to stop talking for a minute and think.

Right. And to briefly jump in here: it's probably not ideal to discuss these details too much on this call. This is just one specific proposal that came out of me talking to a couple of people, and there are definitely other flavors of it that might reach the same goal better. The questions are: (a) do we think this is necessary for the merge itself, or could we wait for Shanghai and come up with a really solid, well-worked-out solution; and (b) if we think we should do it as part of the merge, let's briefly talk about how we go from here and then take the actual discussion offline.

Yeah, that seems reasonable. Does anyone think we should not do this at the merge — is anyone strongly opposed? There's the weak objection around the information leak that Micah just posted in the chat, but that aside, does anyone feel like it's not something we should do?

As I said before, I'm not convinced this is an execution-layer problem.

Right, so it could be handled on the consensus side, sure. And the trade-off is obviously that if we do it and it's non-trivial, it adds more scope to the merge. But it does seem important, so it seems worth exploring more.
Just to speak to that: there are always additional things you could do on the consensus layer to try to avoid missed slots, but there is just a stronger notion of time there, and a notion of something not happening during that time. Even if you shore it up in some ways, you will still have missed slots — I don't think that's going to go away — and the EVM can either react to that or not with respect to its capacity. So there are certainly things on the consensus layer where you'd want to try to make sure slots aren't missed, or that there's recovery in case they are, but blocks will always be able to be missed.

Does it make sense to schedule a breakout room for this sometime next week or the week after? It feels like we want to think through the design space and come back with some proposal that we all agree makes the right trade-offs.

I'd definitely suggest this coming week. If we're going to do anything to the EVM for the merge, this is the thing to do. I do think that 10% of block proposers going offline for some reason or other is totally something that could happen, and reducing the incentive for an attacker to make that happen, and reducing the impact it has on the execution layer and on capacity, is very nice to have. And if it's going to happen, then we need to really expedite making a decision.

Would it really decrease the incentive? If I have a validator and I know the validator right before me — I know its IP — I can just DoS the validator right before me and then create a block that's twice as big.

There are two types of attacker here: one is validator-on-validator, and the other is external. It reduces the external incentive to attack, and it might actually increase, as you noted, the intra-validator attack incentive.

Okay, so maybe as a next step, Ansgar, can you propose a couple of times that work for you on the AllCoreDevs Discord for next week, and we can have a roundtable there? Awesome. Thank you for working on this and presenting it today.

Okay, last thing on the agenda, which will probably take up the bulk of the call: Dankrad and Guillaume have been working on statelessness, and on Verkle trie implementations that facilitate it, and they wanted to share the general roadmap around statelessness, why it's important, the solution space, and why specifically Verkle tries are an important step in that direction. I'll hand it to the two of you — go for it, Dankrad.

Hey. I just shared my screen; can you see that?

Yes, we can. Yep.

Okay, so I wanted to give a quick overview of the Verkle trie work for everyone on the call, so that you know where we are and why we are proposing these changes, which are quite fundamental. I'll start by giving a very rough idea of the whole thing. So, what's a Verkle trie? Verkle stands for "vector commitment" plus "Merkle" — it's basically a tree that works similarly to a Merkle tree, but the commitments are vector commitments instead of hashes. And "trie" comes from "retrieval"; it just means a tree where each node represents a prefix of keys, which is already the case in the current Merkle Patricia tries. So what does that mean?
When we look at a Merkle proof — I made an illustration here — if we want to prove this green leaf, then as we go up the tree we need to compute the hashes at all the yellow nodes. And for that we need to provide all the siblings of the green and yellow nodes, so that we can always compute the parent hash. If we change this — here I show what happens if we go to width four instead of the binary tree I just showed — then we've reduced the depth, but now we need to give three siblings at each layer instead of the one we had previously. So by increasing the width we actually increase the size of the proofs, which is currently one of the big problems with Ethereum's state: Merkle Patricia tries are width 16, so the proofs are huge.

So how do Verkle trees change this? Here is an illustration of what happens in a Verkle tree in the same situation. We again have the green leaf that we want to open. With a "good" vector commitment — in quotation marks — like the ones we are going to propose, instead of having to give all the siblings, which is what happens when you use a hash as your vector commitment as Merkle trees do, you only need one opening at each of these layers, and that opening is constant size. That's why it suddenly becomes efficient to increase the width: the proof size doesn't increase, it actually decreases. In this very tiny example we just have to provide this one inner node as part of the proof, plus these two openings as the vector commitment openings. Even better, we're typically going to use additive commitments, so all these openings collapse into one. So as a short summary: you give this one inner node and one opening proving that the leaf was part of the inner node, and the inner node was part of the root. That's where Verkle trees gain their efficiency: you no longer need to give all the siblings, only a small proof that everything is part of its parent.

Okay, I made a short illustration of how good they are. We're going to suggest a gas cost of 1,900 per state access, and at the current gas limit that would mean about 15,000 state accesses. With a hexary Merkle tree, witness sizes are currently about three kilobytes per witness, and that's 47 megabytes — absolutely huge. If we change to a binary Merkle tree, we'd have about half a kilobyte per witness, and you get eight megabytes — still pretty big. Now, if we instead use a width-256 Verkle tree with 32-byte commitments — so 32 bytes as we have now, but a different type of commitment — then it's only about 100 bytes per witness, which reduces it to 1.5 megabytes. And now we're finally in a range we can consider reasonable, one that lets us do statelessness.

On the cost side, I made some estimates. If we want to prove 5,000 accesses, then each of the roughly 25,000 openings you'd have to compute needs 256 times four field operations, each costing about 30 nanoseconds. That's about 750 milliseconds for such a proof.
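A quick back-of-the-envelope reproduction of the numbers just quoted — purely arithmetic on the figures stated on the call, with the per-witness sizes and operation counts taken as given:

```python
# Reproducing the witness-size and prover-time estimates from the call.
accesses = 15_000                      # state accesses at ~1,900 gas each

# Witness size per scheme (bytes per access, as quoted on the call).
for name, per_witness in [("hexary MPT", 3_000),
                          ("binary Merkle", 500),
                          ("width-256 Verkle", 100)]:
    print(f"{name}: ~{accesses * per_witness / 1e6:.1f} MB")
# -> ~45 MB, ~7.5 MB, ~1.5 MB (the call quoted 47 MB / 8 MB / 1.5 MB)

# Prover time: ~25,000 openings, each 256 * 4 field ops at ~30 ns.
openings = 25_000
seconds = openings * 256 * 4 * 30e-9
print(f"prover: ~{seconds * 1e3:.0f} ms")   # -> ~768 ms, quoted as ~750 ms
```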
So that seems pretty reasonable in terms of proving time, and that's the work the block producer would have to do. The verifier has to do a multi-scalar multiplication whose size is determined by the number of commitments that are opened — an MSM of size 15,000 for these proofs — and we estimate that can be done in about 50 to 150 milliseconds. These are all estimates based on the raw speed of the base operations; benchmarks are still coming in, and we need more optimizations to actually get there.

Okay. Now I'll come to the tree structure that we're going to suggest, because that's important for the roadmap and for why we're suggesting these changes in a certain order. The design goals we had in mind were that we want access to neighboring code chunks and storage slots to be cheap, but at the same time to distribute everything as evenly as possible in the tree, so that state distribution becomes easy. These two goals might seem contradictory, but the way we do it is that we aggregate neighboring code and storage slots into chunks of 256, and only within these chunks is access cheap; the chunks themselves can then be fairly evenly distributed. We also want the whole thing to be fast in plaintext — fast in the direct application as we're suggesting it now — but also fast in SNARKs: we envision that within a few years it will become very feasible to compress all these witnesses using SNARKs, and we optimized everything so that it can be done very efficiently in a SNARK. That would also help anyone who designs rollups, or anyone who wants to create state proofs and feed them into a SNARK. And finally, the whole thing should be forward compatible, so we're designing a pure key-value interface with 32 bytes per key and per value.

Keys are derived from the contract address and the storage location. The two are used to derive a stem and a suffix. The stem is simply a Pedersen hash — Pedersen is a type of hash that's also efficient to compute in a SNARK — of the contract address and the storage location except for its last byte; we exclude the last byte there, and put the last byte of the storage location directly into the suffix. The nice thing about this is that for any storage locations that differ only in the last byte, the stem will be the same and only the suffix will differ. We then put everything into a tree that looks like this: we have the Verkle trie at the top, which locates the stem and works very similarly to the current account trie. At each stem there is an extension node that commits to all the data under that stem. That means that opening several pieces of data under the same stem — or the same suffix tree, as we also call it — is very cheap: the whole stem path is already opened at that point, there's nothing new to do, and you just have to open another point in these polynomials, C1 or C2. So that is a very fast and very cheap operation.
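Here is a hedged sketch of the key layout just described — a 31-byte stem plus a one-byte suffix — with a stand-in hash, since the real scheme uses a Pedersen-style commitment rather than the placeholder below; function names like `get_stem` are illustrative, not from the spec:

```python
# Illustrative stem/suffix key derivation, per the structure described above.
# NOTE: the actual proposal uses a Pedersen-type hash (SNARK-friendly);
# sha256 here is a placeholder so the sketch is runnable.
from hashlib import sha256

def get_stem(address: bytes, storage_key: bytes) -> bytes:
    """31-byte stem: hash of address plus storage key *minus its last byte*."""
    assert len(address) == 20 and len(storage_key) == 32
    return sha256(address + storage_key[:31]).digest()[:31]

def get_tree_key(address: bytes, storage_key: bytes) -> bytes:
    """32-byte tree key = stem || suffix (the storage key's last byte)."""
    return get_stem(address, storage_key) + storage_key[31:]

# Two storage slots differing only in the last byte share a stem, so proving
# both only requires opening extra points in the same extension node.
addr = bytes(20)
k1 = bytes(31) + b"\x01"
k2 = bytes(31) + b"\x02"
assert get_tree_key(addr, k1)[:31] == get_tree_key(addr, k2)[:31]
```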
As a rough summary: we get a huge reduction in witness sizes — about five times smaller compared to binary Merkle trees, and more than 30 times smaller than the current hexary tries — so that's pretty huge. And verification times are pretty reasonable, similar to a binary Merkle tree; the overhead isn't huge, so even in the worst case we estimate it can be done in a few seconds. Our solution doesn't need any kind of trusted setup: it's all basic elliptic curve arithmetic, which we're already using in Ethereum, under just the discrete logarithm assumption. And I think it's currently the only known solution for Ethereum state that doesn't come with huge trade-offs in terms of witness size and so on. So yeah, that was my quick introduction. I can't see any questions about it at the moment.

I don't think there were any — there were a couple of technical questions in the chat, but I think they all got answered.

Okay, great. Oh, go ahead.

Please feel free to reach out to me if anyone has questions about this; I'm very happy to explain anything, and I also wrote a few blog posts about it. It's obviously something to get into, and it is a big change, but I think it also comes with huge advantages. I also gave a PEEPanEIP talk if anyone wants to watch it; I think it has some more details.

Sorry, Dankrad, I did have a question from chat. You mentioned the worst-case witness size — do you have data on what the witness sizes would be with the current average mainnet usage?

So that's not technically the max witness size — there are different definitions of max. That is the max if you don't spam the tree; what I shared was a rough estimate for a block that does only state accesses for the whole block. If you spam the tree, there are some worse things you can do. The average case, I think, should be a couple hundred kilobytes.

Actually, I wanted to share this as well: I made a little calculator where you can play with these things. It uses all our suggested parameters. You enter how many elements are in the tree, and it computes the average depth — all with the suggested gas changes as input. Then you can enter your own numbers. This is an example where we access 1,000 different branches — 1,000 stems, essentially — and 4,000 chunks; then we have 400 different branches updated and 1,200 chunks updated. It gives a number here which is the gas cost for this: this example would spend more than 6 million gas just on state access. So that seems like a roughly reasonable average case — maybe even a high average case; I don't know how much people are going to spend, because people are obviously also going to adapt.

In this case, the total data — and any scheme has to provide the data, that's an absolute must; unless you SNARK the whole execution, you have to provide the data — would be about 200 kilobytes. The total proof size — all the commitments and the opening that you have to give in addition — would be 110 kilobytes. So the total witness size would be 308 kilobytes for this roughly average case. I'll post this as a link on the ACD call as well — please feel free to make a copy of the sheet and play with it.
And here are numbers on what the prover time would be — about 250 milliseconds — and the total verification time, 64 milliseconds. Obviously all of these are estimates at the moment.

Cool. Those benchmark estimates — what rough class of hardware are they assuming?

A recent Intel CPU, basically — like a laptop or a server. It's not parallelized, so it shouldn't really make a difference; it's single-threaded.

Okay. One thread on a modern Intel CPU, roughly.

That's my estimate. As I said, these are based on what the dominant operation is for each of these things and estimating how many of them you need. I think these two things I can estimate fairly well; however, this does depend on eliminating all the other bottlenecks. And just to be clear, the prover time is only the time to actually generate the proof. It's very likely that most of your time will actually be spent getting the data from the database — you still need to do that separately.

So, the reason we brought this to the call now is that we want to suggest a roadmap for how we make Ethereum stateless. The idea is to spread the changes over three different hard forks, and at the end of it Ethereum would not be stateless in itself, but it would gain optional statelessness — I'll explain in a minute what that means. The change in the first hard fork — which I suggest should be Shanghai — would be to make the gas cost changes that enable all this. The reason for making the gas cost changes first is, for one, that they're relatively easier to implement — well, actually we do need some changes to the database structure, but not the whole commitment structure. But the most important reason, I think, is to give signals to developers as early as possible on how they should handle state access in the future. Every month where state access remains cheap and everything remains as it is, is another month where new contracts get deployed that all depend on the current gas scheme — and they will all be upset when later everything changes and some things become super expensive, or they could have been developed in a more efficient way. That's really annoying. It would be so much better if we could get developers onto the right numbers as early as possible, and realistically, let's be honest, the only way to do that is to actually change the gas costs. So that is why I suggest making these gas cost changes in the first hard fork.

In the subsequent hard fork — call it Shanghai plus one, whenever that comes — we simply freeze the current Merkle Patricia trie root exactly as it is at that point, and we add a Verkle trie commitment. It starts as an empty commitment, and we just track all the changes from then on. And at Shanghai plus two, we replace the frozen Merkle Patricia trie root with a Verkle trie root. The reason for this staging is that at no point does there need to be an online recomputation of the state: all the recomputation of the database and commitments can be done in the background; it doesn't have to be done online.
Okay, so the gas cost changes. Guillaume — well, it's based on work by Vitalik, but Guillaume has separated them out into a separate EIP draft. The idea is basically this — keep in mind the design of the Verkle tree, where we have these different parts: the stem tree, which tries to group together similar storage locations, and the extension nodes, each representing 256 storage locations that are close together. So we typically have two different kinds of costs: a cost when you access any stem tree, and a separate cost when you access chunks within a stem you've already accessed in the same suffix tree.

The nice thing is that some things will actually get cheaper — that's the good news for smart contract developers: not every state access will suddenly become crazy expensive; if they design things well, they can actually save some gas. I've suggested five different costs here, depending on what you do. For each stem that you access during a transaction, you pay 1,900 gas. For each chunk that you access, you pay 200. So if you access ten chunks within the same stem, you pay 2,000 gas for those, plus 1,900 for the stem — a total of 3,900. Then, in addition, for writes: for each stem that you write to, you pay a fixed cost of 3,000, and again for each chunk within that stem you pay 500. So if you edit ten of them, you pay 5,000 for the chunks, but you only pay the stem cost once if they're all within the same stem. And finally, when you fill a new chunk — when you add a node that has never been written to before — you pay 6,200. So adding new state is still somewhat expensive.

And finally, this might be one of the most controversial of all the changes, but I think it's overall just important and we should figure out a way to do it: deactivating SELFDESTRUCT. The way we deactivate it is to rename it to SENDALL: it moves all the ETH in the account to the target, but it doesn't do anything else — it doesn't destroy code, it doesn't destroy any storage, and it doesn't refund anything for destroying those, because they aren't destroyed. So that's basically the suggested set of gas changes for Shanghai.
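Here is a small sketch of the access-cost accounting just walked through, using the five costs as quoted on the call (1,900 / 200 / 3,000 / 500 / 6,200); the function name and structure are illustrative, not taken from the EIP draft:

```python
# Illustrative witness gas accounting per the five costs described above.
WITNESS_BRANCH_COST = 1_900   # first access to a stem in a transaction
WITNESS_CHUNK_COST = 200      # each chunk accessed under that stem
SUBTREE_EDIT_COST = 3_000     # first write to a stem
CHUNK_EDIT_COST = 500         # each chunk written under that stem
CHUNK_FILL_COST = 6_200       # each chunk written for the first time ever

def access_gas(stems_read, chunks_read, stems_written, chunks_edited, chunks_filled):
    return (stems_read * WITNESS_BRANCH_COST
            + chunks_read * WITNESS_CHUNK_COST
            + stems_written * SUBTREE_EDIT_COST
            + chunks_edited * CHUNK_EDIT_COST
            + chunks_filled * CHUNK_FILL_COST)

# Reading ten chunks under one stem: 1,900 + 10 * 200 = 3,900 gas, as in the talk.
assert access_gas(1, 10, 0, 0, 0) == 3_900

# The calculator example (1,000 stems / 4,000 chunks read, 400 stems /
# 1,200 chunks written; new-chunk fills excluded here) lands in the same
# few-million-gas ballpark quoted on the call.
print(access_gas(1_000, 4_000, 400, 1_200, 0))  # -> 4,500,000
```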
So it's mainly gas cost changes. However — and this is one of the reasons we're introducing this whole thing early and asking for feedback — it will require changes to the database, because we are reducing the costs for these chunk accesses. If in practice they still have the same cost as they do now, then that's a DoS vector, and that would be annoying. So it requires that clients already make some adaptations to their databases so that accessing chunks within the same stem is actually cheap. I think the reasonable way to do that is to store the whole extension in one location in the database — that's the easy way; even if the suffix tree is full, that's about eight kilobytes of data, which is not a huge amount. Basically, every time you read, you just read the whole extension from disk — whether you read 32 bytes or 10 kilobytes makes almost no difference; it's the number of I/O operations that actually matters. That should alleviate the concern. So that's the suggestion for the Shanghai hard fork.

In the next hard fork, we freeze the Merkle Patricia root and add a Verkle trie commitment. We say the current state root is frozen exactly as it is and no changes are made to it, and we add an empty Verkle trie root that contains nothing — but whenever anything from the state is written to, or even read, we transfer it into the Verkle trie. That doesn't mean we remove it from the Merkle Patricia trie; we leave that as it is. Then, in the background, at any point between this fork and Shanghai plus two, you can do the background computation where you recompute the MPT root's contents as a Verkle root, and at Shanghai plus two we replace the MPT root with the Verkle trie root. You can do that locally, and it can be done in the background because all the data is constant — even if you have to access the same database, you can do it in another process or whatever; people can run it at any point, and you have several months to do it. An even simpler solution for some clients could be to simply provide the converted database as a torrent. There should still be a way to verify it, but maybe not everyone absolutely has to do that — the trust model, if you just download it as a torrent, is not really different from doing a snap sync, so as long as a reasonable number of people verify it, I don't see a security concern there. And if we actually already have state expiry at that point, then we don't even need a database conversion for most clients: we can simply say that at the fork the first state period is expired, and then you literally only need to replace the root. Normal nodes that don't keep all this old state don't even need to convert it — they can just forget about it.

And what's the result of all this? We get optional statelessness. At this point anyone can add block witnesses to a block, and we've made sure they are reasonably small, quick to verify, and reasonably easy to produce. What that means is that we can create separate networks where we have stateless blocks. One obvious option would be to simply use what is now the consensus network — the libp2p network on which all the consensus clients run — and distribute stateless blocks on it. But there could also be more experimental networks that do things with these stateless blocks. And then, as an optional — or very likely — future step, one way we get to full statelessness is that we just deprecate the old devp2p network for consensus: consensus then runs 100% on libp2p, and devp2p remains as a state network. Anyone who wants full state goes there, and libp2p is used by all the consensus nodes and clients and whoever wants blocks with witnesses. Yeah, cool.
Yeah, that's my introduction of the whole thing. Do we have any questions at the moment? So, first question, on the change in Shanghai. Maybe I'm missing something here, but it feels like that's a pretty significant change, changing the database structure so that current execution clients can correctly calculate the future gas costs. I'm concerned that may be something they can't even do. Right, so to be precise, you can easily compute the gas costs; that's not the difficult part. All these costs can be computed, and we could add that to a client right now and compute it correctly. Without database changes, you mean? Yeah, you can compute all of that: you need to compute the new keys, and you need some array where you store everything that has been accessed, but that's all tiny, that's not a problem. The problem is that we are making some things cheaper with these gas changes, and that realistically requires some database changes. Yeah, I agree it's a concern, but that's also one of the things I wanted to bring here for feedback. We could do what Marius, I think, is suggesting, which is to calculate both gas costs and use the higher of the two: calculate what the gas cost is under the old database layout, also calculate what the new gas cost would be, and then just charge the higher of those two for each operation. Yes. The downside of that is, of course, that we're giving nothing to smart contract developers; we're making everything strictly worse. Yeah, I'm okay with that. So, second question, on the Shanghai plus one fork. If I understand correctly, every database lookup will require two reads: one to read the verkle tree to see if the value is present, and then a second one to check the old MPT, and then essentially migrate it as well. Are we accounting for that in the gas costs, the cost of the double read and the migration, or are we just going to say that hopefully this is uncommon enough that there isn't a DoS vector from triggering a huge number of migrations? As a precision, you don't actually need to go through the tree: because the tree is frozen, you can just store the data that was in the MPT as a flat key-value store. So it still costs something, but it's not as expensive as traversing a tree. Right. And maybe to be clear here, I don't think there should be two databases; the data, the actual keys and values, only has to be in one database. Maybe that's not the way it's currently implemented, but the right way to think about it, I think, is that the data and the commitment scheme are two independent things. There are two different commitment schemes at that point, but there aren't necessarily two different databases. That makes sense. I see, so the actual keys and values, you're imagining, live in a third, air quotes, database, and the other two are just commitments over it? Basically, yes. And that database simply has a little marker that says: is this already in the verkle part, or is this still in the MPT. Out of curiosity for the client devs, does that align with your current database layouts? I'm particularly interested in Erigon, since I know their database is very different.
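For clarity, the transition rule attributed to Marius above could look something like the following sketch. The function name is hypothetical, and the 2100 figure is just the current cold-SLOAD cost, used here only as an example of an "old" price.

```python
# A sketch of the "charge the higher of the two schedules" idea.

def transition_access_cost(old_cost: int, new_cost: int) -> int:
    # Safe but developer-unfriendly: nobody ever pays less than today, which
    # removes the DoS risk of an unconverted database layout, at the price of
    # making every access at least as expensive as it is now.
    return max(old_cost, new_cost)

# A chunk read priced at 200 gas under the verkle schedule would still be
# billed 2100 if the old schedule charges a cold SLOAD for it.
assert transition_access_cost(2100, 200) == 2100
```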
Well, in Erigon we have so-called plain state, which is separate from the hashed state required for the Merkle Patricia trie. So we already have that in Erigon, and for us it would be relatively easy to implement this additional verkle trie commitment. Is that also true for the other clients? That's kind of hard to say. I'm not sure about the trie handling itself, but we do have the ability to swap things out as far as the underlying data structure goes, so that's a tough question for me to answer right now. We'll keep an eye on it; whether we can do it the same way Erigon does is one of the things we're trying to figure out. So, back to my question: will we need gas accounting for the migration step, since I'm guessing that's a non-free operation? Like, the first time you read something from the MPT and you need to write it into the verkle tree, does that need to cost more gas than any subsequent reads, because there is database work to do? I was worried that someone could manufacture a block that does a huge number of migrations in one block and potentially blow things up. Right, that's an interesting question. Maybe I misunderstood, but since the migration is happening offline, I think you can get someone to do it for everybody else and share it. No, I don't think the question is about the third step, where you replace the root. Exactly. So basically, right now we are in the state where things that are still in the Merkle Patricia trie can be accessed cheaply, because, for example, you can write to them without incurring the 6200 gas, I guess. Yeah, it's a good question, and I haven't thought it through; I don't know if it has a cost in the current draft. One way to potentially think about it is to apply the costs from the perspective of the verkle tree. Naively, it seems like we can say that if you access something that isn't in the verkle tree yet, then it's a write, basically. Yeah, basically you charge write costs, or maybe write plus read costs, whatever it is. That could be one way of doing it. Vitalik, have you thought about it? I think he left. And Andrew, I saw you had a comment about the target costs, do you want to bring that up? Yeah, just, I think, because the new target costs are already on average higher than the status quo, if we defensively make them even higher during the transition, that might be too expensive for smart contracts. So I'm thinking, if having the new target costs requires some kind of database refactoring, perhaps it's still worth doing that refactoring but delaying the gas cost changes, not to Shanghai but to a later fork. To my mind, some kind of database layout refactoring is pretty much a prerequisite; that's my impression. Yeah, I agree. We're about at time. What's the best place to continue this conversation, Dankrad? I think we have a channel for this on the R&D Discord, don't we? Yeah, we have a state expiry channel. Right, I think that's separate; oh, we have a verkle trie migration channel. Yeah.
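The pricing idea floated here, billing a not-yet-migrated access as a write on top of a read, might look roughly like this sketch, reusing the illustrative constants from the earlier sketch. It's only one possible reading of the discussion, not a specified rule.

```python
# A sketch of charging an un-migrated access as read plus write, since the
# first touch triggers a migration into the verkle commitment.

def overlay_read_cost(in_verkle: bool, stem_already_read: bool,
                      stem_already_written: bool = False) -> int:
    cost = 200 + (0 if stem_already_read else 1900)   # normal read pricing
    if not in_verkle:
        # First touch migrates the value into the verkle commitment,
        # so additionally bill it like a write.
        cost += 500 + (0 if stem_already_written else 3000)
    return cost

assert overlay_read_cost(in_verkle=True, stem_already_read=False) == 2100
assert overlay_read_cost(in_verkle=False, stem_already_read=False) == 5600
```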
Yeah, and note that we intentionally designed everything so that it's independent from state expiry, so that all of these can be worked on independently and they aren't blockers for each other. So, next steps: I'm very happy, if anyone wants to understand more or has any questions, for them to reach out. And I guess the big open item here would be discussing the database changes that would be required for the Shanghai gas cost changes we're suggesting. If we can discuss that, and see where each client is on that and how big those changes are, that would be great to understand. I think we can just use the verkle tries channel for this. Oh yeah, I'll type the name in the chat here in case people are not on it. Yeah, thanks a lot, Dankrad and Guillaume, for sharing and obviously for working on all this. Any final questions? If anyone has ideas on how we can do address space extension, that would allow us to prioritize state expiry, and I think this transition process is actually significantly easier if state expiry can be done simultaneously or first. Great. And there is also an address space extension channel on the Discord. And I'll note before we head off that, at least in North America, daylight saving time changes before the next call; I'm not sure about Europe, but I think so as well. So please double check the time. The call stays at 14:00 UTC two weeks from now, but at least in North America that's one hour earlier in your local time, and I think that might also be true in Europe. So please just double check that before we meet again in two weeks. Yeah, thanks everybody. Thanks. Thank you. Bye bye.