Okay, we are recording. So hey everybody, this is EIP-1559 call number nine, which feels like a lot. I shared the agenda in the chat. We have a couple of things to go over today, a few updates from different folks, and then the large state testnet testing, and that should be it. Ansgar, I see you're on the call and I think you had to head out early, so do you want to start with your updates? Well, I don't think that was me — I think that must have been someone else. But I'm happy to go first, although it sounds like maybe someone else has to leave early. Does anyone else? Just go ahead, Ansgar. Yeah, I mentioned it to Tim, but... Oh, okay, but go ahead. Okay, I think there might have been a scheduling conflict, but I moved the other things around, so I can go ahead. So basically, I linked a little document again, a follow-up to the previous one I had maybe two months ago, roughly two months ago. The new one I linked yesterday, maybe a little late for people to have had a chance to look at it. Basically, I've been thinking quite a bit more about the sorting aspect in clients for 1559 transactions. And as it says in the TLDR at the beginning: when I first started thinking about it, just because I didn't have a lot of background in 1559, it looked very much like, now you have two parameters, right? The maximum total fee cap and the maximum miner bribe. And now you have to sort in a two-dimensional space, with all the complexity that comes from that, re-evaluating every block and everything.
But the more I looked into it, the more it seems like it's really mostly about understanding the very specific structure of how 1559 will affect transactions, and optimizing the client around that. What I mean by that is: if we look at the mempool under 1559 in a normal situation — not one of these extreme demand spikes, which should be the rare exception — you usually only have a handful of includeable transactions. An includeable transaction means the fee cap of the transaction is above the current base fee. And the reason you only have a few is basically by definition, right? Under 1559, in the steady state, blocks are on average just the target size. They might be a little over, a little under, but on average they are target size. And miners always include as many transactions as they can, up to twice the target. So the fact that blocks in the steady state are only one times the target size already indicates that, on average, a miner creating a new block only has enough transactions to fill one times the target size. And so in the mempool, of course, that means the vast majority of transactions are usually not includeable at the current price. Which is probably a very trivial observation, but it took me a while to fully get there and understand the implications, because there are a couple of implications. I think the more obvious part is for mining. For mining, you only care about the currently includeable transactions. You don't care about what might be includeable in a couple of blocks: you're mining the current block, and you need the transactions that are includeable at the current block.
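As a rough illustration of the "includeable" notion described above — a minimal sketch, not any client's actual API; the `Tx` shape and field names are invented for the example:

```python
# Sketch: filtering a mempool down to the currently includeable
# transactions under EIP-1559. A transaction is includeable when its
# fee cap is at or above the current base fee. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Tx:
    fee_cap: int       # max total fee per gas the sender will pay
    miner_bribe: int   # max priority fee per gas (the "miner bribe")

def includeable(mempool, base_fee):
    """Return only the transactions that can go into the next block."""
    return [tx for tx in mempool if tx.fee_cap >= base_fee]

pool = [Tx(fee_cap=90, miner_bribe=2),
        Tx(fee_cap=120, miner_bribe=1),
        Tx(fee_cap=200, miner_bribe=5)]
print(len(includeable(pool, base_fee=100)))  # 2
```

With a base fee of 100, only the 120 and 200 fee-cap transactions qualify, matching the point that most of the pool usually sits below the line.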
And so basically what that means is that the mempool ideally should have a way of giving you only the currently includeable transactions. Initially I was concerned with really designing an optimal algorithm for managing a partially sorted structure that always gives you the highest-paying transactions, and I outlined a design for that in the last document. And then — I'm not sure if they are on this call — a user took this approach and implemented an example sorting mechanism, which was awesome to see. Yeah, exactly, this one. But the main insight on that specific topic is that this is probably an optimization that is not necessary at all, or at least not necessary in the beginning. Because what miners already do today — at least the geth miner, which I'm most familiar with — is get a complete list of all pending transactions, which before 1559 can be thousands of transactions, and sort them all, I think by default once every three seconds or so, and then discard that and completely re-sort three seconds later. So they're already doing much more than they'll have to do under 1559; under 1559 the effort for the miner only goes down. We can always still optimize, but this really is a very simple thing. And then the main change that I think is actually most worth talking about for the miner is not even the sorting itself, but how you handle the stream of incoming transactions.
Because right now, what miners usually do is create a block, start mining on an empty block just to be able to start hashing immediately, and then, as soon as they've assembled a block, switch over and mine on top of that. And then on an interval — I think the default is once every three seconds, but you can turn it up to once every second — they create a completely new block, because new transactions might have come in since then, and once that's done they switch to mining on that one. Of course you can do the naive thing and do exactly the same under 1559 as well. But if you look at the specifics of 1559: as soon as a block comes in from outside that you want to mine the next block on top of, that block will probably already have used up most of your includeable transactions. So you'll start out as a miner with maybe zero, maybe a very small number of transactions to include immediately. Then over time, while you're mining and haven't yet found a block, new transactions will come streaming in. So on average — say every 12 seconds, or whatever the average block time works out to — it will take you about 12 seconds to collect enough transactions for one normally full block, and after maybe 24 seconds, very roughly speaking, you'll finally hit the cap and your block is actually full, at twice the target size. So what you ideally want to do is, as soon as a new transaction comes in, just immediately append it to the block you're already mining on top of.
And then maybe still, once every second or every three seconds, you can do a reorg where you re-sort them by some other metric — which doesn't really matter for the miner, but for the network as a whole it's maybe nice if they mostly prioritize higher-paying transactions even within the block. But this immediate-append action is not done currently, and that's something I would propose should probably be done as part of the initial 1559 implementation, although even that optimization is optional. And then for the broader sorting side, for mining it's really simple. Before we go to the other big topic, eviction — does that make sense so far? Does anyone have questions on that? I think that all makes sense. The one thing I would say is, I think first-in-first-out is actually slightly healthier for the network than price-based ordering if there's no contention. So my vote would be: until the block is actually full, don't bother re-sorting the transactions. If we have no need to do it, sorting by gas price doesn't really help the network, I don't think, because it just encourages more gas price auction stuff. Whereas if we're doing FIFO most of the time, gas price auctions become much harder and much less profitable, which is kind of good, because gas price auctions are not the healthiest thing. They're a thing we deal with because we have to, but it'd be nice if they just kind of went away. So my vote would be: if you don't have a full block, just do first-in-first-out, and like you said, just append until your block is full. Then once the block is full, decide, okay, what am I going to kick out of the block, and rebuild it — and at that point sorting might be the easiest way, in which case sure, go ahead and sort.
But my hope is that if most blocks are not 2x-full, then we can do basically FIFO. Miners still get their payday, and we don't encourage this behavior of people hammering the network with gas price wars. Yeah, and of course this is a little bit tricky, because for miners these gas wars might actually still be beneficial, since they drive up the miner bribe. But I think, as you were saying, we do need the re-sorting functionality as soon as we reach the actual 2x cap. So it probably wouldn't hurt to at least expose it to miners, so they can optionally switch and do that immediately. But yeah, I think the simplest approach would really just be first-in-first-out until you reach the 2x cap. And I would say this can probably be left to the individual client devs to decide. Yeah, just to build slightly on that: I think, most importantly, if we have geth do first-in-first-out until 2x-full, and other clients do slightly better sorting, it would encourage more client diversity among miners. It would almost make geth not quite as good as the other miners, and also mean geth doesn't do as much work, which encourages people to switch to another miner like OpenEthereum. So if OpenEthereum goes and makes a better sorting algorithm that gives miners slightly better money, then it'll encourage people to switch off of geth, which I think we generally want — we want more client diversity among miners, I believe. Yeah, I can see the point there. I just want to give the counter-argument that I gave in the Discord as well, though I'm personally open on this question: 1559 is already a somewhat disruptive — well, "disruptive" over-emphasizes it, but at least a significant — change for miners already.
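The strategy discussed above — append in arrival order until the 2x gas cap, and only sort once the block is actually full — can be sketched roughly as follows. This is a hypothetical simplification (a greedy rebuild keyed on the miner bribe), not any client's implementation:

```python
# Sketch of "FIFO until full, then sort": append transactions in
# arrival order until the block hits the gas limit (the 2x cap), and
# only then fall back to sorting by miner bribe to decide what to keep.
def build_block(incoming, gas_limit):
    """incoming: list of (gas, miner_bribe) tuples in arrival order."""
    block, used = [], 0
    for gas, bribe in incoming:
        if used + gas <= gas_limit:
            block.append((gas, bribe))   # FIFO append, no sorting needed
            used += gas
        else:
            # Block is full: now sorting pays off. Keep the highest-bribe
            # transactions that fit (greedy; a real miner may do better).
            candidates = sorted(block + [(gas, bribe)], key=lambda t: -t[1])
            block, used = [], 0
            for g, b in candidates:
                if used + g <= gas_limit:
                    block.append((g, b))
                    used += g
    return block

print(build_block([(10, 1), (10, 5), (10, 2)], gas_limit=20))
# the late high-bribe tx displaces the low-bribe one: [(10, 5), (10, 2)]
```

Note how the first two transactions go in untouched; only the arrival of a third, when the block is already at the cap, triggers any sorting at all.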
So I think it would probably be best to keep the changes to the minimum necessary to support the switch to 1559. Maybe changing this is something we should push for on its own rather than as part of the overall 1559 implementation — but I don't personally have a strong opinion there. Okay, but yes, I think that summarizes the miner side quite well. I also talked with Gary from the geth team yesterday, who works on the geth miner implementation, just to double-check my assumptions, and it seems like this is indeed how things work, and they're actually interested in looking into how they can adapt it to 1559. So that conversation will probably continue. Okay, and then the other part, probably also familiar to people by now, is of course the eviction side: that's where you also need sorting, on the bottom end of transactions, to know which ones to get rid of if you're running out of space. And there, again, think about the situation: in normal circumstances most transactions will be in this non-includeable zone. Then you look at those a little more closely. One thing, of course, is that I think it's rather likely that clients will do something similar to what they do today with the minimum gas price they enforce, where they just drop transactions below that. I can definitely see that happening with the miner bribe too, where under 1559, if your miner bribe is below some threshold — I don't know, one wei or something — the transaction just gets dropped immediately by the mempool. I don't know, maybe not, but I think that's at least realistic.
And so that would basically put a lower bound on the miner bribe that any transaction in your mempool will end up paying if it ever gets included. But then the interesting thing is, there's also at least a soft upper bound on the effective bribe a transaction will probably end up paying if it's currently not includeable. Say a transaction comes in and the current base fee is a hundred, and the transaction has a fee cap of 200: it's immediately includeable, and it could potentially pay all the difference, the whole extra hundred, as a miner bribe. But on the other side, if it's sitting in the mempool waiting to become includeable at all, then that transaction, even if it has a really high maximum miner bribe, will almost never actually end up paying that whole big bribe, because it will be included as soon as the base fee drops low enough for it to become includeable. And — this is a topic we have to talk a little about, how often we expect significant sudden drops, maybe with occasional empty blocks — but generally a transaction will most of the time become includeable only just barely, right? The base fee drops just low enough that it becomes includeable, maybe by roughly the minimum-miner-bribe distance. That means most transactions, even if they're potentially willing to pay 10, 20, 30 gwei of miner bribe, will, if they're currently not includeable, probably end up paying only a small miner bribe if they ever become includeable — just because they'll only barely become includeable. And then again, the question is how often we expect significant shifts in the base fee.
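The bound being described can be written down directly. A minimal sketch (the function name and shapes are mine, not from the call): the bribe a miner actually collects is capped by the headroom between the fee cap and the base fee, so a transaction that only just became includeable pays only a small bribe regardless of its stated maximum.

```python
# Sketch: the effective miner bribe under EIP-1559 is bounded by both
# the transaction's maximum bribe and its headroom above the base fee.
def effective_bribe(fee_cap, max_bribe, base_fee):
    if fee_cap < base_fee:
        return None  # not includeable at all right now
    return min(max_bribe, fee_cap - base_fee)

print(effective_bribe(fee_cap=200, max_bribe=30, base_fee=100))  # 30
print(effective_bribe(fee_cap=105, max_bribe=30, base_fee=100))  # 5
```

The second case is the one the speaker argues is typical: the base fee has only just dipped below the fee cap, so the realized bribe (5) is far below the stated maximum (30).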
Now of course the details — how small this spread of expected miner bribes can be — are still a little unclear, or there are different assumptions to be made there, but the general insight is: transactions on the currently-not-includeable side of things will probably all end up paying only a small miner bribe. So then, getting back to what I was doing in the first document — and I think we also talked about it on a call maybe two calls ago — I was initially thinking about really optimizing for the expected miner value of a transaction, something that took into account the miner bribe and the chance of eventual inclusion. But now, with the miner bribe looking more and more like it's almost uniform within this small band, it turns out this mostly reverts back to the simple approach we started out with. Basically, just making eviction decisions by fee cap — just the total fee cap per gas — probably gets us very close to this expected value to the miner. There's still a case to be made for really optimizing beyond that. But this at least makes it very likely, to me, that just going with a simple implementation here hits the good-enough-for-mainnet-launch threshold. So basically, just have nodes do the simple thing, and then we can revisit once we actually have good data on how the base fee changes and how often you actually get these significant drops.
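The "simple thing" for eviction — drop the lowest total fee cap per gas when over capacity — might look like this sketch. The shapes and the use of a heap are illustrative assumptions, not a spec:

```python
# Sketch: evict by total fee cap per gas. When the pool is over
# capacity, keep only the transactions with the highest fee caps,
# using a heap selection so eviction stays cheap.
import heapq

def evict(pool, capacity):
    """pool: list of (fee_cap, tx_id). Keep the `capacity` highest fee caps."""
    if len(pool) <= capacity:
        return sorted(pool)
    # nlargest by fee cap keeps the transactions worth holding on to
    return sorted(heapq.nlargest(capacity, pool))

pool = [(120, "a"), (90, "b"), (300, "c"), (101, "d")]
print(evict(pool, capacity=2))  # [(120, 'a'), (300, 'c')]
```

Because this keys only on the fee cap, it can reuse the sorting machinery clients already have for gas-price-ordered pools, which is exactly the "very small change" point made above.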
So basically, how much of a difference is there between waiting transactions with a somewhat smaller or larger maximum miner bribe, and could we maybe do some more optimized sorting? But it really seems like the very basic thing — just sorting by fee cap per gas, which can reuse all the existing implementation, so it's a very small change — will be good enough for mainnet launch. So yeah, that's at least... I feel like this is basically the one question where — and I think, Barnaby, you were talking about this in Discord as well — this might be a really good target for some additional simulation work. Just to see, under some assumptions — and of course with these simulations you get out whatever assumptions you put in — but still, in a few different scenarios, how much of a difference there is that could potentially be captured by an optimal algorithm. But yeah, I think that's my view on the eviction side. Is there a way to — I guess, is this spam-proof? Is there a way you can constantly raise your fee cap? Like, if you had a fixed miner bribe and the base fee changes or whatever, can you spam the transaction pool by just constantly raising your fee cap by one wei? Yeah, so I didn't put it in this document, because I don't think it's one of these complex issues with a lot of open questions, but for transaction replacement, right — same account, same nonce — I think there's a simple heuristic that we also already talked about as at least a likely candidate: you have to have at least the same miner bribe, and you have to have at least some increase — again, say a 10% increase — of the maximum fee.
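That replacement heuristic can be sketched in a few lines. The 10% bump is the example figure from the call, not a fixed rule, and the function shape is hypothetical; integer math avoids float-rounding surprises at the boundary:

```python
# Sketch of the replacement heuristic (same account, same nonce): the
# new transaction must keep at least the old miner bribe AND bump the
# fee cap by some minimum factor, here 10%, so spamming the pool with
# one-wei fee-cap bumps is not free.
def allow_replacement(old_bribe, old_fee_cap, new_bribe, new_fee_cap):
    # integer comparison: new cap must be >= 110% of the old cap
    return new_bribe >= old_bribe and new_fee_cap * 10 >= old_fee_cap * 11

print(allow_replacement(2, 100, 2, 110))  # True: 10% bump, bribe kept
print(allow_replacement(2, 100, 2, 101))  # False: fee-cap bump too small
print(allow_replacement(2, 100, 1, 120))  # False: miner bribe lowered
```

This mirrors the minimum-price-bump rule clients already apply to gas-price replacements today, which is the point made next: the situation stays very similar to the status quo.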
And then you have a very similar situation to what you have today — it's basically the same problem. Got it. He wrote that in his previous doc as well. Okay, yeah. I just wanted to make sure this still held even though the sorting is quite simple. Yeah, the idea is really to just make replacement expensive — or, well, so that you can't do a lot of replacement before you actually hit the includeable zone. So basically, the idea is that the network mostly just looks at fee cap per gas for all of these decisions: for eviction, for replacement, for all of that. And the miner bribe is mainly used at the point of inclusion, by the miners themselves, for their decisions. But the network mainly just looks at the fee cap. I have a question. Yeah? I forgot to ask this morning when we talked, but you are assuming there are only 1559 transactions in the pool. What if there is a mix of legacy transactions and 1559 transactions — what is the ordering? Do you consider the fee cap to be the gas price for legacy transactions? Oh yeah, just simply that. Okay. Yeah, basically just convert legacy transactions to the 1559 format, where the maximum miner bribe equals the maximum total fee cap. And that shouldn't be a problem, because again, either they get included immediately — in which case they only have the issue that they might overpay, which is just the trade-off there — or they end up in the not-immediately-includeable zone, where at the end of the day the fact that they set this high max miner bribe doesn't really matter, because most of the time they'll end up only paying a small one anyway. So then they become more similar to native 1559 transactions than you'd initially think. Okay, that makes sense.
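The conversion just described is one line: the legacy gas price becomes both the total fee cap and the maximum miner bribe, so one unified mempool can hold both kinds of transaction. A minimal sketch with invented shapes:

```python
# Sketch: converting a legacy transaction into 1559 form so a single
# unified mempool can hold both kinds, as described on the call.
def convert_legacy(gas_price):
    return {"fee_cap": gas_price, "miner_bribe": gas_price}

# A mixed pool can then be handled uniformly, e.g. ordered by fee cap.
pool = [{"fee_cap": 150, "miner_bribe": 2},  # native 1559 tx
        convert_legacy(120)]                 # legacy tx, now same shape
pool.sort(key=lambda tx: tx["fee_cap"], reverse=True)
print([tx["fee_cap"] for tx in pool])  # [150, 120]
```

Once converted, legacy transactions fall through the same eviction and replacement logic as native 1559 ones, which motivates the next point about not keeping two separate mempools.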
But yeah, what that means, basically, is that I would strongly argue mempools should immediately implement full conversion of these legacy transactions and not have two separate mempools. I don't think there's a case to be made for really keeping two separate mempools. I agree. Yeah. This is great. Yeah, go ahead. Yeah, thanks. One question I was having — I'm not that familiar with who all the participants are, oh, I guess, yeah. The question was just: do we have someone from some of the other clients? I already talked to Abdel about the situation with Besu, but I would also be interested in the other clients: is the implementation there also compatible with what I outlined here, or are there other aspects I didn't think of that might need adaptation? That was the one thing I still wanted to do — reach out and see if there's anything. I wouldn't expect it, because it seems rather straightforward, but just to double-check. But yes, I don't think there's — For the moment, the implementation just keeps very similar behavior as before, so sorting the transactions by the value for the miner. So I'm listening here, but is this the final plan for the sorting, and do you think this should be implemented in the clients? Because from the very beginning, when we were talking about sorting in the transaction pool, I've been thinking this is not part of consensus, which means it can always be implemented differently, according to the miners' requirements. So as long as this is most beneficial for the miner, I think it makes sense and it'll be stable. But if it's not, then people start replacing it with whatever is most beneficial for miners, right? Yeah, I definitely agree that it's not consensus-relevant.
And so I think it's really just important that there's one good-enough implementation that everyone can fall back on if they don't want to do something of their own. I would just be curious then: if you're optimizing for the value for the miner, say for a transaction that's currently not includeable, how do you calculate the value for the miner there? Because I'm not — When you say not includeable, you mean by the fact that the base fee is above the fee cap, right? Exactly, yeah. So it's just a question of whether you want to keep some of them in the transaction pool while they're waiting for their turn, or whether you'll just keep evicting them. I know that we want to keep the transaction pool minimal and keep evicting transactions, right? So I think for us, it may even be configurable by the miner: do you want to evict them, or do you want to keep them? Then there's the question of whether you want to propagate them when normally they would be evictable — probably not. If other clients would evict them anyway, then you just don't propagate them and keep them for the miner, so when the base fee goes down, they can include them. Well, this makes sense from our perspective, but I want to read what you wrote in much more detail, and with the approach we have, we'll probably start suggesting that users pick one of a few transaction pool implementations and behaviors — so, parameterize it a bit. One thing we have to keep in mind is that while it's not technically part of consensus, the various clients do need to agree on what criteria will cause you to drop a peer. So we do need to make sure we agree on at least that part. If you've got some threshold where you say, hey, you're spamming me now, I'm going to drop you, we need to make sure all the clients are in agreement on what that threshold is.
So we make sure we don't have one client saying, oh, this transaction is fine, whereas another client says, no, that's spam and I'm going to disconnect from you. That's what we don't want. So not technically consensus, but something we do need to agree on before launch. Yeah. And I think that's almost more of an All Core Devs conversation in a way. To me, it is a threshold we need to agree on, but it's not going to block anything — I don't think we'll ship or not ship 1559 based on the value of that threshold. Yeah, I agree. And also — oh, no, sorry, go ahead. I was going to say, something to keep in mind is that we always talk about how, if there's a better mining strategy, miners will switch to it. We actually have many years of evidence suggesting that's not actually true. Miners seem to be reluctant, for unknown reasons, to write code that changes their clients significantly. We see that because for years now there has been massive opportunity for miners to take extractable value from DeFi, and no one did until the MEV people basically came in and did it for them. So if we have a transaction sorting algorithm that's good enough, then while technically a miner could do something else, our evidence suggests most of them will not. And going back to first-in-first-out sorting: if most miners are doing first-in-first-out, then it becomes very expensive to do gas price auctions, because most miners will just ignore your gas price war, include first-in-first-out, and you'll just waste money. So even if some miners defect and write their own, more optimal clients, it likely won't affect the network, and people won't leverage it, because you can't leverage it unless all miners do it.
So just keep in mind that, game-theoretically, I a hundred percent agree miners should be optimizing these things; our evidence suggests they don't, for whatever reason. Yeah, but it feels like we want to trick miners into using something potentially suboptimal. And it's okay if you do that as a particular client implementer — it's your decision — but to try to spec it, which is against game theory, and say, okay, we'll just do that because no one will implement their own solution... I would leave it to the client implementers, saying: this is a suggestion for EIP-1559, but it's not part of consensus; you can do it whatever way you want. If some other clients drop you because you're propagating transactions that are not nice for them, you'll obviously adjust, because you don't want to be disconnected from the network, and some clients will have a bit more say depending on their network participation rate, right — their market share. Well, that's the plan. All of this is not part of the specification. It's a set of guidelines, a soft agreement, but you are not obliged to follow it. Yeah, just to briefly clarify: this does not propose any rules that are purposefully suboptimal for the miners. This is all trying to optimize, just sometimes maybe prioritizing simplicity over optimization, but at no point does it purposefully choose the non-optimal. Again, that was just an additional proposal by Micah, where we were talking about how we could maybe make some decisions even simpler. Yes, I was referring to Micah's suggestions, and some of the things he has also suggested before — like trying to decide for the users how they should treat the transactions in the transaction pool.
I prefer the solution where it's based on a suggestion that we discuss but don't try to enforce among the clients — we just say, okay, we discussed it, this is how it behaves, this is how it works. This is not damaging to 1559, but we don't want to spend too much time on it, because it delays the actual implementation. That's just how I see it. The analysis is super useful, and something I read with pleasure. I'm just saying that we won't necessarily implement it exactly that way, and we want to keep the freedom of saying, okay, the transaction pool is outside of consensus logic, so we want to be able to do whatever we decide is most beneficial for users. Yeah, and to be clear, I definitely don't think we should force anyone to do this. The reason I bring it up is that it's one of those things where, if we all happen to independently write clients that do first-in-first-out, I believe that would be healthy for users of the network, even though it may come at a minor cost to miners. So it's one of those things where we can maybe opportunistically get a free win for users. To me, that's something we should maybe think about and try for, but I definitely don't think any client should be forced to follow the strategy. It's non-consensus — just a, hey, if we do this thing, maybe we get a free win, and if we don't, then it reverts back to what we were going to do anyway, so it's not a big loss. Although, just to mention, I'm personally not sure I'm fully on board there, because I do think this would in turn just incentivize side-channel communication with miners directly, to basically pay for positioning in blocks directly, which ends up being the same thing, just with more friction. So — but yeah, that's a good argument.
I guess if we take a step back: the reason we wanted to do this work is there were concerns that we could not find a suitable way to sort the transaction pool that would be spam-proof under 1559. I feel like we're definitely past that now. We have at least a solution that would not make the status quo significantly worse or cause a security issue. I'm pretty satisfied with what we have right now. And if different clients want to do different things, or if we all align — that obviously has its own set of trade-offs. But with regard to the risk that 1559 introduces in the transaction pool, for DoS vectors and things like that, I feel like we're in a pretty good spot, right? And if different clients want to tweak it, I think that's a separate conversation. But I guess my question is: does anyone feel there's something more we need to justify that 1559 is sound and safe that's not being presented here? I mean, sound and safe under this aspect. I'm generally convinced that it's sound and safe, but of course this only covers the mempool sorting side of things. Yes, yeah. And then, like we mentioned earlier, we'll also need to discuss how the clients want to do peer management and things like that. But to me, this was the biggest potential risk with 1559: can we efficiently sort the mempool? Can clients manage their transaction pool without it becoming crazy? And yeah, I feel much more confident in that now. Yeah, and the rest kind of feels like an implementation detail at this point, right?
Where, you know, we can have long drawn-out conversations about that, but it's not going to be a blocker for the EIP. Yeah, yeah, that makes a lot of sense to me. Especially when a block is not full, sorting still has a cost. So by appending only, the miner might already find a block in that time. Yeah, and we see that on mainnet already, right? Like with empty blocks that get mined. And I'm fine with that; we don't need to solve all of the inefficiencies of mainnet to deploy 1559. We just ideally have to not make any one aspect significantly worse. Yeah. Yeah, I'm with you. I was a little bit worried about the sorting algorithm, not worried like as a blocker, just worried it was going to be hard, but this research has convinced me that it's easy. Yeah, same. And again, thanks a lot, Ansgar, for this. This has really benefited from having somebody who's actually spent time thinking through it deeply, rather than as a side thing on top of everything else. So yeah, this has been really helpful. Awesome, yeah. Thanks. And just briefly, by the way, to mention, because I have it under further questions: one thing that I realized dealing with this, and I think it's not necessary before we launch on mainnet, but at some point it's probably also a good idea to get back to, for one, the aspects of what might also at some point change with ETH2, right? Because we'll probably just want to move 1559 over and also use it on ETH2. That's of course a little bit further in the future, but maybe, I don't know, it might just be that 1559 only ships half a year before the merge or something, so this might become relevant quite soon. And then also the ETH1-specific topic.
So, how can we maybe, long-term, and that doesn't have to be in the first release, but long-term, keep the base fee a little bit more smooth and stable even under this kind of distribution of block times, right? Because right now what will happen under 1559 is sometimes you have very quick blocks, and that means the second one is either empty or almost empty, and then the base fee will drop quite a bit. Or sometimes you have very long time periods between blocks, and then that looks like congestion, so basically the block will be doubly full. And so there could be arguments to maybe at some point do time-adjusted adjustments as well, something that also takes into account the time since the last block. But again, I think all of these are definitely not necessary for the first version, just things to keep in mind. Yeah, I agree. Just in general, it came out in the Tim Roughgarden report that the base fee update rule was simplistic, to say the least. And it's probably not a blocker to ship it right now, but there's definitely some room for improvement there. And I suspect it would be helpful to have actual mainnet data before we were to tweak it, because what we have right now probably works, but it's not optimal. And it's kind of hard to make assumptions about what the usage conditions will be before we actually deploy it. Cool, just because we're about halfway through already, I want to make sure we get to the rest. So any final questions on the transaction pool sorting? Okay. Yeah, again, thanks, Ansgar, this is really, really good. Thanks, Ansgar, thanks so much. Yeah. Abdel, do you want to give an update on the large state testnets and where things are at, and how this could work with other clients? I see that you're chatting also on Discord. Yeah, yeah, I will give some details about that.
Okay, so should I just give my update and the demo later, or do both at the same time? Let's just try to focus on the large state testnet for now. Yeah. Okay, so basically we have a large state testnet. The idea was to have something comparable to mainnet. So we managed to get a testnet with 100 million accounts and 100 million entries in the storage trie. And the problem is that we used a Besu-specific feature, which is to have a fixed difficulty. And I forgot to ask if it was supported in Geth and Nethermind, and it seems it's not. So I was asking Ramil if he would be okay to hard-code it for the testnet, and I'm also talking with Tomasz about that in the Discord channel. But basically that would be the only trick we would have to do in the codebase to make it work on this state. And again, if it were easy to generate the testnet again I would restart with different parameters, but it would take two weeks to get the same state size. So I would rather hard-code the fixed difficulty value in the code than delay the performance tests. Yeah, we are working on it right now, and we met some other issues with the base fee calculation. So we are trying to investigate it, and it's in progress. I will provide an update on it later today. Okay, nice. And Tomasz, would that work for Nethermind, the hard-coded difficulty like that? I'm just implementing it now, actually. So we need to... Oh, okay, thank you. Thank you for that. And there's the genesis block handling, and there are a few questions on Discord, so I want to clarify how to get it. Ideally, I would start syncing it today. And how long does it take to sync the testnet? Well, it took roughly something like maybe two days with Besu, with a full sync, obviously. Yeah.
I guess, yeah, the reason I'm asking is, if we want to do this kind of performance test, would it make sense to schedule another call and do that synchronously with all of us, or not necessarily everyone here, but just Geth, Nethermind and Besu? And is, I don't know, a week from now good enough to fix the bugs and be ready for that? Because it feels like it might be easier to do this type of testing if we're all on a call together. Yes, maybe, yeah. Before that, once the nodes are synced, I can submit a single transaction just to verify the consensus rules. And then we can plan a call to do the actual performance test, if that makes sense. Yeah, that works for me. Tomasz, Ramil, does that work for you guys? Yeah. Sure, anytime. And as I said, I'll be targeting today for the sync start; we'll see how it goes. Yeah, so if, say, we get them fixed today or tomorrow, they sync over the weekend, then early next week we can probably have Abdel just send a few transactions to make sure it all works, and then schedule something for maybe the same time next week. But I'll follow up on Discord about the specific times. But yeah, I think to me this is the last big test thing we need to do on 1559, and if we can get that done, that's pretty good. And I have something else that's slightly related, but first, are there any other questions or concerns about the testnet? Okay, one thing, I'll just share my screen. Vitalik, I asked him to do this a while back and he did it this week, which was great. So the main reason for doing this test on the large state is the fear that large blocks cannot be handled on mainnet. And Vitalik wrote a short write-up on why he thinks this is probably not even a problem.
It's quite short, I encourage people to read it, but at a high level, the biggest risk is, well, clients maybe cannot handle 25 million gas blocks. And then if they could, why wouldn't we just double the block gas limit and have them handle 25 million gas blocks all of the time, rather than doing 1559? And he basically explains why that's the case. So there are three reasons why we can't just increase the gas limit a ton. The first is obviously that the average block processing time will increase if all the blocks are much bigger. The second is that the risk of denial-of-service attacks increases if the biggest block is much bigger. And the third is that the storage size growth rate will also increase if all the blocks are bigger. And one thing that he notes is that the first and last of these are really only impacted by the long-run average block size, rather than the maximum block size we see on the network. So that means, even if we can't do those three things, given that 1559 doesn't increase the average block size significantly, but only some blocks from time to time, the only issue we have to worry about is the second one. And then there are a few arguments why the second issue is maybe not that bad at all. The first being that 2929, which is going live in Berlin, will help compensate for some of the denial-of-service risk based on storage access. So I think this is a very good point, and also kind of a reason why it would be pretty complicated to ship 1559 before Berlin. So having 2929 helps with denial-of-service protection. And then the second argument is that, obviously, a short-term denial of service is less bad than a long-term denial of service, and 1559 actually helps here, given that the base fee increases whenever blocks use more than 100% of the target gas.
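The base fee mechanics behind that last point can be sketched in a few lines. The update rule below follows the EIP's shape, an adjustment of up to 1/8 per block toward the target; the starting values and the gas target are made-up numbers, and the integer math glosses over the spec's exact rounding:

```python
def next_base_fee(base_fee, gas_used, gas_target, denominator=8):
    """EIP-1559-style base fee update: nudges usage back toward the target.
    Simplified integer math, not the spec's exact rounding rules."""
    delta = base_fee * (gas_used - gas_target) // gas_target // denominator
    return max(base_fee + delta, 0)

# An attacker keeping every block completely full (2x the target)
# pushes the base fee up by ~12.5% per block, so the per-block cost
# of sustaining the attack compounds roughly like 1.125**n.
base_fee, target = 100_000_000_000, 15_000_000  # 100 gwei, hypothetical target
for block in range(20):
    base_fee = next_base_fee(base_fee, 2 * target, target)
print(base_fee)  # roughly 100 gwei * 1.125**20, i.e. on the order of 1055 gwei
```

After only 20 blocks (about four minutes), the price of keeping blocks full has grown more than tenfold, which is the exponential-cost property being referred to.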
This means that the cost to sustain a denial-of-service attack on the network goes up exponentially with time, so realistically, attacks could not be very long. And that means that even if you could create a denial-of-service attack, you're almost in a better spot after 1559 than we are today, where the cost to do this is kind of a fixed cost. And then the third argument is that the block creation process itself today is Poisson distributed, and that would lead to 2x spikes happening on the chain roughly once a week just by randomness. This one I think is the least fleshed out, and it would be interesting to see data for it. But just like with the first two arguments, it feels like 1559 is a net improvement over the status quo, and the risks are probably pretty minor. So this is not to say we shouldn't do the performance test. I feel like we're basically there. But it's really good data, I think, to show that even if for whatever reason we can't sustain an hour of very high load or whatever, 1559 probably still isn't a major denial-of-service risk for mainnet. I generally think that, from the 1559 perspective, the one thing that we should do now, in parallel to the final testing and implementation, is to ensure that there is very, very active dialogue with miners about when the change will be introduced, what their current stance is, and whether they still have some concerns they'd like to address. And then clearly, clearly tell the community what the decision is. Are the core devs going against any miner concerns? Is there any compromise? So, how exactly will the transition look, and what do the majority of miners say about the transition process? That is a giant can of worms. Yeah, if you don't open it now, it will open itself the moment we start.
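The Poisson argument above is easy to sanity-check analytically: if block arrivals are Poisson, inter-block gaps are exponentially distributed, so the frequency of long gaps (and the pile-up of pending demand they cause) follows directly. A rough back-of-envelope, assuming a 13-second average block time:

```python
import math

MEAN_GAP = 13.0          # seconds, roughly Ethereum's average block time
WEEK = 7 * 24 * 3600
blocks_per_week = WEEK / MEAN_GAP

# With Poisson block arrivals, inter-block gaps are exponentially
# distributed, so P(gap > t) = exp(-t / MEAN_GAP).
def gaps_longer_than(t_seconds):
    """Expected number of gaps longer than t_seconds in a week of blocks."""
    return blocks_per_week * math.exp(-t_seconds / MEAN_GAP)

# During a gap of length t, roughly t / MEAN_GAP blocks' worth of demand
# piles up, which is the kind of organic load spike the argument refers to.
for t in (60, 100, 140):
    print(f"gaps > {t:3d}s per week: {gaps_longer_than(t):8.2f}")
```

A gap of around 140 seconds, roughly eleven block-times of pent-up demand, already happens about once a week by chance alone; that is the organic spike the write-up compares 1559's occasional large blocks against.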
It already has. I don't know if you hang out in the Eth R&D Discord 1559 channels, and then also on ethresear.ch if you subscribe to that, but there are like, I don't know, three new people a day that show up and complain about their fees going down with 1559. And we try to talk to them. So far, none of them have been able to formulate a solid argument besides "I want to get paid more." That's kind of the root that I've derived from most of them: just, I want more money. There are some other arguments that they make, that they're worried, or they claim, that mining will centralize in China. But I think that was actually debunked earlier. Apparently, I learned, I didn't know this, but mining in Ethereum appears to be mostly in Europe, not China, which was surprising to me. But yeah, those are the two things. One is: I'm going to make less money, and I don't like that. And two is: this is going to lead to centralization. And the centralization argument is kind of reasonable, because when the profitability of mining goes away, the people that have the thinnest margins are the ones that are going to get kicked out. The people with big fat margins are going to stick around, and the people with very thin margins are going to get kicked out. The belief is, and this is a reasonable belief, that the large mining pools have the biggest margins and the small pools or solo miners have the thinnest margins. And so if we do something that hurts miners in some way, or decreases their profitability in any way, then the people who are going to leave are the ones with the thinnest margins, therefore the non-pooled people, the non-farm people. And that can lead to centralization. I'm not really worried about that, because Ethereum's price volatility, I think, has a far greater impact on that than 1559 will, is my suspicion. And we already have to deal with that.
Like, that's just the nature of mining: it tends to centralize over time, and 1559 isn't going to change that. Maybe the last... Sorry, go ahead. Sorry, maybe the last research that we want to have on paper is an analysis of what exactly the expected hash rate drop is, based on the responses from miners. Like, do we lose big miners? Do we lose lots of small miners? What we'd want is some kind of document that tells miners: this is what you were making before, this is what you're going to make now, based on all the analysis. And then, based on that, see what the responses are, and try to judge what the hash rate drop will be and how it will affect the security of the network. So I think there are a few challenges with doing such an analysis. The first is that people are obviously biased in their responses, right? Before the change goes live, every miner has an incentive to tell you that they're going to drop off if it goes live. And then if it does, they're not bound to that. I think the other challenge is we don't know; there's no good way to model how much their revenue will decrease, because we don't know the amount of high tips that people are willing to pay for things like arbitrage and whatnot. So it could very well be that for most transactions the fees go to barely anything, but for the highest-paying ones the fees stay relatively constant, and those were the majority of the transaction fee revenue anyway. I think with regards to the security of the chain, a few things are worth noting. One, Ethereum is still by far the largest GPU-mined chain, and I think that would probably still be true even if we lost a large amount of hash rate. And it might be worth finding out exactly how much. And that is probably the biggest risk: we don't want to be the second-largest GPU-mined chain, because then you open up a bunch of attack vectors.
And then the other challenge is that the hash rate is at an all-time high right now, which obviously makes it less profitable for miners given the increased competition. And it's hard to predict how the hash rate will evolve, because there's more than 1559 affecting it, right? There's 1559 obviously, but then there's the price of Ether, which is a big factor. And there's also the willingness to pay high transaction fees, which is mostly fueled by DeFi right now. So personally, I would just be very uncomfortable providing an estimate and having people make their decisions based on that, because it's such a dynamic thing. Yeah, that's I guess just my, yeah. Yeah, so maybe skip the estimate in that sense, but it could just be clear communication: this is what we expect to be the effect of the change on miners, this is just to inform the community, and these miners are saying this and this and this. And maybe also collect whether there are any miners saying they're going to join because of the change. Because even if those statements might be biased, as you say, at least informing the community what the collected opinions from miners are and what their stated actions are lets the community prepare, and know at least whether to expect turmoil or to expect a smooth transition. Yeah, I think that's fair. I can definitely look at trying to collect some statements and just list the general changes. And I guess the philosophical argument that you end up with is: should miners be actively part of the ecosystem and influencing decisions, or should they be price takers, where the Ethereum protocol has certain properties and then they can choose to mine it, or mine another chain, or not mine at all? And I think that's where most of the disagreements come in. Yeah, I think it's tough. I'm not making any statement on that one here.
I'm just saying that we should clearly communicate what the process will be. If the core devs say, for this change, we take the stance that whether miners strongly oppose, or mildly oppose, or do not oppose, whatever it is, even then we decide to go forward with the change, then the community can expect that there'll be some potential risk around the change, because the miners are not aligned with the core devs, right? So keep in mind that historically it has been very difficult to get feedback from miners. You have this nonstop stream of people coming into the channels and whatnot, complaining and stating their opinions, but when you try to actually reach out to the big pools, like F2Pool and Ethermine and Sparkpool, you get no response. It's very hard to actually talk to the farms. The farms are non-communicative. I don't know why that is, but historically that has been the case. We can get people who say they're miners to show up and talk, but no one will admit to being a major farm or a large-scale miner. I think at least in the past we were saying that Hudson had a channel to connect to... Yeah. ...mining pools and be able to collect... He has a channel to talk to them, but my understanding is that that channel has historically not worked for getting feedback from them. Like, if you give them a survey to fill out, even just one question, you just won't get anything back. Whereas... Yeah, I can follow up on that. We did... ...coming things. Yeah, we did get some feedback from, I think... So when we did the first outreach, we got some feedback from miners. I think generally most of them didn't want to be identified with regards to their entity. I might be wrong.
Some of them might have wanted to, but the general sense is people didn't necessarily want to be identified. But I can reach out and see what people might be comfortable with and how we could aggregate their feedback. The names of the mining pools that are mining blocks are publicly known. So we could just provide a list to the community saying: these miners decided not to comment on the change, these ones are for or against, and just be clear on that. There'll be like eight or nine entities, the pools or big miners that we see. Yeah. Assuming that we can actually identify who is behind the pools, but I think this is not really a secret. Yeah, I think that's... Yeah. But I also think Tomasz had a good point in just saying that this is something we should definitely talk about at some point on All Core Devs. So... Yes. That's the proper place. Yeah. And I'll try to, before this gets brought up on All Core Devs again, follow up about the miner conversation, and I guess both try to list the objective changes that 1559 will bring to miners, like you're going to get the tip but not the base fee and whatnot, plus some hypotheses about how this can affect them, and try to get their feedback. Yeah. But also just to be clear, right? The merge is probably coming within, I don't know, I don't want to be too optimistic, but like six to 12 months after 1559. And that is the point at which we just sunset mining completely. So yeah, I think it is important to keep that in perspective. We're talking about the last six to 12 months of mining. Yeah.
But I like Tomasz's point where, you know, we might very well decide to go ahead even if there was, say, 100% opposition from miners, and just say, you know what, other people will mine the chain and whatnot, and that's fine. But at least we can be clear with that decision, and people can expect some sort of potentially turbulent upgrade, and that's very helpful for folks like, say, Infura or exchanges or whatever to know. Yeah. So just to be clear, I don't think that because we collect negative feedback from miners we should not do the change, but we can make a conscious decision to do the change even though there's negative feedback, because there will always be negative feedback from some stakeholder group on any large change to the network, right? Yeah, I'm not saying that here, but like, we're driving forward something that is quite clearly supported on the core devs channels, but we should also not be quiet about the lack of support from miners, and be clear with the community. And if we decide to say, okay, we're going with it even against some opposition, then we have to be clear and not try to hide it and be quiet about it. The community needs to know, because it may actually lead to some potential problems during the transition, right? Some miners can be very adamant if they feel totally ignored and not even included in the communication channels. So I'm not saying that there will not be some heavy lobbying and campaigning on Twitter and so on, but the more you try to hide it, the more of a problem it may become. Yeah, I agree. Cool. So the last thing, I guess there were two more things I have on the agenda. Abdel put together an EIP for the BASEFEE opcode, and he also had a quick demo for how to join the 1559 testnet. Oh, 2718 as well. I guess, yeah, is there anything else that people want to discuss, just because we only have 20 minutes?
I would like to spec out 2718 sooner rather than later. I'm not a huge fan of letting it wait, but if everybody else wants to continue waiting on that, I will concede. So for 2718, if we have the testnet test over the next week or two, does it make sense to do 2718 right after that? We can do that in parallel, I think, because we have the other testnet to do some changes and integration testing on. So I will not wait for the performance results to start, personally. Well, I guess, yeah, we can do it in parallel. I guess, Nethermind and Vulcanize, can you also do it in parallel, or is it valuable to maybe just start with Besu and then add the other clients? Yeah, we already have the implementation for 2718. Maybe not with the latest changes that are not fully agreed on, but we do have an implementation. Okay. We can start a testnet when it's ready. And did we agree on the transaction types we are going to use, et cetera? The actual values? We haven't specced that out at all. Someone will submit a PR, probably me, to 1559 that adds in the 2718 integration, and that's part of it. We'll pick transaction types. I'm going to most likely just pick whatever comes after 2930's. I forget what number they use, but we'll use whatever's next. Okay, that makes sense. And yeah, I guess if you can just re-share that PR, Micah, in the dev channel or something, just so people can review it, that's probably a good way to start. I need to write it first. I've been waiting to write it until I'm ready to actually... I thought you said you had it. Yeah, yeah, no worries. Cool, yeah, so if you can get started on that, then we'll review it when it's ready and we can get that going. Thank you. Cool. Anything else before we go to the BASEFEE opcode and the testnet demo? Okay, Abdel, over to you. Yeah, so the BASEFEE opcode: nothing crazy. We just want to add a new EVM opcode to get the value of the base fee of the current block.
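The semantics being described are small enough to sketch in a toy interpreter loop. This is not any client's actual code; the 0x48 opcode value is the one proposed for BASEFEE, and everything else here (function names, the block dict, the flag) is invented for illustration:

```python
class InvalidOperation(Exception):
    """Raised for unassigned opcodes, mirroring how clients treat them today."""
    pass

OPCODE_BASEFEE = 0x48  # opcode value proposed for BASEFEE

def execute(opcode, stack, block, fork_active):
    """Hypothetical mini-interpreter step showing the intended semantics:
    after the fork the opcode pushes the current block's base fee; before
    it, the byte is treated like any other unassigned opcode."""
    if opcode == OPCODE_BASEFEE:
        if not fork_active:
            raise InvalidOperation("unassigned opcode")
        stack.append(block["base_fee"])
    # ... other opcodes elided ...

stack = []
execute(OPCODE_BASEFEE, stack, {"base_fee": 7_000_000_000}, fork_active=True)
print(stack)  # [7000000000]
```

The only design decision of note is the pre-fork behavior: reusing the existing unassigned-opcode path means no special casing is needed in clients before activation.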
And yeah, so I created the PR, basically with the opcode value and the proposed gas cost. And I did the nominal case, and we decided to do exactly what we do with unknown opcodes: for example, if we reach an opcode that has the value of the BASEFEE opcode before the fork, we just throw an invalid operation error, like we currently do. So yeah, nothing crazy. I created the EIP and it's waiting for review. Micah did the first review and it's merged in master. But yeah. So what does GASPRICE return and what does BASEFEE return? So GASPRICE will return the actual price of the transaction, right? Yeah, exactly. And BASEFEE returns the current base fee in the block header. Yeah. Very nice. Yeah. So yeah, very simple. And that's it about the EIP. And just to ask: does any client team here think that EIP will be hard in any way, or is that something we can just kind of shoehorn in? No, it will be super easy to implement. And at the same time, if it doesn't happen at the beginning, it's not a big issue. Cool. And yeah, Abdel, do you want to share your demo for the... Yeah, sure. Let me share my screen. Can you see it? Yes, we see the network status page. Okay, nice. So basically, this is the network status page for the new testnet. So currently we have four Besu nodes. We are close to one million blocks, and as I said, we have 100 million accounts on this testnet and 100 million entries in the storage trie. And basically, we implemented a tool to join the testnet easily. The idea is to build a client-agnostic tool, but for the moment I only have the Besu implementation. The idea is that after that I can try to add Nethermind and Geth support, if you give me just some basic stuff, like a binary capable of doing 1559 stuff, plus the genesis and the config file, and then I could try to add the Nethermind and Geth support. And basically, you install this command line tool. Okay. And then you simply run the 1559 run command.
So it will download the config file templates, and it will prompt you for a name to display in ethstats. So for example, "implementers-call-nine". And then it detects that you don't have Besu installed on your machine. Yeah, because by default Besu is the default choice, since I don't have the other client support for the moment. So it will install Besu automatically. Okay. Then it prompts you with the command line to run the node, but if you press enter, it will run automatically and open the network status page. Oh, okay. Well, demo effect. Sorry. Okay. So now Besu is running and it starts to connect to the peers. Okay. The synchronization has started. And basically, if I refresh again, I have the new node and it starts to sync. So the first blocks are really big, because I used the generation tool basically to fill the network with a lot of accounts and a lot of entries in the smart contract. And there is another command in this tool, basically a simple faucet. So for example, I want to add some ETH to this address to play with it. So basically, you run 1559 faucet with the address, and you will get 0.5 ETH on the testnet. Yeah. Okay. This is the 0.5 ETH, and yeah. So the idea is to have a simple tool to join the testnet and to have more users running a node on 1559. We also have a tool to basically submit transactions to this testnet, because there is no implementation yet in the wallet providers. So basically, if you want to submit a 1559 transaction, you have an estimate button that will generate a fee cap for you, and you can submit the transaction, and you have the link to the block explorer. And basically, you can submit a 1559 transaction. And yeah, that's pretty much it. Yeah, that's really cool. Thanks for sharing. Cool. Anything else anybody wanted to share, discuss, bring up? Okay. Well, yeah, thanks, everybody. I guess we'll follow up on the Discord dev channel to set up the large state testnet. Micah, we'll also be waiting for your PR, and we'll review that.
Yeah. And then I guess we can figure out based on the testnet test when we want to have another call or follow up. But yeah, I think this is looking good. Yeah. Thanks. Thank you. Yeah. Thanks everybody. Thanks so much. Have a good one.