Thank you. So yeah, welcome everybody to 1559 Implementers' Call number seven. Like I said, we have a bunch of things on the agenda. I tried to list them in the order to go through them, so maybe we can just jump in. First item: Tim Roughgarden, who's joining us today, has put together a pretty extensive economic analysis of 1559. He published it, I think, two days ago, so hopefully people have had time to digest it since then. But maybe, Tim, do you want to take a few minutes to give a short summary of the analysis? And then if people have questions or comments, we can go over those. Sure. Happy to. Thanks for the invitation to join the call. I don't want to go on too long because I want to be driven more by people's questions, but I'll just quickly describe the structure of the report. So after recapping how 1559 works, giving a fully precise, fully detailed definition of exactly how it works, the report talks about how to think about the market for Ethereum transactions. EVM computation is a scarce resource, and so ultimately users, or creators of transactions, are vying for that scarce resource. That's the point of the transaction fee mechanism: to figure out who gets access to that resource and at what price. And the purpose of that discussion of the market for Ethereum transactions is primarily to clarify what 1559 can and cannot be expected to accomplish with respect to the level of transaction fees, because I know there's a lot of concern in the community about high transaction fees. The main point I wanted to make there is that when demand for EVM computation outstrips its supply, you're going to have high transaction fees; it really doesn't matter what mechanism you use. 1559 does help with things like absorbing short-term demand spikes, and so as a result you should see lower maximum transaction fees in periods of high demand.
But again, when you have much more demand than supply, no matter what the mechanism is, you're going to see persistently high transaction fees. So with that out of the way, I start to analyze 1559 in two ways. The most technical part of the report, sections 5 and 6, analyzes the incentives of 1559 at the time scale of a single block. So thinking about, say, miners who only care about the revenue they get from that one block and are not thinking about making short-term sacrifices to reap rewards later on. Similarly, users who are just focused on getting a transaction into the current block and are just trying to figure out how to bid. Sections 5 and 6 outline several game-theoretic guarantees that you might want a mechanism to have. Miners should be incentivized to do what you would like them to do. Users should be incentivized to bid in some obvious, optimal way. And you'd also like robustness to off-chain agreements, so that users and miners can't easily collude, for example, to basically steal money from the protocol. So those are sections 5 and 6, listing those three properties. First-price auctions, the status quo, have two of those three properties, but 1559 has all of them, or at least almost. In particular, those sections include a mathematical definition of what easy fee estimation might mean, or what a good UX might mean. First-price auctions do not satisfy that property, and the 1559 mechanism does satisfy it, except during periods where the base fee is much too low, which would signify that there's been a very rapid increase in demand and the base fee hasn't had a chance to catch up yet. So those are 5 and 6; they're the most technical sections in the report. In section 7, I discuss attacks or manipulations you'd be worried about that take place over longer time scales.
And for this, you're usually thinking about a cartel of miners, or at least mining pools, because any one miner is probably mining blocks sufficiently infrequently that long-term strategies are not useful. But if you have a well-coordinated mining pool, or a cartel of miners with a large amount of hash rate, all of a sudden you start worrying about what they might do if they strategize over time. For example, could they manipulate the base fee downward to reduce the fee burn? From what I could tell, section 7 seemed like the one that's generated the most discussion on, say, Twitter thus far. So maybe let me just say what I was trying to say with the section. The first goal was to revisit first-price auctions, the status quo, and ask the same question: what could miners in principle do by colluding over long time scales, and what do they actually seem to do? There we identified collusive strategies that would, in fact, be in miners' interests if they implemented them. And then we observed that miners do not seem to actually engage in sustained long-term collusion. I'm not in a position to conclusively say why that is; I just listed a whole bunch of reasons, ones I thought of and ones people have told me about, for why we might not see this kind of sustained collusion with first-price auctions. And then I go on to observe that that whole list of reasons that apply to first-price auctions apply equally well to the 1559 mechanism. So there do seem to be impediments to collusion by miners now, under first-price auctions, and nothing about 1559 makes it easier for miners to collude. Now, 1559 may make miners more motivated to collude, because now they have this additional incentive of evading the fee burn. So the point of this section is just to say that the costs of colluding, as far as I can see, would not go down with 1559.
It's as difficult as before. However, it is true that the benefit to miners of pulling off the collusion may go up. And I try to be very careful in the report not to predict whether we'll see significant mining collusion or not. The final part of section 7, the caveats, explicitly discusses this point: that miners may be more motivated to collude than they ever have been before. In particular, there may be types of collusion we have not seen under first-price auctions which we will see, not because they're easier to pull off, but just because miners are more motivated to do them. Section 8 is something I thought would generate a little more discussion than it has thus far. The first part of section 8 is just to clarify that you can't really do the fee burn without the base fee, or vice versa, with one exception. And that exception is section 8.3, one of the two alternative designs I discuss in the report. So for the first alternative design: what's really crucial for the game theory is the role that the fee burn plays. What's really important is to withhold base fee revenues from the miner of the block that generates those base fee revenues. It has to be withheld from the miner who mines the block. The simplest way to do that is with a fee burn, and of course there are lots of other reasons why people like a fee burn as well. But section 8.3 points out that the game-theoretic properties are just as good as long as you pay those base fee revenues to somebody else. For example, and this is a proposal I've seen from Vitalik and possibly others, you could instead pay the base fee revenues to miners of future blocks: say, spread it out equally over the next 1000 blocks. Then there is no fee burn; each block just has a kind of bonus added to its block reward, depending on the base fee revenues from, say, the previous 1000 blocks.
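To make the redistribution alternative concrete, here is a minimal sketch of the smoothing idea Tim describes: instead of burning each block's base fee revenue, it is spread evenly over the miners of the following blocks. The 1000-block window and the class name are illustrative assumptions, not part of any specification.

```python
from collections import deque

SMOOTHING_WINDOW = 1000  # hypothetical: spread revenue over the next 1000 blocks


class BaseFeeRedistributor:
    """Instead of burning base fee revenue, queue it up and pay it out
    evenly to the miners of the next SMOOTHING_WINDOW blocks."""

    def __init__(self):
        self.pending = deque()  # per-block payouts still owed

    def on_block(self, base_fee_revenue, block_reward):
        # Pay this block's miner the shares queued up by *earlier* blocks...
        bonus = self.pending.popleft() if self.pending else 0.0
        # ...then spread this block's own base fee revenue over the next
        # SMOOTHING_WINDOW blocks, so its own miner never touches it.
        # That withholding is exactly what the game theory relies on.
        share = base_fee_revenue / SMOOTHING_WINDOW
        for i in range(SMOOTHING_WINDOW):
            if i < len(self.pending):
                self.pending[i] += share
            else:
                self.pending.append(share)
        return block_reward + bonus
```

The key property is visible in the sketch: a block that generates 1000 ETH of base fee revenue pays its own miner only the plain block reward, while each of the next 1000 miners collects a 1 ETH bonus.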
So that's one of the main alternatives suggested, which I actually have not seen discussed so far. And then the other one is a version of the 1559 mechanism where, instead of the tips being user-specified, you hard-code them into the mechanism. This has some problems; for example, you would expect off-chain tip markets to emerge. I give no opinion on whether that's a deal-breaker problem or not, but you would expect it to happen. On the other hand, it's definitely simpler to have hard-coded tips, and it has some nicer game-theoretic properties, which would get us into the weeds to explain. But there are some nice aspects of that second alternative design, which I call the tipless mechanism in the report. And then in the last part of section 8, I talk about the base fee update rule. Here again, I've seen people making very reasonable requests that it should be analyzed from a control-theoretic perspective. I totally agree; I think it's actually probably a quite easy control theory problem if you found an expert. Arguably the most arbitrary-feeling aspect of the 1559 proposal is the specific way that the base fee evolves over time. All the choices are natural in the sense that you can see why one would make them, or why they're a natural guess, but the functional form is sort of arbitrary: one plus an adjustment factor. There are two magic numbers in the rule: the one-eighth, which controls how rapidly the base fee can increase or decrease, and then there's the magic number of exactly how much bigger the maximum block size should be compared to the target block size. So in that section, section 8.6, I try to clarify all of the assumptions that are baked into the current update rule, and what some different dimensions are along which it should be experimented with over time.
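In code, the update rule Tim is describing looks roughly like this. This is a simplified sketch of the EIP-1559 functional form, using floating-point arithmetic rather than the integer math of the actual specification.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8   # the "one eighth" magic number
ELASTICITY_MULTIPLIER = 2             # max block size = 2x the target


def next_base_fee(base_fee, gas_used, gas_target):
    """One plus an adjustment factor: the base fee moves proportionally
    to how far the block deviated from its gas target, capped at 1/8
    per block (a full block is ELASTICITY_MULTIPLIER * gas_target)."""
    delta = (gas_used - gas_target) / gas_target
    return base_fee * (1 + delta / BASE_FEE_MAX_CHANGE_DENOMINATOR)
```

With a 15M gas target, a completely full 30M block raises the base fee by 12.5%, an empty block lowers it by 12.5%, and a block exactly on target leaves it unchanged. Those are exactly the two magic numbers Tim flags: the 1/8 denominator and the 2x elasticity.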
And it may be hard to iterate on the update rule until there's actual data from a real deployment; just from the armchair, it's hard to make a compelling case for why something else would be better than the current one. But I just wanted to give a heads-up that this will probably want to be revisited over time, like the various other parameters that are revisited with every network upgrade. And then in the last section, section 9, I talk a little bit about the other benefits of 1559. The report focuses on good UX, easy fee estimation, but of course there are lots of other reasons people are excited about 1559. So I talk about what those are in section 9.1: most notably the fee burn, but also preventing economic abstraction, and having a reliable measure of the current gas price that's hard to manipulate, for use in smart contracts. And then the final section discusses the escalator, both as a standalone proposal and also how it might be integrated into 1559. So that's the executive summary of everything discussed in the report. If people have specific questions about parts of it, I'm very happy to address those. Thank you. Yeah, that was great. Does anyone on the call have any questions or thoughts? I have a question, which isn't necessarily something explored in the report, but I'm quite curious about your intuition with regard to it. The question is: do you have any thoughts on what you think would happen if you have two parallel markets running during a transitionary period? Because one of the suggestions has been to have the first-price auction accepting those kinds of transactions in parallel with 1559. And I'm curious if you might have an intuition about some emergent effects that might happen, or, I don't know, just curious about your thoughts on it. Yeah, that's a good question.
So just to clarify: on the transition plan, I've seen a few different things discussed. My understanding is that plan A is you would have a period where legacy transactions would be converted, or interpreted automatically, into the 1559 format by taking the gas price and interpreting it as both the fee cap and the tip. Is that the specific proposal that you're talking about? That one as well, but there's another level to this, which wasn't the one I intended: when you talk about layer 2 systems, many layer 2 systems that we see think about also having their own fee market running on top of the base layer's fee market. So there's the transitional period where you do this translation, but also dual markets when you have second-layer markets running on top of it. I see. So you're saying interactions between this change at layer one versus what happens upstream? But also inside layer one itself. I guess these are two separate problems, but yeah, those are the two that I see. Okay, yeah. I agree, they're really separate things. On the first, I feel like some good thought and discussion has gone into how to manage the transition by the 1559 team. I'm not in the trenches with the implementations, so I can't comment on that, but from what I've seen, the plan seems very reasonable. And one thing that's nice about it, potentially, one would hope: first of all, wallets don't have to change initially if you have this support for legacy transactions. And then you would hope that there would be economic pressure over time for everyone to switch over to the 1559 format, right? Because there are basically two parameters to play with in 1559, the tip and the fee cap.
And if you don't bother to pay attention to that, you're kind of stuck with this much more restrictive way of bidding where you just set the one gas price. So that's one thing that I think is nice about that transition. First of all, it seems clear you don't just want an immediate hard stop where legacy transactions aren't accepted, and this seems like a really nice way to have them around for a while, while at the same time there is an economic incentive for them to hopefully go away over time. The layer one, layer two interaction, I'd probably have to know more details about; I assume it happens in various ways for various layer twos, and that's why I'd need more details to talk about it at length. I will say, and I mentioned this briefly, that one of the side benefits of having this base fee is that it should make it easier to know what the typical gas price is at any given moment, namely the base fee, unless you're in a period of rapidly increasing demand. Whereas if you just looked at Etherscan right now, looked at a block, and asked, if I wanted to associate a single gas price with this block, what would it be? You could use the minimum, the average, the median, et cetera; there are these statistics you could use, but those could be manipulated if people knew which statistic you were using. Whereas the base fee is hard to manipulate and, again, outside of sharply increasing demand, should give you a reliable measure of the current gas price. So my hope would be that that would be a quite useful additional piece of functionality, or really an improvement, for interactions with layer two down the line. So I think, hi Tim, I think the easy way to think of the layer two thing is: the layer two chain generates blocks at a higher frequency than the layer one chain, and within its own domain it's using the exact same algorithm.
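The automatic conversion and the resulting economic pressure can be sketched in a few lines. The dictionary field names follow the eventual 1559 transaction fields, but the helper functions themselves are illustrative.

```python
def convert_legacy(gas_price):
    """Plan A for the transition: a legacy transaction's single gas
    price is interpreted as both the fee cap and the tip."""
    return {"max_fee_per_gas": gas_price, "max_priority_fee_per_gas": gas_price}


def effective_tip(tx, base_fee):
    """What the miner actually earns per unit of gas under 1559: the
    tip, capped by whatever headroom is left under the fee cap."""
    return min(tx["max_priority_fee_per_gas"],
               tx["max_fee_per_gas"] - base_fee)
```

For example, with a base fee of 40, a converted legacy transaction priced at 50 hands the miner the entire 10-unit surplus, while a native 1559 user could set a tip of 1 and keep the other 9. That gap is the economic incentive for legacy transactions to die out over time.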
So it takes a bunch of transactions, it generates blocks, it then publishes those blocks on L1 in an L1 transaction, and that's basically all there is to it. Okay, thanks, Rick. I suspect what Fred might be referring to is that before the first version of 1559, we had two transaction pools at the same time. So within a single block, half the gas was dedicated to 1559 transactions and half the gas was dedicated to legacy transactions. And so I suspect the question might have been: what kind of interactions do you expect to see there, if we did that instead? I see. Interesting. So would there be perverse incentives there, you know? Yeah, because it definitely gets more complicated for users, who have to decide which is better for them: do you want to do a 1559 transaction, or do you want to take the gamble of a legacy transaction with a single gas price? Right, exactly. And my recollection, correct me if I'm wrong, is that part of why this idea was set aside in favor of this default interpretation of legacy transactions is that that problem goes away. Is that right? Yeah, that problem goes away, and a couple of others as well. Yeah. So, you know, it's not something I've thought about deeply, but the latest transition plans that I've seen seem to me like a pretty smart approach. Are there any drawbacks with this automatic conversion? So I'll have some comments on this transition, but perhaps Michel can go first, if I'm pronouncing your name correctly. So Michel wrote a notebook on the transition period between 1559 and legacy transactions, and maybe you can share it now. Sure. I guess, yeah, just before we go there: are there other areas of questions that people had about the report? Because I think the legacy conversion and whatnot is a whole other can of worms.
And like, yeah, I think we could cover it right after, but I just wanted to give the space if people have other questions they wanted to bring up about the economic analysis first. Yeah, I have just one. So go ahead. So I have one question. I think it's slightly outside of the report, but still very relevant. If we look at the transaction market as something that exists by itself, and the transaction value is always external, so there's no other market to relate to, it's all fine. But what if we have a decentralized finance market where miners can hedge the cost of collusion, of the attack, if they can actually benefit from higher fee burning or from the fees going down? In particular, we actually work on a project where miners would be able to make financial transactions where they would benefit if the fees go higher or lower. And if they can make big bets on these, then they can cover the cost of the attack. Did you consider this kind of co-existence of the two markets, the decentralized finance market and the transaction fee market? Yeah, so not explicitly. I find it quite interesting. I think where it ties into the report is the discussion around how to get miner buy-in to the proposal. You could argue about to what extent that's necessary, and if you agree that it's necessary, you can argue about how you might want to do it. And I think having some kind of financial instruments, so that you can argue that miners are going to win either way, especially if it's something where they're particularly well positioned to make smart bets, you could imagine that speeding up adoption, lowering the current pushback that I believe the community is seeing from miners. Yeah, great, thank you. When you were analyzing the, sorry, which is the section, I had a question and I just lost it, damn it. Someone else go.
Yeah, I would just like to make a brief comment. As the person who proposed the two-pool approach, you know, having two transaction types and two transaction pools: the purpose of that was not game-theoretic. It was to force the removal of the dead code path of having one transaction type be interpreted two different ways. Just to clarify. Yeah. There's also something else that may be worth noting; I'll just brush over it quickly. You can see the layer two transaction fee market interacting in a similar way to this first proposal of two transaction pools, because you can see it as part of the layer one gas being used and reserved for a separate transaction fee market. So I think the interactions in those two cases might be comparable, though not with this new transition period. Yeah, that's a really interesting idea. I think the difference is, well, there are two points, and I'll say the nicer one first. The operator of the layer two, whether that's a federation or an individual or whatever, ultimately has some discretion, right? They have, within their protocol, the ability to not participate in the next layer one block. So it is a segmentation, but the two different pools are under different authorities. That's a pretty big difference. And the corollary to that is that 1559 doesn't stop the layer two operators from bribing miners, which is probably what they'd end up doing, practically speaking. There's one more question. I think you already kind of answered this, Tim, but Nick Johnson, who's been one of the, I guess, friendliest critics of 1559 and really wanted to see your report, posted on Twitter yesterday. I'll share the actual tweet in the comments here, and I'll try to summarize his question. Basically, in section 7.4.5, you explained that miners could form these cartels, but it hasn't happened before.
And he says this is probably not a sound way to think about it, and that while the incentive structure is still the same under 1559 as it is now, that fails to consider that the magnitude is very different. Today a cartel benefits from the difference between the monopoly price and the market clearing price, but under 1559 it would benefit to the tune of the difference between the monopoly price and the cost price, which is much larger. So I guess you mentioned earlier that the cost of collusion kind of stays the same, but the benefit goes up. I assume that would be kind of the same answer here to Nick's concern. Right, so I tried to be careful on this point in the report. Maybe there's a way I could have written it that would have been clearer. But I would point to the very first sentence of section 7.4, where I start classifying different types of miner collusion. The very first sentence of the section is: I offer no prediction on whether there will be collusion under 1559. So, okay, if I don't make a prediction, then what do I do? I say, let's do an observational study of the status quo under first-price auctions, brainstorm possible reasons why we're not seeing collusion, and then, for each of these apparent impediments to collusion, assess whether any of those impediments break down because of something specific to 1559. And I argue the answer is no. And in the top-10 takeaways, it's number five, right? So the assertion is not that collusion is as unlikely under 1559 as under first-price auctions. I didn't say that; I very intentionally didn't say that. I just said the impediments are as strong, meaning the problem is as difficult, as far as I can tell, for miners to collude under 1559 as it is now.
Now, again, that doesn't mean I'm saying collusion is no more likely, for exactly the reason that Nick mentions: miners might collude either just because there's more at stake economically, or because they may feel betrayed by the community and therefore be less altruistic. And that's covered in section 7.4.6, the caveats section. There again, there's a sentence that says, and here I was referring to your survey, your questionnaire, Tim, that the strong negative reaction may galvanize miners to sustain collusion to a degree not yet seen under the status quo. So I completely agree with Nick's point; I tried to make it explicit in the report. Perhaps it should have been positioned a little differently so it stood out more, but I actually don't think there's any disagreement there. Cool, thank you. This gets into the question that I was going to ask, which I can just talk about instead. I think the magnitude is off by a pretty large margin here, because right now if miners were to 51% collude, they would make double the block reward plus transaction fees. With 1559, a 51% cartel can still make double the block reward, and they get a little bit more in transaction fees on top of that. And while we have seen some big spikes in transaction fees periodically, the baseline is still way below the block reward. So it's like: if you collude at 51% today you can make $100 million, and with 1559 you collude and make $101 million. I feel like that difference is nowhere near enough to tip the scales, just because the gains from colluding to manipulate 1559 specifically are so small compared to colluding on any type of transaction mining. Just by censoring the other 49% of miners, you double your money. It's easy money right there. So if you can collude, you can make way more money doing other things.
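The magnitude argument here can be written down as a back-of-the-envelope calculation. All the numbers below are hypothetical, chosen only to illustrate the shape of the comparison, not taken from the report or from chain data.

```python
# Hypothetical per-block figures, purely for illustration.
block_reward = 2.0       # ETH issued per block
avg_tips = 0.2           # ETH per block paid as tips (assumed)
avg_base_fee_burn = 0.5  # ETH per block burned as base fee (assumed)

# A 51% cartel censoring the other 49% roughly doubles its take,
# gaining on the order of the full block reward plus tips per block:
gain_from_censorship = block_reward + avg_tips

# The *extra* upside 1559 adds is at most clawing back the burn by
# manipulating the base fee downward:
extra_gain_from_1559 = avg_base_fee_burn

# Under these assumptions, the marginal 1559-specific incentive is a
# fraction of what censorship collusion already offers today.
assert extra_gain_from_1559 < gain_from_censorship
```

Whether the conclusion holds in practice depends entirely on how base fee revenues actually compare to the block reward, which is exactly the prediction Tim declines to make.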
And so that's where I feel the real argument should be: the order of magnitude is just too small, okay? Yeah, so I think you might well be right. I guess in the report I didn't want to presuppose how the base fee revenues would compare to the block reward. I just felt like... That's fair and reasonable. ...any prediction I made on that point might make me look quite foolish a couple of years from now. Sure. And right, so I guess maybe that was the main thing I wanted to say, yeah. Any other final questions for Tim? Okay, yeah, thanks a lot, Tim, for sharing all this. This was pretty helpful, and I'll make sure to link the report in the notes that we have for this call. Yeah, and so I'm going to have to sign off, but just a general comment: this was not some report I envisioned just issuing into the world and then never discussing with anybody. The point of it is to be helpful to the Ethereum community. So if there are follow-up questions, or anything that would make it more helpful, I'm obviously very receptive to that feedback and to future discussions. And what's the best way, maybe for people who are watching the recording, to reach out to you? Email: tim.roughgarden@gmail.com. Great, thank you very much. Thanks everyone. Cool, yeah, so Michel, I hope I'm getting your name right. Do you want to go into your report on the legacy transaction simulations? Yeah, sure. I will try to make it quick and just give you a summary of what I did. So maybe to start with: what was the goal of the simulation I created? I wanted to answer the question of how legacy transactions will be treated versus 1559 transactions by the network when 1559 is in use. I wanted to answer the question of whether the network will give preferential treatment to one type of transaction or the other.
So I created a simulation based on the abm1559 library that was prepared by Barnabé, sorry if I also pronounce your name wrong. I introduced some changes, but I use this library heavily. In my simulation, I distinguish three types of transactions, or maybe three types of users. We have legacy users that for some reason don't use 1559; when these users submit transactions, the transactions have their gas premium, or tip, set to the same value as the max fee. Then we have 1559 users that utilize 1559, but here I decided to distinguish further. A naive user always sets the gas premium to the same value, one wei; these users do not analyze the transaction pool to figure out the optimal, best value of the gas premium. And we also have something I call clever 1559 users, who look at the transaction pool and try to figure out the gas premium they should use in order to be included in a block as soon as possible. I forgot to say that legacy users also try to analyze the transaction pool in order to figure out the best gas price. In each iteration of the simulation, I generate the same number of legacy transactions, naive 1559 transactions, and transactions from the clever users. And what is important: when we look at a trio of transactions, one from each of these three kinds of users, they have the same value, I mean the business value the user associates with a given transaction. Why? Because I want to compare apples with apples. If I have in the transaction pool one legacy transaction, one clever transaction, and one naive transaction with the same business value, then I can compare them in a reasonable way. And as to the conclusions, let's say the most important ones: I calculate a lot of statistics and metrics, so I will only tell you about the basic ones.
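The three bidding strategies can be sketched roughly as follows. This is an illustration of the behaviors Michel describes, not the notebook's actual code; the function names and the exact "clever" rule (outbid the highest pending tip) are assumptions.

```python
def legacy_bid(oracle_price):
    """Legacy users: one gas price, serving as both fee cap and tip."""
    return {"max_fee": oracle_price, "tip": oracle_price}


def naive_1559_bid(value, fixed_tip=1):
    """Naive 1559 users: always the same tiny tip (one wei in the
    simulation), fee cap at the transaction's business value."""
    return {"max_fee": value, "tip": fixed_tip}


def clever_1559_bid(value, pool_tips):
    """Clever 1559 users: inspect the pending tips in the pool and bid
    just above the highest one, capped by their own value."""
    target = (max(pool_tips) + 1) if pool_tips else 1
    return {"max_fee": value, "tip": min(target, value)}
```

Comparing one transaction of each type at the same business value, as Michel does, then amounts to generating the three bids side by side and watching which gets included first.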
If we look at these statistics, we can distinguish phase one and phase two. By phase one, I mean a situation where the base fee grows very quickly, very dynamically; phase two is when the base fee reaches stabilization. In this first phase, all the statistics I calculate, like average gas price per block, average waiting time, and many others, change very dynamically, and it's quite difficult to reason about this phase. Nonetheless, this phase is quite short. And then we have the second phase, when it is much easier to reason about the behavior of the network. According to my simulation, and I think this is good information, when the base fee reaches stabilization, transactions from all three types of users are included in blocks. So we don't have a situation where, for example, only legacy transactions are in blocks, or only 1559 transactions are in blocks. However, in the first stage, when the base fee grows quickly, the situation is different: there I observe that mainly, or almost only, the clever 1559 transactions are included in blocks. Which also means that almost only the clever 1559 users take advantage of the lower values of the base fee. When it comes to the gas price, the conclusions are not surprising. The naive 1559 users pay the least. Why? Because they do not try to be clever; they simply always pay the same gas premium. Whereas the clever 1559 users, or the legacy users who look into the transaction pool and want to pay more to be included in blocks, pay slightly more. But if we compare legacy users and clever 1559 users, they pay more or less the same. What else? I implemented a very simple transaction pool: I simply assume that I can have some maximum number of transactions in the transaction pool, and when there are more transactions, I simply remove the worst ones from the pool.
What I mean by the worst: I sort transactions based on the gas premium they offer to the miner. And what is important, I observe evictions from the transaction pool almost only in this initial phase. Then, when the base fee reaches stabilization, there are almost no evictions, and the transaction pool is not full at all. What else? One more thing, another conclusion that is quite natural, nothing surprising: I also calculated the average waiting time in the transaction pool. Of course, the naive 1559 transactions, which always pay the same gas premium, need to wait longer than legacy or clever 1559 transactions to be included in a block. However, I spotted one interesting thing, though I cannot yet explain why it happens: sometimes I observed that legacy transactions wait longer in the pool, and sometimes that the clever 1559 transactions do. I need to analyze it more carefully to explain why that happens. Okay, so I think those were the bullet points, the most important conclusions I noticed. If you have any questions, feel free to ask. Hey, hi. Yeah, I really enjoyed the notebook, Michel. I think it was a really great use of the library, actually, and I've gotten to play around a bit with it since the start of the week. So we've been looking at, let's say, oracles that give first-price auction legacy users information about the current price that they should pay. One piece of code that Fred added was this idea: let's say we are after the transition, we have 1559 users and legacy users, and the legacy users are deciding their fees based on the oracles, which is also kind of what you are doing in your notebook. And when you have these oracles, the presence of a base fee, even though it's implicit for the legacy users, has a sort of stabilizing effect on the oracle.
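The simple eviction rule Michel describes a little earlier, dropping the transactions that offer the miner the lowest premium once the pool is over capacity, can be sketched like this. The dictionary layout and function name are illustrative, not taken from the notebook.

```python
def evict(pool, base_fee, max_size):
    """Keep only the max_size transactions offering the highest miner
    premium; drop the worst ones, as in Michel's simple pool model."""
    def premium(tx):
        # What the miner would actually earn per gas at the current
        # base fee: the tip, capped by headroom under the fee cap.
        return min(tx["tip"], tx["max_fee"] - base_fee)
    return sorted(pool, key=premium, reverse=True)[:max_size]
```

Under this rule the naive one-wei tippers are the first to be evicted during the initial phase, which matches the observation that evictions essentially stop once the base fee stabilizes and the pool is no longer full.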
So let's say I have 50% of my users who are legacy and 50% of my users who are 1559. You can think of it as some of the users knowing the correct price: that's the 1559 users. And since they know the correct price, and that's the price they're putting in their transactions, they're actually tilting the oracles toward giving that price to the legacy users. So I think of it almost like the first-price auction is a boiling pot of water, and the 1559 users are throwing in cold water, lowering the temperature, allowing the legacy users to have, let's say, a better estimation of the current price of the market. Although it's very implicit, it's not direct, it goes through the oracle. And that may explain also why by the end, when base fee is stabilizing, you find that legacy users and 1559 users are included in the block in almost equal proportion to how they join the market. So yeah, the idea that we had in mind was that since legacy transaction users would be overpaying, they would tend to maybe have some sort of priority, but that's no longer true, let's say, when base fee starts to stabilize, because when that happens, the oracles will start to align themselves with the base fee and provide the legacy users with the actual base fee. And so you should kind of expect this convergence. I don't know if that makes sense and if it's maybe something that you noted as well. Yeah, I think that my simulation results totally confirm what you've just said. Maybe just one comment. What you said is totally true, but only if we assume that these legacy users will not overpay too much, because at least in my simulation, yeah, legacy users ask the oracle for the best price, but this oracle returns the minimal optimal price.
However, if we have some legacy users who really want to pay much more to be included in blocks, then probably even if the base fee stabilizes, I think we will see more legacy users in blocks. Yeah, actually that's true, but I don't think it's true to the magnitude that we expect. So for instance, most of your oracles are based on some kind of percentile of the past transactions. So you look at, let's say, the 95th percentile top-paying transaction, so like MetaMask, when it gives you the fast price, it's kind of this very high percentile. But if you have a base fee which is kind of stable, and most of the transactions, even some of the legacy users who are using the slow or medium price, might actually be targeting the exact base fee, it might start to, let's say, tilt even the fast oracle, the one that would make you overpay, towards the base fee itself. Because it's sort of a distribution thing: because the fee variance in the block is reduced thanks to the base fee, you also have this effect that propagates to the oracle itself. Unless the oracle is some sort of fixed markup, let's say I make you overpay by five gwei, but I think most oracles are based on this idea of looking at the distribution of transactions and setting the price from that. Okay, so maybe one more thing, because it seems to me that we will see this stabilization effect only if we have a big enough number of 1559 users using the network. So here, of course, it's only guessing. The question is how it will look in practice. If we have, let's say, 80, 90% legacy users and only 10% 1559 users, I think it wouldn't look as nice as when we have 50% legacy users and 50% 1559 users. Sorry, I would just like to comment on that. Yeah, I think that's an extremely good point. And to me, I mean, I really appreciate all this research and I think it's really interesting, fascinating work.
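A percentile-style oracle of the kind being discussed could be sketched like this; it is a simplified model, not MetaMask's actual algorithm, and the `percentile` parameter and sample prices are purely illustrative:

```python
# Illustrative percentile oracle: recommend the price at a high
# percentile of recently observed (included) gas prices.
def oracle_price(recent_prices: list[float], percentile: float = 95.0) -> float:
    """Nearest-rank percentile of recent transaction gas prices."""
    ranked = sorted(recent_prices)
    rank = int(round(percentile / 100 * len(ranked))) - 1
    rank = max(0, min(len(ranked) - 1, rank))
    return ranked[rank]

# First-price-auction world: wide spread of bids, so the "fast"
# oracle picks up a high outlier and makes users overpay.
legacy_only = [100, 110, 150, 200, 400, 800]
# Mixed world: 1559 users bid near the (implicit) base fee of ~100,
# tightening the distribution and pulling the oracle down.
mixed = [100, 101, 102, 103, 110, 150]

print(oracle_price(legacy_only))  # 800
print(oracle_price(mixed))        # 150, much closer to the base fee
```
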
As a practical matter, if collectively the community can do something to ensure that 1559 gets adopted by someone like, say, MetaMask, then a lot of this simulation, we don't really have to worry about these corner cases, right? We just know that the majority of people will use 1559. Well, I was just gonna add, I think we had a bunch of discussions about this in the past as well, but we can start with this neutral approach of reaching out to folks. There is already, I think, a lot of support for 1559 in the community. So step one is you reach out to folks like MetaMask, like Coinbase, and ask them to support this. And then step two is, if that doesn't work, in the next hard fork do you wanna add like a carrot or a stick, right, with regards to gas prices or whatnot. But I think it's hard to predict in advance what the adoption rate will be, and therefore to come up with a good plan for how you get the people who you would have wanted to adopt it, but who are not adopting it, to actually do so. So, I do think that there are self-stabilizing incentives, in the sense that the less stable the base fee behaves, just because only a few people have adopted 1559 so far, the more incentive there is for an individual user to actually move to 1559 transactions, just because, again, with a first-price auction you tend to overpay, or it's just generally less controllable. And so basically, the fewer people using 1559, the more attractive it is for individuals to move over and profit from the increased stability locally. And so I would assume that it very quickly would converge to a situation where enough people moved over that the overall situation becomes relatively stable, at least mostly, but of course that's hard to tell. That's a really good point.
And I think what's interesting is a lot of the projects we spoke to as part of the outreach that are managing transactions on behalf of their users really care about giving their users the best price and the best UX. So if there is an incentive to do so, I suspect we'll see a lot of projects wanting to differentiate by adding that, yeah. So another consequence of this insight that the oracles converge is that the more 1559 users you have, the easier it is for legacy users to keep using legacy transactions, like the less they would overpay, because the better their oracles would tend to become, so bouncing on what Rick said. If you get 80% of users by having MetaMask switch to 1559, then for this long tail of users who are not switching, it's actually not that bad. They get a somewhat correct rate. Still, of course, it's kind of a gradient between everybody using first-price auctions versus everybody using 1559, but if most users are 1559 users, then I guess from a legacy user perspective it might not be that bad either. And I think that's not the end of the world, right? Like the direction we're going in at the protocol right now is, if we have support for these 2930 transactions, these 1559 transactions, the legacy transactions, I suspect we'll have to carry a bunch of different transaction types for a while. So I think there's maybe a more meta discussion about how we deal with this long tail of older transaction versions that's kind of out of scope for 1559. And if we have some reasonable, you know, intuitions that there are good incentives for a large portion of the network to adopt it, I think that's probably sufficient, given, yeah, that we still have to maintain some types of legacy transactions anyways due to other reasons. That actually leads me to a question I was having earlier.
So in case the transition to 1559 goes smoothly and there's a lot of adoption early on: a lot of people earlier basically talked about transition periods, which would imply that there's some end of the period where you would presumably completely phase out legacy transactions. But you're basically just saying that that might not be necessary, at least not immediately. I was wondering, is there any important reason why you would ever want to fully phase out legacy transactions instead of just continuously converting them forever? Because, I mean, there are always these edge cases, maybe someone is using some hardware wallet where they really don't have a way of generating the new transaction types or something. The short answer is client code complexity. And I guess the scenario under which it would be very helpful is if you have clients that don't want to sync from Genesis for some reason. So, some people have talked about things like Regenesis, but maybe a more concrete thing is, assume there's the ETH1-ETH2 merge, right? Maybe people want to write clients to be like an ETH1 engine for ETH2, but not sync everything since ETH1's Genesis, just start processing stuff at the merge block. Then if you got to a point where, say, legacy transactions are not supported anymore, they just don't have to implement that, and it makes the client much easier to write. So I think that's the main argument in favor, but when you talk with teams like Geth or other client teams that need to support syncing from Genesis, it doesn't really make a big difference, say, to us on Besu, whether we deprecate legacy transactions or not, because we still need to validate all the blocks where there were legacy transactions. So that means we need to keep that code in the client as well.
But I think that, yeah, the biggest benefit is you could build a client from that point where you don't process those transactions anymore. Yeah, I mean, at the time, my thinking was that there would just be potentially a lot of complex dynamics by having two transaction types that are possible. And I just thought it was really difficult to reason about, I was having a very difficult time figuring out what would happen. And so it's better to just close that door both from an engineering perspective, but as Tim points out, that kind of doesn't work because you have to replay from Genesis, and then also to close that door in terms of economic exploitation. One can also mention a client architecture where, each time there's a fork block, the consensus rules change, so you have a separate engine for each fork. And so it'd be nice if you never have to touch your old engines. It's like, you know, your version one, you don't touch it, you maybe give it security updates, but that's it. Whereas you don't want your V1 code sitting in your V7 code base, which may be completely isolated. Again, it depends on your architecture. I suspect in practice, given the current clients that exist and the teams working on them, nothing like that will happen before an ETH1-ETH2 merge. Yeah, I'm happy to be proven wrong, but my hunch is that's the only kind of point at which it makes sense to change the architecture so much to get there. This is a bit of a tangent though. Yeah, did people have any other questions about the legacy transaction simulations? If not, yeah. Ansgar, I think you had some updates you wanted to share about the transaction pool management, which we spent a bunch of time talking about on the last call.
Sure, yeah, and so just for context, I've been following the 1559 efforts loosely, but I haven't joined most of the previous implementers calls and everything, so I might not be fully up to speed. But basically Tim and I talked, I think two weeks ago or something, and he mentioned that there were some open implementation questions around mempool handling, and so we decided to look into that a little bit. And so I basically wrote up some of my thoughts, specifically around sorting, because I think most of the mempool related questions, like how to handle 1559 transactions differently from legacy transactions, really come down to sorting. And so my basically initial conclusions, and again, those could be off, I'm definitely not yet an expert or anything, but it appears to me that there are really basically two different types of sorting that usually happen in a mempool. The first one is just for miners. That's basically on the high end of transactions: having an efficient way of finding the currently highest-paying transactions. And of course, highest-paying meaning those that basically have the highest effective tip. And currently, of course, you just use the gas price for that. So currently, for example, in Geth the way that's implemented is with a max heap, where you basically have a partially sorted list by maximum gas price, and you just traverse that to find the highest-paying transactions. And that doesn't quite work for 1559 because, unfortunately, and I had these little diagrams, but this observation I think is an old one: with 1559, the relative order of transactions can change when the base fee changes, because of these two parameters. So basically, for low base fees, transactions are usually in the static period, where they basically pay the maximum tip that they are willing to pay.
But then at some point, they reach this kind of inflection point where the base fee becomes so high that it starts eating into the tip they're still willing to pay. And for different transactions, that point is at a different location. And so it can be that a transaction that was willing to pay a higher tip now goes down, and all of a sudden it's willing to pay less than another transaction. And so the relative order can switch, and so you can't have a static sorted data structure anymore. However, I think specifically for the question of mining, it seems to me that you can find a somewhat more clever, but not all that much more complex, way of going about it. So the main observation that I had was basically that within this, what I call the static state, where you are able to pay your full tip, right, transactions that are all currently able to pay their full tip continue to have a static order, because while they are in the static range, that's a static amount, so the ordering stays constant. And then within the declining phase, where your tip is being eaten into by the base fee, it's a linear one-to-one relationship: basically, one more wei in the base fee is one less wei in your tip. And so that means they all shift at the same speed, and so they never intersect. So transactions in that state also never switch order. And so it's really just about transactions that switch between those two states. And so I think what you can do is basically just have one partially sorted heap for the static transactions, and one for the dynamic transactions. There are a few questions though that I don't think I have quite clear answers to yet.
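The static/declining split just described can be sketched as follows; `fee_cap` and `tip_cap` here stand for a transaction's maximum total price per gas and maximum tip per gas (names are illustrative, not any client's actual API):

```python
# Sketch: effective miner tip of a 1559-style transaction at a given
# base fee, and the static/declining classification.
def effective_tip(fee_cap: int, tip_cap: int, base_fee: int) -> int:
    # What the miner actually receives per gas at this base fee.
    return min(tip_cap, fee_cap - base_fee)

def is_static(fee_cap: int, tip_cap: int, base_fee: int) -> bool:
    # "Static" range: the transaction can still pay its full tip, so
    # its effective tip is just tip_cap, independent of the base fee.
    # Declining transactions pay fee_cap - base_fee, so within that
    # group the relative order (by fee_cap) never changes either.
    return fee_cap - base_fee >= tip_cap

# Two transactions whose relative order flips as the base fee rises:
# A: fee_cap=200, tip_cap=10    B: fee_cap=120, tip_cap=15
assert effective_tip(200, 10, 100) < effective_tip(120, 15, 100)  # B pays more
assert effective_tip(200, 10, 115) > effective_tip(120, 15, 115)  # now A does
assert is_static(120, 15, 100) and not is_static(120, 15, 115)
```

So one heap can stay ordered by `tip_cap` (static group) and the other by `fee_cap` (declining group), and only transactions crossing their inflection point move between heaps when the base fee changes.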
So basically what you would have to do every time a new block comes in that changes the base fee is to process the ones that have now passed their inflection point and switch between the two states. And it's not quite clear how you could effectively remove them, because you don't actually want to do a lot of removal from these heaps. So there are a few intricacies, but I think generally, directionally, this is a really solvable problem. And then interestingly though, the other sorting problem in the mempool is on the other side, right? Not the highest-paying transactions. And that's not a miner problem, but one of eviction, right? There you wanna find basically the bottom-tier transactions to get rid of. And this seems to me to be a little bit more complicated, because with legacy transactions, you again just use the gas price. But what you're kind of optimizing for is you wanna get rid of the transactions that have the lowest chance of being included, right? Because those are the ones you wanna drop. And previously, with the static order and everything, that is a very simple decision to make. You just look at the gas price. Now with 1559, again with the dynamic order that can shift over time, it's not clear anymore, right? Just because a transaction right now would have a lower effective tip than another one doesn't mean that it has a lower chance of inclusion, because maybe as soon as the base fee goes a little bit higher, the transaction all of a sudden is willing to pay more or something. So you kind of have to have implicit assumptions about the base fee behavior. So basically, the metric you would want to use is the average effective tip that you expect: the expected value of the effective tip of the transaction over a probability distribution of future base fees.
And of course you don't wanna do anything overly complicated. So the question is just, can you find a simple heuristic that does something of that sort that is good enough? I mean, for eviction, you don't really care if it's a theoretically completely perfect solution. It just has to be practical enough, but it has to be practical enough in a lot of different regimes. So a slowly changing base fee, a quickly increasing one, quickly falling, highly volatile, low volatility, all of these different regimes. So basically the goal is just to find a heuristic that is really robust in all these regimes, but that you can also implement with some efficient data structure. And what you don't wanna do is, basically, every single time a new block comes in, you don't wanna go through your whole mempool, recalculate this expected value for every single transaction, and completely re-sort your mempool. That is too much, I think at least, that is too much housekeeping effort after every single block. So basically, you want some heuristic where you can find some order that you only have to update slightly every single block or something. I don't really have good concrete ideas around that yet. It seems like that should also be kind of solvable though, but it's a little bit more of a complex issue. So these are basically my thoughts on sorting. There's maybe one more special case of transaction replacement, but I think transaction replacement really is not all that complex, because there you really only want it to be predictable by the user. Transaction replacement is where you just replace a pending transaction because you wanna bump it, basically. I think you can just have very simple rules that protect you against those issues, but also kind of keep this structure. But yeah, so, I don't know, I'm not sure if that was clear or something.
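One hedged way to make the expected-value idea concrete is a score per transaction; the scenario distribution here is entirely made up for illustration, and this is a sketch, not a proposed client implementation:

```python
# Sketch of an eviction metric: expected effective tip over an assumed
# distribution of near-future base fees.
def effective_tip(fee_cap: float, tip_cap: float, base_fee: float) -> float:
    # A transaction whose fee cap is below the base fee pays nothing
    # (it cannot be included), hence the max(0, ...).
    return max(0.0, min(tip_cap, fee_cap - base_fee))

def eviction_score(fee_cap: float, tip_cap: float, scenarios) -> float:
    # scenarios: list of (probability, base_fee) pairs summing to 1.
    return sum(p * effective_tip(fee_cap, tip_cap, b) for p, b in scenarios)

# Base fee is 100 now; assume it may move roughly +/-12.5% next block.
scenarios = [(0.25, 87.5), (0.5, 100.0), (0.25, 112.5)]

# X: high tip but barely above today's base fee -> dead if the fee rises.
# Y: lower tip today, but robust to a rising base fee.
x = eviction_score(105.0, 5.0, scenarios)  # 0.25*5 + 0.5*5 + 0.25*0 = 3.75
y = eviction_score(200.0, 4.0, scenarios)  # 4.0 in every scenario
assert x < y  # evict X first, despite its higher tip at today's base fee
```
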
And again, I might have been missing things, or just previous write-ups or whatever on the topic, but that's my rough outline. So: high end for miners, low end for eviction for all nodes, and on the high end you want an exact solution, on the low end just some heuristic that's good enough. That's kind of where I'm at right now. I like that analysis, thanks a lot. I just do have one question, which maybe, also not being in every meeting, I missed something, but when you're talking about evicting transactions, isn't there a velocity? Like, isn't there a maximum rate of change of the base fee, such that you could say it would be a week before this transaction could be included, or a day, or there's some longer bound where you know that the velocity of base fee changes would certainly exclude a transaction for a reasonable amount of time? Yes, there is. I personally advocate for using a strategy like that. The caveat we have to remember, though, is that in a time of rapidly increasing base fee, it is possible to see the transaction pool filled entirely with transactions that meet that criterion. So even if you say you evict any transaction that cannot be included in the next block, it is still possible to have a transaction pool that is entirely filled with transactions that meet that criterion, and you still need to evict. So you still need a secondary eviction strategy to deal with that situation at the least. Yeah, so I would agree that basically a simple yes/no rule always runs into these edge cases, where you can construct a situation where it's very close, basically, but you're still just below whatever base fee they need or something. And so some relative metric, where you have one value per transaction that you can assign, and then you just compare and evict those with the lowest value.
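The velocity bound raised in that question can be sketched like this, using the 12.5% per-block cap on base fee movement from EIP-1559 (the function name is illustrative):

```python
# Sketch: lower bound on blocks before a priced-out transaction could
# possibly be included, given the base fee moves at most 12.5%/block.
import math

MAX_CHANGE = 0.125  # EIP-1559 per-block base fee adjustment cap

def min_blocks_until_includable(fee_cap: float, base_fee: float) -> int:
    """Smallest n such that base_fee * (1 - MAX_CHANGE)**n <= fee_cap,
    i.e. the earliest this transaction could pay the base fee even if
    every block from now on triggers the fastest possible decline."""
    if fee_cap >= base_fee:
        return 0  # already includable
    return math.ceil(math.log(fee_cap / base_fee) / math.log(1 - MAX_CHANGE))

# A fee cap at half the current base fee needs at least 6 maximally
# declining blocks: 0.875**5 ~ 0.513 > 0.5, but 0.875**6 ~ 0.449 <= 0.5.
print(min_blocks_until_includable(50.0, 100.0))  # 6
```

As noted in the reply, this yes/no style bound alone is not a complete eviction strategy, since the whole pool can satisfy it during a rapid base fee rise.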
I think that is preferable, but I do think it illustrates how, while there is some uncertainty where transaction order can flip, there's still a lot of structure, in that it can only flip to a limited extent, because the base fee can only change at a certain rate and all of these things. So I think you can still come up with a sorting that is mostly stable, where, when the base fee starts to change, it only changes a little bit, and so you basically only have to do a little bit of updating of your sorting there. But yeah, the goal really should just be to be able to identify the worst transactions, no matter how close they are to being includable, or how far away, or both. So another thing that lightclient brought up last week or the week before is that if we change the miner bribe, or tip, or whatever I wanted to call it this week, to be static: not per gas, but I'm willing to pay up to this much base fee, and I'm willing to pay this much to the miner, and those two are separate values, and so the total you pay is the sum of the base fee plus the miner bribe. That greatly simplifies the transaction sorting problem, but it introduces a new problem: it greatly increases the complexity of upgrading legacy transactions to this new transaction type. So when people are thinking about this problem, if you can solve that problem, the upgrade, how do we upgrade legacy transactions when the tip, or the miner bribe, or the gas premium, whatever you call it, is static, then this whole problem of transaction sorting goes away, and we're back to basically legacy-style, very simple sorting.
I do have to say, though, because we talked about that quite a bit afterwards, and the impression that I got, and of course you can correct me there, is that that is actually not correct. Because it turns out that for these decisions, you still want to do this kind of chance-of-inclusion thing, just because now it's a cliff. So basically you have a hard drop-off, which still gives you the property that there's a non-static order, because there can be a transaction that is basically higher-paying for a long time, and then, instead of gradually dropping off, it drops off to an inclusion chance of zero. But it still has the property that you can have intersections between the relative value of two transactions. And so it's not the case that all of a sudden it's basically a static order again. You still have the property that the order is dynamic and flips, and so you kind of have to do this expected value thing. So I personally don't actually think that this gets rid of the problem. So Micah, you were saying that the problem is the promotion. Yeah, that's fair, I do think you are correct. Can you guys hear me? Yeah, go ahead. Micah, you were saying that the problem of promoting the legacy transaction types under that suggestion you had is that there's now just a static fee. There's basically no item that depends on a per-gas basis, right? It just doesn't fit with the model we have for upgrades. So for upgrades, the model we have right now, of course, is we just say the legacy transaction's gas price is both the fee cap and the miner bribe, we set both values to the same thing, and everything kind of just works out magically.
If the miner bribe and fee cap are now separate, and they're additive onto each other, so the thing you pay is now base fee plus miner bribe, we can no longer just set the fee cap and the miner bribe to the legacy transaction's gas price. That doesn't work out. I've forgotten exactly why, but it doesn't. I would also argue that the one other major problem that solution has is basically this behavior again of: your transaction is willing to pay a certain tip, and under the dynamic approach, you would usually have this inflection point, and then the tip you're willing to pay slowly degrades, but you can still be included in the block. Whereas under the new proposal, at that point you could just no longer be included. And so from a UX point of view, I think it is also a little bit problematic that now you could have transactions that are price-wise perfectly able to get included, but they can't because of this rule. So I'm personally a little bit skeptical of this approach. Sorry. This is something I thought, so. So I guess just to make sure. Oh, go ahead. No, I only wanted to say that it is a very interesting alternative approach to think about, because, if I remember correctly, that was actually the one that, when lightclient and I were talking about it, let us realize that within these different stages, you still have a static order. So it's definitely a very interesting thought experiment, but I don't personally like it as an actual design. And so, just so I understand, it seems like the next step here on the eviction side is finding: is there a good enough heuristic that we can use, which might have some failure modes, but that should work most of the time. Is that right? Yeah, that's how I would at least see it. Got it.
I think the most important thing is that we do not have a failure mode that results in a DoS vector against clients for the eviction strategy. Almost anything else is optional. That being said, the worst-case eviction strategy is one where you're evicting the transactions most likely to be included, right? That's the pathological failure mode. If you imagine that, then that can become a DoS vector, because now clients are constantly dropping transactions that they'd then have to fetch again as soon as they get included into the next block. And so we do have to be careful about that, but that's really the core: don't allow DoS attacks. Does any of this get easier to solve? I remember hearing, forgive me because this is my first 1559 call, but I remember hearing rumblings about potentially enforcing at the protocol level that blocks are filled first with EIP-1559 transactions. Does that solve any of this? Because you only have to relatively order them, like only order 1559 transactions among themselves and then legacy among themselves. So I believe it does. Oh, yeah. So I'm not sure: the problem is that even within 1559 transactions, if you don't have any of these legacy conversion transactions in there, I think within that block you still have similar issues, at least. Maybe it's easier when most of the tips are somewhat in a similar range or something, but, of course, for the legacy side of things it would make things easier, because then you have the same properties again, but I don't see, at least, why that would solve the issue on the 1559 side. Maybe it would make it a little bit easier, I'm not sure. So what if you also add the static miner fee instead of the per-gas miner fee? Can you then deterministically sort the 1559 pool?
You're saying have two transaction pools, one that is legacy transactions that, once they're included in a block, look like 1559 transactions, but the second pool is actual 1559 transactions, except they have a static tip instead of a per-gas premium, is that correct, my understanding? No, I was suggesting, well, maybe, but I was suggesting that we have the 1559 transactions with the fixed tip, and then we just have legacy transactions as they always were, in a different transaction pool, except they can only be included in a block after 1559 transactions. They can only fill up empty space, basically. Yeah, and so you can evict them however you want, or you can evict all of them if there are only 1559 transactions. And then the 1559 transactions as they are now also would have this sorting problem, I think, because of the per-gas tip. So I don't know that we can have that second pool be elastic, because we don't know if we should. Like, as long as you send out a block that has 1559 transactions first, that is valid. I don't know if we should expand the block. So if the block is under-full, like less than, it's... You're really breaking up, Micah. Yeah, I think you would, in effect, just expand the block one block late. And I think having the 1559 transactions take up all of the block, then having the original transaction type take up the remainder, and then, if that was full, expanding the block, that I think is a really weird game, where it makes sense to do all sorts of weird stuffing and price manipulation, because now you can control the size of the block in this kind of counterintuitive way. I don't know that all of those games are worth the algorithmic benefit that you're aiming for. Okay, cool. Yeah, I just remember hearing this as a suggestion, but I never heard the counterargument for why it wouldn't work, but that makes sense.
Yeah, just because we're running low on time and we still have a couple of other things to cover, is there anything else regarding this that people really wanted to bring up now? Okay, if not, I think, yeah, the last big thing we had is that Abdel has made some progress on generating testnets with a large state. Abdel, do you want to take a few minutes to share that? Yes, sure. So we want to see how the network would work with high block elasticity, like, can the network handle twice the block size as now? And for that, the first approach was to kind of fork mainnet, but we don't really like this approach, because it implies doing some tricky things in the code of the Ethereum clients, and we don't want to merge that code, because we don't want to introduce a new attack vector. So we wanted to explore another approach, which is to not touch the Ethereum clients at all, and to have another standalone service that interacts with the clients, and to see how quickly we could generate a state comparable to mainnet. So we implemented a proof of concept for this service. So I will show you. Can you see my screen? Yeah, yes, okay. So basically we have a standalone service that interacts with the Ethereum client using the RPC endpoint. And we have a few REST APIs, basically APIs to run tasks, because these are all long-running processes, so we need a way, on the client side, to see if a task is completed, the duration of the task, et cetera. And then basically we only require two deployed smart contracts: one to create accounts and one to fill the storage, basically. So in the first version, to create accounts, we were only doing basic transfers, without using a smart contract, but that requires handling a large TPS, and it is more efficient to create a bunch of accounts per transaction. So this is why we create the accounts directly in the smart contract. And you can also monitor the number of accounts created.
And we also have the other contract, which is responsible for filling the state storage. I will show you a quick demo. First I start one Ethereum client with a very low difficulty, to quickly produce blocks, and then I start my standalone service, which has the RPC endpoint of my Ethereum client, and we have a web application. It connects to the Ethereum client and retrieves some configuration parameters. For the moment I don't have anything deployed, because I just deployed the network from scratch, so the first thing is to deploy the two required contracts. The second one. And now if I go to the configuration, I can see the addresses of the deployed contracts and some parameters queried directly from the smart contracts. I have not created anything yet, so let's start by creating 10,000 accounts and 15,000 entries in the smart contract. Okay, so the tasks are pending. Let's wait a few seconds. Okay, so the account creation is done and the state storage is done as well. And if I query my smart contract again, I can see that 10,000 accounts have been created and 15,000 entries have been created in the smart contract. I also have the last created address. And to show you some results: we tried several iterations. We started from 10K accounts and 10K entries in the smart contract, and between each iteration we multiplied by 10 and measured the time needed to build the state. The last iteration was 100 million, which is comparable to mainnet, and it took basically four days to build this large state. The two processes were run sequentially; the next step will be to try running them in parallel. Obviously we did these tests with a single-node network. And if the approach is reasonable for you guys, one next step will be to set up a new EIP-1559 testnet and build a large state comparable to mainnet.
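For a sense of scale, the reported figures imply a throughput that is easy to sanity-check. This is just back-of-the-envelope arithmetic on the approximate numbers from the call — roughly 100 million accounts plus 100 million storage entries, built sequentially in about four days:

```python
# Rough sanity check of the state-generation throughput reported on the call:
# ~100M accounts plus ~100M storage entries, built sequentially in ~4 days.
accounts = 100_000_000
entries = 100_000_000
seconds = 4 * 24 * 3600                    # four days in seconds
rate = (accounts + entries) / seconds      # items written per second, roughly
print(f"~{rate:.0f} state items written per second")
```

That works out to a few hundred state writes per second sustained, which also explains why running the two processes in parallel (and running the generator against every client at once, rather than syncing the state afterwards) is the natural next optimization.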
And I think we'll have to deploy multiple clients of each type — Besu, Nethermind, Geth — and we should try to run this service against all clients directly, rather than building the state on one node and then syncing the other clients. It will be more efficient to make sure we all deploy our clients to the infrastructure and then start generating the state. Hopefully within four days or so we would have something comparable to mainnet, and then we could start to play with the high block elasticity, because we did some tests with high block elasticity on the current testnet, but the state there is very small, so we don't see the impact of a large state. And we started to measure the evolution of the block production time versus the number of accounts — it does have a significant impact, actually. So it will be interesting to see how it works with the large block elasticity. And yeah, that's pretty much it. That's really impressive. I just have a quick question: after you've spent the four days to compute that state — let's see, sorry, it doesn't seem to say the size. The size of the DB? Yeah. It's something like — I will show you — 237 gigs. So does it make sense to create a backup of that for the respective clients so you can run more tests, or do you just want to destroy it? My plan was to destroy it and regenerate something from scratch using the tool, because the time needed is quite reasonable, I guess — less than a week. And these didn't use 1559 transactions, right? Yeah, exactly. So we should probably have one — oh yeah, I think we just did it with legacy transactions. But I agree with you, Rick, that once we do it with 1559-style transactions, we should keep that state and not have everybody run a four-day process every time. It does not really matter, though — I mean, to fill the network we don't need to use 1559 transactions, because most of the work is done in the smart contract anyway.
So that won't affect the results. Yeah, I guess. What we want is: once we have the large state, we want whatever network to be able to run with it. Yeah, okay. Oh, so we could use that. Yeah — put that into a set of clients that support 1559 and then run the transaction generator tool, right? Yeah, okay. So I guess in that case we probably should not delete it; we should probably keep it around. Okay. So yeah, first I wanted to see if the approach makes sense for you, and then we can discuss the next steps. So this is to check if the clients can handle load at the level of mainnet, right? Yeah — with twice the block size of mainnet. So you generate 100 million accounts because mainnet is around 100 million accounts? Yeah. And then there's also a smart contract which has a hundred million storage slots. Yeah, with 20 bytes per slot. Yeah. We're almost out of time. I know, Rai, you wanted to bring up 2718. Do you think you can do that in one or two minutes? Yeah — I think if someone has arguments against it, then we won't, and it'll go somewhere else, but I'm hoping it will just push through quickly. Essentially, the writing's on the wall that 2718 is going to be in Berlin, and its whole point is to introduce transaction types. Is everyone good with having EIP-1559 transactions be a 2718 transaction type? We can just temporarily pick a value like 15 for it, and then pick a — what's it called — an incremental value once it's actually about to go into a hard fork. I guess my question would be: how much does it slow people down right now to add 2718 support — or does it not, because we're already doing it as part of Berlin, right? Yeah, I was going to say that I think all the clients have it now, so it would actually just simplify the encoding and decoding code paths to have that be a type.
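The 2718 proposal being discussed is simple at the wire level: a typed transaction is a single type byte (EIP-2718 reserves the range 0x00–0x7f) followed by an opaque payload, while legacy transactions remain plain RLP lists, whose first byte is always 0xc0 or above — so the two encodings never collide. The sketch below uses the temporary placeholder value 15 mentioned on the call, not a final type assignment, and the function names are illustrative.

```python
# Sketch of the EIP-2718 typed-transaction envelope discussed above.
# A typed transaction is one type byte followed by an opaque payload;
# legacy transactions stay plain RLP lists, whose first byte is >= 0xc0.
# The type value 15 is the temporary placeholder from the call, NOT final.

TYPE_1559_PLACEHOLDER = 0x0f  # "a value of like 15", to be replaced later

def wrap_typed_tx(tx_type: int, payload: bytes) -> bytes:
    """Prefix an opaque transaction payload with its one-byte type."""
    if not 0x00 <= tx_type <= 0x7f:
        raise ValueError("EIP-2718 reserves type bytes 0x00..0x7f")
    return bytes([tx_type]) + payload

def is_legacy_tx(encoded: bytes) -> bool:
    """Legacy transactions are RLP-encoded lists, first byte 0xc0..0xfe."""
    return 0xc0 <= encoded[0] <= 0xfe
```

Because the type byte and the RLP list prefix occupy disjoint ranges, a client can dispatch on the first byte alone, which is why adopting the envelope simplifies the encoding/decoding code paths mentioned above.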
Rami, can you say whether you have merged the master branch? Because I'm not sure they merged the master branch. Yeah, actually we've almost completed it — we just need a couple more hours. So today we are going to create pull requests on the original Geth repo. And would you be confident using the 2718 typed transaction envelope for the 1559 transaction? Have you looked at it? No, not yet. So we are based on top of master, so that transaction-type pull request is not merged yet, right? So maybe it makes sense to wait until it's part of the Geth codebase — until it's actually merged into Geth, I don't know what the status is — and then set a transaction type. And I assume we can figure out async what we want the transaction number to be. Yeah, because I wouldn't want to slow down the work on the large-state testnet if it'll take a while to get it merged into Geth, and then we'd need to update the 1559 implementation in Geth and whatnot. Does that make sense? Sure, makes sense. Once it goes into Geth, then we can switch to 2718. Yeah, and I guess we're kind of out of time, but the final thing I wanted to settle is when it makes sense to have a follow-up call. It feels like we have a lot of parallel threads. Should we have breakout rooms for any of them? Does it make sense to have a call in two weeks instead of a month, so that we can follow up async and share updates in two weeks? What do people feel would be the most productive? I think generally we should actually start planning the road to testnets and to release. We should transition to the stage where we plan how to move it to mainnet, instead of just analyzing it further — there's overwhelming proof, lots of different research cases, showing that it's very solid.
I mean, there are probably a few slightly risky points that were mentioned in the recent report, but apart from that it would be great to start planning how to go to mainnet all the way. I'd still like to have the roadmap: what's the first target date we have for the release, and how do we get there as the clients join? What are the acceptance criteria, from our perspective and from all the clients, for when we say, okay, we are ready? That would be great. Yeah, I agree with you. It seems to me like from a research side it's pretty de-risked. The only two outstanding issues seem to be figuring out this transaction pool sorting, which is not rocket science, it just has to be done, and then maybe looking at the update rule, but that's also pretty minor. With regards to all core devs, waiting until Berlin is out, or at least kind of finalized, probably makes sense before bringing it up there. So I can definitely work on putting together a roadmap over the next two weeks. Maybe it just makes sense to follow up then, to see how the work on the testnet is progressing, whether we have a solution for the transaction pool stuff, and then how we want to bring it to all core devs after the holidays. Yeah, generally I think we should totally decouple it from the Berlin conversation — that will be much, much better for the working group, because I'd still say there's maybe only a 10% chance that this will happen before Berlin. Oh, got it. Okay. One last little aspect that I wanted to mention: I think it might also make sense to start talking a little bit about the general timeline for Ethereum mainnet, because starting maybe a year from now there'll be a lot of these big changes, with the merge and maybe stateless and so on, and we should get some feeling for that, right?
Because I would really hope that 1559 might be able to just go in maybe summer or autumn or something, so that we can steer clear of all of those — because otherwise it might mean an additional delay of over a year just because of all these higher-priority things. I agree — that was always my goal, to get 1559 shipped before stateless, because otherwise having the two come in at the same time is pretty bad. Yeah, and now with the accelerated merge timeline, that might end up being a similar timeframe, yeah. I agree. So I guess, yeah, sorry, we're already a bit over time — are people fine having another call in two weeks, doing stuff async until then, and using that call to do a bit more of the planning? At least I can share a first draft of the plan for what I think makes sense to bring to all core devs, and we can also follow up on the various transaction pool and other issues. Okay, I'll take this as a yes. Thanks a lot everybody, this was great. I'll try to upload it to YouTube later today. Thanks. Bye. Thanks, team. Thank you. Thank you everyone so much. Bye.