Yes. Yeah. Okay. So we are recording. Thanks everybody for joining this call to talk about the gas API and how 1559 affects it. If we have time, we can cover any other questions or concerns that folks here have. Trent already shared the agenda in the chat here, and the main topic of discussion today is what we do to return the priority fee in the JSON RPC API. There was already some discussion about that on the issue. Before that, though, Trent put in the agenda a presentation by gas API providers. I don't know if there are folks here who've actually prototyped or looked at what a gas price oracle can look like post-1559, but if anybody wants to share that, it's usually pretty helpful to just start off with looking at something. Otherwise, we can go right into the API. I see there are some Etherscan people here. Or if anybody else wants to just jump in, go ahead. Whoever was just speaking, feel free to speak. Yeah. Hi, I'm from the geth team, and I can talk about what we have as a gas price oracle now, if someone's not familiar with that already. Or does everyone already know that? I think it would be pretty valuable; it was at least valuable for me yesterday and the day before to understand it better. So yeah, I think walking through what you have now and how it's changing under 1559. I know that you and Peter posted some comments as well, but just to make sure we're all on the same page. Yeah, okay, I won't go into very fine details, but it's pretty simple, actually. What we had for a very long time, for regular transactions, was that we took the past, I don't know how many blocks. Well, actually it depended on whether you were running a full node or a light node, because if you were a full node, the gas price oracle took the last 20 blocks, quite a lot.
But if you were running a light client, then maybe just two. What it did is it took the few smallest gas-priced transactions and basically found not the median but slightly below that. So if we put them in descending order, then I think the 60th percentile or something like that, and just returned that as a suggestion. And what we are currently planning, at least the latest team consensus, is that we are going to keep this mechanism, except we feed the effective miner rewards into it. So that's what it will actually use, and this will be a suggestion for the tip, the max priority fee. And for the fee cap, the max fee per gas, we suggest this tip plus twice the current base fee. It's still a good question how many blocks we should take, and it might depend on certain situations. I also had this proposal that I just posted this morning, that it might depend on whether there's congestion right now or not. So we could iterate through the recent blocks and offer different priority fees depending on how urgent it is for you. And maybe this could also be a nice signal for the users to see if there's congestion or not. I can dig up the link, but it's in the 1559 fee market channel. So basically this is what we want to do: take the minimum, or close to minimum, tips of recent blocks and offer something below the median. Yeah, so that's what we want. Yeah, thanks for sharing. So again, on the issue, I think the main concern about the current geth implementation is that if there's a spike in usage, those will likely be short-lived. And the 20-block window is almost remembering too much, looking at too much history.
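The oracle just described can be sketched in a few lines. This is a minimal sketch, not geth's actual code; the field names, the `per_block=3` sampling, and the dict-based block structure are assumptions for illustration. The idea: collect the cheapest effective miner rewards from each recent block, take a percentile near the middle of those minimums as the tip, and suggest a fee cap of tip plus twice the current base fee.

```python
def effective_tip(tx, base_fee):
    """Miner reward per gas for a 1559-style tx: min(max_priority_fee, max_fee - base_fee)."""
    return min(tx["max_priority_fee"], tx["max_fee"] - base_fee)

def suggest_fees(blocks, current_base_fee, lookback=20, per_block=3, percentile=0.6):
    """Tip suggestion from the cheapest tips of recent blocks, plus a fee cap with headroom."""
    samples = []
    for block in blocks[-lookback:]:
        tips = sorted(effective_tip(tx, block["base_fee"]) for tx in block["txs"])
        samples.extend(tips[:per_block])            # cheapest few tips per block
    samples.sort()
    tip = samples[int(percentile * (len(samples) - 1))]  # percentile of the sampled minimums
    fee_cap = tip + 2 * current_base_fee            # tip + twice the current base fee
    return tip, fee_cap
```

The `lookback` parameter is exactly the knob being debated here: 20 blocks for a full node, far fewer for a light client.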
Whereas under 1559, things will probably happen much quicker. If there's a spike, it's likely going to be something on the order of less than 10 blocks. And if you're looking back at 20, you might have users overpay. I'm not so sure about that, actually. If there's a spike, the spike is short-lived. So if you take the recent blocks, if you accommodate yourself to the spike, then you will pay a lot and get in earlier. And if you take the longer history, then you will find a tip that has usually worked in the past. And then what will happen is that you will wait out the spike and get in somewhere at the descending edge of the spike. So I'm not sure. Interesting. Yeah, that's right. So it depends on what you want, how urgently you want to get your transaction in. Yeah. So basically, if I'm understanding correctly, the API would work pretty well if there's no spike at all, if the blocks are pretty constant. It would also work pretty well if there has been a spike in the last 20 blocks but it's kind of over. And it would probably fail when there is a spike happening right now. Then that means you send your transaction and it just has to wait until the spike is cleared to be included. Is that roughly right? Well, yeah, if we use a constant setting, I mean a constant setting for how many blocks we look back, then yes, that's right. And thanks for linking my proposal. I think that kind of addresses this, but this is just putting up ideas right now. But okay, so that's what we have now. Yes. Got it. Micah, your hand is up. So I just want to reiterate my broken-record-ness. Most people here probably know what I'm going to say, but I'll say it again for the new audience: I'm generally against any sort of priority fee estimation.
That is, anything beyond just what we believe the miners' min value is. The reason for this is that it's kind of self-reinforcing, getting people into these auctions and bidding wars. In most cases, it's probably unnecessary, and in the cases that remain, it often can hurt the user as much as it helps them. So I think it's much better that the oracles we're writing, unless we're writing oracles specifically for very advanced users like bot authors and such, which I don't think any of us are, should just say: hey, we know that miners will accept a premium of one or two or three or whatever, and that's unlikely to change, so this is what you need to set the premium to. And that's it. I do not think we should be incentivizing, encouraging and helping people to get into these gas auctions, because they're just going to get themselves hurt; things are going to go wrong. For the end user, I don't think it really improves anything in a significant way. And it's a lot of work and a lot of complexity, and you have to expose this in UIs, and it's just a huge headache that I really don't think is going to help us down the road. Yeah, I kind of agree. But this is why I'm saying that sometimes it makes sense to look more into the past, and say okay, this is the minimum that has ever worked, and suggest that. But you are talking about using a constant, basically, and prices do change. Miner preferences, the technology, a lot of things can change. So if these miner settings do change, how will users notice that if we don't look at the facts, like the actual included transactions? Yeah, so I think we do need it to be dynamic.
But that dynamism should be over really long time scales. I want to be cautious here, because it is possible that there is a little bit of incentive for miners to actually have dynamic premium or priority fee pricing based on current MEV rewards. This is really complex, really hard to do, but it is possible and theoretically rational. So I want to be cautious with my words here. But at the same time, I also think it's probably unlikely we're going to see miners do this anytime soon, because it's a lot of work and the gains are pretty minor compared to the other engineering tasks they could be doing. And so I think we can look kind of longitudinally and say, you know, the clients that are out there, like geth, that we think miners are using, have just a command line option to set your minimum priority fee. And we believe most miners are just setting that minimum to something. And we have seen, over the last 10,000 blocks, that 95% of the miners have been below two, or have mined a block with a transaction below two. So set your priority fee to two. And I want to be careful not to get into trying to be too dynamic, not trying to adjust hyper-fast to what we think miners might be changing, because most of the time when that changes, it's just due to a very short-term congestion spike and does not last. So I do think it should be dynamic; we shouldn't just hard-code it. But yeah, I really want to be careful. Can I go next? Yes, go ahead. Right. So I do think the value probably needs to be dynamic. But the issue with looking at, let's say, past records of what people have been bidding is that we might be too slow to actually catch that the spikes are happening, in which case, while the spike is happening, you're still recommending the minimum tip to users.
And at the end, when the spike is over, your indicator would still be trailing these high values, and it might not be that useful. But we do have an objective source that we get for free from 1559 itself. We don't need to look at what users are doing; we can simply look at how full the blocks are, or maybe the two or three recent blocks. And if we see that two or three blocks in sequence, or even just the previous block, were full, then we know that we're in one of these spike regimes. We don't need to wait to see users increasing their tips, because they might not do that first by themselves; they might rely on wallets which would do that for them. And second, even if we wait for this, with the parameters that are set, looking back 20 blocks and looking at the percentile, it's not clear that you would catch immediately that the spike is happening. You can really only get it quickly enough by looking at the gas usage in the blocks themselves. So this is what I was advocating for. I understand that it might be very different from the current paradigm, and there's a bit more implementation complexity, but this is where I stand on the API. So do you suggest that we should react quickly to the spikes with the recommendations, then? Well, I think if you're going to react at all. Micah recommends not reacting at all, and that's definitely a valuable position. But I do think it might be valuable for users to have at least some kind of indication that something is going on. So if you do want this indication, I think relying on the gas used by the previous block, or the previous two or three blocks, would be more accurate than relying on more subjective price points such as what the users are currently doing. Yeah. Well, this is why I propose that we should return a series of suggestions depending on how urgent it is. And the users could decide whether they want to fight for priority or not.
And it's also good to see whether there's actually something happening right now. But always suggesting to jump on the spikes, I don't think that's a good idea; offering it as an option, that might be good. Right, I think returning a series of prices as options might be okay, but I would still use the gas used as the metric to check that something is happening, rather than user prices. Because otherwise you will have this kind of self-reinforcing behavior that I think Micah is worried about, and so am I. So yeah, Micah. Yeah, just to reinforce what Barnabé says: if we are going to do reactive gas pricing in response to congestion, we should definitely use the fullness of previous blocks to identify congestion. Similarly, when we're trying to determine what the 95th percentile minimum is, if we decide to go with that, we should use that same block fullness to filter out minimums. So if we're trying to figure out, okay, what do we think 95 percent of miners have set their min to, we should first filter out any blocks that were full and not count them at all when computing those numbers. That way we are seeing just the minimums; we're not seeing the congestion times. The thing to keep in mind with this debate of should we be reactive or not is that if everyone is reactive, it turns into a pathological scenario where everyone ends up paying more. The reactiveness is useful as an advantage over competition. And so if you have one user competing against another user, the one that reacts wins. If you are building an ecosystem, all your users presumably are approximately equal; you want to serve them all. In which case, if you build in tooling so everybody competes using the same strategy, you just end up paying miners unnecessarily.
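The two block-fullness ideas above, detecting congestion from recent gas usage and filtering full blocks out of the minimum-tip sample, can be sketched as follows. This is a sketch under assumptions: the dict-based block shape, the three-block window, and the 90% fullness threshold are all illustrative choices, not anything the clients have committed to.

```python
def is_congested(recent_blocks, window=3, full_threshold=0.9):
    """True if the last `window` blocks were all (nearly) full - an objective
    spike signal that does not depend on what users are bidding."""
    tail = recent_blocks[-window:]
    return len(tail) == window and all(
        b["gas_used"] >= full_threshold * b["gas_limit"] for b in tail
    )

def filter_uncongested(blocks, full_threshold=0.9):
    """Drop full blocks before sampling minimum tips, so congestion-time bids
    don't contaminate the baseline minimum estimate."""
    return [b for b in blocks if b["gas_used"] < full_threshold * b["gas_limit"]]
```

Note that under 1559 the base fee targets half-full blocks, which is why a run of full blocks is a meaningful signal rather than the normal state.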
And so if we do introduce these strategies at a very core layer, like in geth, for example, we need to make sure that they're introduced in a way that most people don't use them. I know it sounds weird to introduce a feature that we don't want people to use, but if we introduce it in a way that everybody uses it, then it becomes not useful anymore; it no longer serves a purpose. So we very much need to be careful how we introduce this. And one way to achieve that is by having this concept of transaction priority, like fast, medium, slow or whatever, where fast is saying, yes, I want to be reactive, and slow is saying, no, I don't want to react. One caveat with that, though, is that I'm worried that, compared to the base fee, if the base fee is a hundred and the fast, medium, slow options are like one, two and three, then everybody will always choose fast. And now we're back in the same situation where everybody is choosing fast, at which point it is no longer helping anybody, because everybody's following the same strategy. In order for this to work, we need people to be following different strategies. If everybody follows the same strategy, the strategy stops working. This is very common in game theory. So just keep in mind that we need ways to make sure people are not all following the same strategy. So one question I wanted to ask, a bit tangential to the discussion, is that we're kind of trying to solve the whole gas price suggestion problem before we actually see how the network behaves. My personal sense is that the current model that geth implements is essentially just continuing the old algorithm. And I completely agree that this might be completely unsuitable for certain tasks or certain scenarios, but it kind of worked until now. So wouldn't it be prudent to wait until we see it may not actually work so well, and see how the base fee and the tips fluctuate, before we try to solve this problem?
So kind of the only thing I'm afraid of is that we're coming up with a solution to the wrong problem, because we don't know what the problem is until after the fork. Yeah, but the problem might depend on what we offer as a default option. So I kind of agree with you, but we should also keep in mind that what we will see in practice depends on what we offer as a default option now. Yeah, of course. But essentially, if we continue our current algorithm, then at least we know how wrong it is. Whereas, for example, Micah had a really nice example: if the base fee is 100 and the tips are 1, 2, or 3, then it doesn't really matter. And this is exactly the problem. We don't know how the tip will fluctuate in comparison with the base fee. So that's why I'm saying it's not super easy to solve the problem. At least for me, it's not super clear what the exact problem really is. I thought Greg, you had a comment, and I think you put your hand down. Yeah, for me, it was just kind of coming back on what Micah said, but he kind of answered it. The big one for me is that I've personally believed I would rather have people polling the nodes to figure out a gas price than a third-party API. And in that case, we're always going to have to be competitive to some degree; there has to be some level of competitiveness there. Obviously, the issue being, naturally, that we're going to run into the same problem we have now, where everybody just competing for astronomically high prices is a problem. But in our case, with products that we use, we tried using the node, and we actually had to switch off of geth and OpenEthereum because we just couldn't rely on the node for the gas price anymore. And now we're using a third party, which is not what I want to be doing. So I think we have to do some sort of competitiveness.
Like you said, you just have to be careful. But I do kind of agree with Peter, in the sense that is there something simplistic we can do, and just see how it ends up playing out in the real world? So I think there is a simple thing we can do that has a good chance of working for launch, and then we can reevaluate once we have more data. And that is to encourage the client devs to have a hard-coded default for the min priority fee that miners use, and a hard-coded default for the priority fee that gets returned if you ask for a gas price recommendation, and make sure those two are the same thing. Both of them can be overridden by command line parameters or whatever. But the idea here is that, by default, if all the miners run stock geth and all the users run stock geth, then everything will just work. The min priority fee that miners are accepting is exactly the same as the priority fee that users are using, and everything gets through, with an exception during congestion, at which point we get good data on how congestion happens and what goes on there. And then, a week or two later, we can start making alternative recommendations, and the next patch of geth can maybe include something smarter. But if we can get all the clients to just agree that, hey, our miners will use this by default as the min, and our users will get this as the min, then I think we have something that can work out of the gate. And my guess is that most miners are probably going to run stock geth out of the gate, and similarly watch and see before they crank up their numbers. So we can set that to one, we can set that to two, maybe we set it to five. We believe that one or two is probably the right number, but we could set it to five just because, around launch, that will probably be inconsequential compared to the base fee anyway.
And so miners will mine at five, and it means that it's less likely that miners are going to manually adjust that. Again, that requires all the clients kind of agreeing: hey, these are our launch numbers, just to fill things out. But I think it's really simple, and it gets us to a point where we have more data. So the counter-argument to that would be that currently the gas prices fluctuate. I mean, I have no idea what it is currently; a couple of days ago it was around 30, a week before that it was around a hundred. So you have quite large fluctuations, which means that the recommendation has to fluctuate along with the gas price. Otherwise, the transaction you make will never get included. Oh, I see. You're saying the issue here is specifically that eth_gasPrice needs to work for legacy transactions. No, I'm talking about the internal one. If you want to submit a transaction via geth, then your assumption is that the transaction will go through reasonably fast. Now, if geth will always tell you that the tip is two gwei and the base fee is whatever, then when others are paying a hundred gwei for the tip, I mean, good luck with your two gwei. Yeah, and I think the failure mode of basically hard-coding is that it works only when there's not a spike, right? The trade-off you're describing is: you're guaranteed not to overpay when there's not a spike, but if there is a spike, you'll be way underpriced, and then you need some other way to estimate what the right priority fee is. Yeah, exactly. And the caveat there is that we expect spikes to be both rare and short-lived. And so users that are just using the default will probably still get through. As long as you're setting something like base fee times two, as is common when people talk about this, you'll probably get through in almost all cases.
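The "base fee times two" sizing mentioned here can be checked with simple arithmetic: under EIP-1559 the base fee moves by at most 12.5% per block, so a fee cap of twice the current base fee survives roughly five consecutive completely full blocks (1.125^5 ≈ 1.80, while 1.125^6 ≈ 2.03 already exceeds 2). A small sketch of that calculation:

```python
def blocks_until_cap_exceeded(cap_multiple=2.0, growth=1.125):
    """Consecutive completely full blocks before the base fee outgrows a fee
    cap set at cap_multiple times the starting base fee. EIP-1559 raises the
    base fee by at most 12.5% per block, hence growth=1.125."""
    base = 1.0
    n = 0
    while base * growth <= cap_multiple:
        base *= growth
        n += 1
    return n
```

So a spike has to keep blocks completely full for about half a dozen blocks before a 2x cap gets priced out, which is consistent with the "rare and short-lived" expectation voiced above.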
It just might take you until the end of the spike, and the spike is like seven blocks or whatever, and it's not until the spike ends that you'll get in. There are two cases where you won't get in: one, if there's a spike, and two, if there's a high-value MEV transaction, and this is why setting a constant is a bit harder. Barnabé has made some graphs about this, but basically, if a block has a really high MEV transaction, the opportunity cost of being uncled is quite high, so it's kind of unlikely to include anything with this kind of hard-coded tip. I think when I last looked last week, if you hard-coded a tip of two, then for something like 75% of the blocks with MEV, it still makes sense to include those transactions, and the top 25% probably just won't include transactions with a low tip. So that's the other case where you're just kind of left out. I think right now, last time I checked, about 35 to 40% of blocks have MEV. So that means, statistically, if you're really unlucky, you send your transaction and the block has a ton of MEV in it, but then the block after probably doesn't have a ton, and you get into that block. But yeah, it is a case that I don't think the current gas price oracle can really pick up. It'll probably pick up sort of a longer-run average, and looking at it right now, that would be like two gwei, which would compensate for the uncle risk accounting for something like 75% of the MEV blocks. But you're not going to be included in those blocks where there's like a 10 ETH front-running opportunity. Yeah, again, my caveat here is that we should do this as a launch thing, with plans to change it in the future. And the reason I think this is fine is because... I think you just broke up; we missed the reason you think this is fine. Can you hear me now? Am I still bad? You're good.
Okay, so the reason I think this is fine is because, on launch day, I find it very unlikely that all the miners are suddenly going to have super advanced min-price gas strategies already coded into a patch for geth or whatever miner software they're running, even without having any data on 1559. Just like us, remember, miners are going through this exact same process as we are, where they have no data. They have no idea how things are going to work out in the wild. They don't even have the geth code to work on yet; they can't even start their patch until after we get our release candidates out. And so if we just plan on having this as our launch thing to gain more data, and we know in a couple of weeks we'll change it, or in a month we'll change it, I think that's safe. I don't think we have to worry too much about a large percentage of miners having hyper-advanced gas pricing strategies on launch day. So I wanted to bring up a point, which is that on launch day, on the day of the fork, most clients who are sending transactions are probably going to continue sending legacy transactions until the market has stabilized, or they'll gradually roll things out or something. And many of those folks still rely on the eth_gasPrice API. Assuming that still exists and is at least backwards compatible and continues to return the same implementation for legacy transactions, that means the majority of the market is going to be sending legacy transactions, which get interpreted with max fee and max priority fee set to the same thing. Which means that, unless we are committed to breaking eth_gasPrice and getting rid of that API altogether, clients, geth, are de facto going to be making pricing recommendations anyway. Is that correct? Yes, that's correct. Almost.
So it doesn't really matter what you have at these eth_gasPrice or whatever endpoints, because legacy transactions still only have one gas price field, which acts as both fees. I think that's a little bit different. As long as you're sending a legacy transaction, it doesn't matter how you estimate the gas price; the base fee is still going to get burned out of it. Right. I guess what I'm saying is, yeah, what I'm saying is. Go ahead, Micah. So I think I see a difference here. I think Peter is talking about people who send their transactions unsigned to geth, and then geth fills them in and signs them and submits them. I think Yuga and other people are talking about people who ask geth for the gas price and then fill out their own transaction in a script or an external service, sign it, and give that to geth to submit to the chain. I was talking about the second thing. So if you just ask geth to sign the transaction, we will never sign a legacy transaction; geth will always create a 1559 transaction. I was referring to eth_gasPrice, when you sign it outside of geth. So you just ask for the gas price and create a legacy transaction yourself. In that case, both the tip and the fee cap will be the same. I think that was the initial problem, that everybody will be using the old legacy transactions. So maybe I don't understand: will the return value of eth_gasPrice change with the fork? It will be the same as before. There's a single value, right? A single number. And will that single number be a combination of base fee times two or whatever, plus some priority fee recommendation? It will be the priority fee plus one base fee. Okay, so it will be priority fee plus base fee. So essentially that would be retaining the current behavior.
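Putting together what was just said, the relationship between the legacy value and the 1559 defaults can be sketched like this. The function names and dict keys are illustrative assumptions, not geth's actual API; what the transcript confirms is the arithmetic: the hard part is the tip estimate, the legacy eth_gasPrice value is that tip plus one current base fee, and geth's default fee cap for a 1559 transaction is the tip plus two base fees.

```python
def legacy_gas_price(tip_estimate, base_fee):
    """Single value returned for legacy transactions: tip + one base fee,
    so the base fee is burned and the miner keeps roughly the usual tip."""
    return tip_estimate + base_fee

def default_1559_fields(tip_estimate, base_fee):
    """Default 1559 fee fields if the caller does not set them explicitly."""
    return {
        "maxPriorityFeePerGas": tip_estimate,
        "maxFeePerGas": tip_estimate + 2 * base_fee,  # headroom for base-fee growth
    }
```

For example, with a tip estimate of 2 and a base fee of 100, the legacy endpoint would return 102, while a default 1559 transaction would carry a fee cap of 202.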
People who want to sign a non-legacy transaction would probably want to use a new endpoint that gets the base fee separately and the priority fee separately. Yes. For that, I think last time I made the PR about the new RPC endpoint, so we are going to introduce that. So we do have, whatever it's called in the EIP, that endpoint to actually return just the tip. So if you want to submit 1559 transactions, we do have a separate endpoint to specifically give you a tip. And we do not have an endpoint to give you a fee cap, because if you don't specify it, geth just defaults to the tip plus two base fees; if you want more control, you can specify it yourself. The hard thing to estimate is the tip, so that's why we do support an API for that. Okay, I gotcha. That clears things up for me, thank you. Yuga, I'm sorry for interrupting, I was wrong. No, no worries at all. I guess the only point I'm making is that it's clear that ETH clients are going to make recommendations. There's no way around that, right? Because there are many, many people who rely on these APIs, on the eth_gasPrice API specifically. So the community is de facto making a recommendation about how to price 1559 transactions, because legacy transactions can be interpreted as 1559 transactions. So that ship has essentially sailed, I think. The only question is, what is the type of recommendation we make? I hear you on that; you made a nice point there. But the problem here is that we don't just flip a switch over to 1559; rather, we will have a mix, and what's more, initially most of the transactions will keep being legacy transactions. So people have an expectation of how legacy transactions work, how they are priced, how they compete with each other.
So I don't think we can really break that expectation. And if you have a network with 90% legacy transactions, then you need to create your 1559 transactions in a way that they can actually compete with the legacy transactions. Because if the legacy transactions are bidding high tips, then it doesn't matter how nice an algorithm you come up with for the 1559 transactions; they won't get included, because they will just always be underpriced compared to the legacy transactions. So where I was coming from is that I don't think it's advisable to break the current workflow for legacy transactions, because we have projects, wallets and everything that rely on it, with all its quirks and uglinesses. And if you don't want to break that, then our hands are kind of tied in how we can implement estimations for 1559. But this is a problem I don't have a solution for. So I guess one thing I'd be curious to hear people's thoughts on: Greg, you mentioned earlier that you see it as a bad thing to query a third-party service to get more precise gas price estimates. At the same time, it kind of feels like a separation-of-concerns issue, where geth's main functionality is not to be a gas price oracle, right? It's to be a node and to submit some reasonable estimate for the gas price. And it does feel like 1559 has a much broader design space for gas price oracles. So I'm curious what people feel: if geth has this good-enough, backwards-compatible solution that's not optimal in all cases, does it make sense to have folks like ETH Gas Station, GasNow and whatnot be the ones who come up with fancier APIs that do look at the block history, that do help with more granular use cases? I don't know if people have thoughts on that.
Rick, I see your hand is up. Hi. Yeah, for me personally, I feel like geth is the best place to put an oracle, because everything already kind of needs it, and geth itself needs it, but it's a point I can kind of trust. If a person's trusting Infura, they're going to continue trusting Infura. It's weird that, in order to do anything, I trust Infura, and now also some gas price service, something, something, especially when all the data is sitting in memory in geth; it has to be, for other purposes anyway. And so that's kind of my hope. I mean, at some point I saw somebody recommend even like a histogram of gas prices, but it seems like there should be some way to bubble up information in a call that can be used by a more clever oracle, even if geth doesn't want to make the final call. If it can bubble up enough information that's sitting there literally in memory, it doesn't have to hit the disk or anything, in my mind. So that's my take on that. Like in ethers, when you connect to something, you connect to that thing; if you call getGasPrice, it's not going to start trusting some other service for the gas price. So that's my two cents. Got it. Santiago, I see your hand is up as well. Yeah, I agree with Rick on that. It would be great if geth could solve 95% of the cases, and we're mentioning that we still haven't figured out exactly how to solve the difficult cases, like the bot writers or the traders or people who need to get in during a spike. I think that would be the place where we would rely on GasNow, on ETH Gas Station, or more complex gas price oracles. But for the average user, I would love it if geth could provide the whole solution. Peter, I see your hand is up, go ahead. Yeah, so an interesting question on gas price backends: essentially, currently geth provides one API endpoint.
Now, once 1559 arrives, let's say we will have two API endpoints, one for legacy transactions and one for 1559 transactions. Now, our assumption up until this point is that geth kind of operates in this headless mode, where some app just tells geth to submit this transaction and then geth needs to figure it out. From this perspective, I don't think we can make it much smarter. Now, I think the suggestion to maybe have an additional API endpoint (maybe I'm adding the "additional" part) might be able to provide some more information, but the problem is that, yes, maybe we could be smarter and look at various metrics and try to give some options to the user. But for such an API, essentially you need something in front of geth that can actually show this to the user, or make heads or tails of the recommendations or the variations, and then the user or something gets to pick. But I still think that if you just have a dumb program, like a mining pool payout, that just wants to pay, irrelevant of how much it costs, then you still want the dumb API, which kind of just works and doesn't give any choice. But we're fine with having an additional API endpoint that tries to be a bit smarter and tries to offer up some suggestions. Yeah, that's the way I imagine my suggestion, yes. So this is why I think it's a good thing if the more flexible thing is a generalization of the default thing, and we should definitely leave the default API in. And I also wanted it to work more or less the way it did work before, because, yeah, it's better to not break things that already exist. So, can we get confirmation of what Bernabé asked in the chat, on what exactly the eth_gasPrice API would be returning? Is it going to be the base fee plus the estimation of the max priority fee?
So currently, the gas price oracle in geth just takes the past blocks and tries to see what was the minimum, I think for each block the minimum three tips actually paid to the miner. And then based on that, I think the current suggestion is at the 60th percentile. So essentially it tries to take not the smallest tips within the blocks, but something very close to the smallest tips. Yeah, but I think Bernabé's question was: what will the old eth_gasPrice API recommend? That's what I'm getting at. Sorry, sorry. So essentially, geth calculates a recommendation for the tip, and then for the old eth_gasPrice we just add the current base fee to that tip. And essentially that way, the base fee gets burned, and the tip that the miner gets will be more or less what the miners were getting in the previous blocks. So the miner should be happy with that tip. So basically the answer to Bernabé's question is yes, it's correct. Thank you. Can I ask a quick follow-up on that? What is going to be geth's behavior if it sees a transaction in the mempool with a fee cap that's below the current block's base fee? Is it going to keep it in the mempool or is it going to drop it? The current implementation, actually it was implemented by Joe, is that... okay, you can talk about it. Yeah, just real quickly. So I don't want to again go into details, but yes, we do keep transactions in the mempool that are currently not includable if they have a high fee cap, because then they will likely become includable really soon. So what we do is that for most of the pool, we recalculate the actual miner reward based on the latest base fee, and we prioritize transactions based on that; but there's a little space reserved for those transactions that would fare very badly in this comparison, but still have a high fee cap, or max fee.
And therefore they are worth keeping, because they will be includable in the next, I don't know, five blocks probably. So yeah. Perfect, thank you. So to add to that: currently the transaction pool maintains 4,000 transactions, and with this update we added another 1,000 transactions, whose purpose is to hold those transactions which cannot currently be executed because of where the base fee is, but which otherwise look good. But as a disclaimer, it is a new mechanism, so we hope nothing will blow up in our faces. And the reason I was asking is because there is this intuition that legacy users, who are sending the old format transaction, will always be grossly overpaying, because they have their max fee equal to their max priority fee, et cetera. But actually, if your API returns the base fee plus an estimation of the max priority fee, and if the max priority fee of these legacy users is really large, over time the base fee should kind of compensate for that. And the base fee will sort of match the price levels that these legacy users were sending initially, which means that once that happens, the actual priority fee that these legacy users are sending should be pretty small, and should be once again close to the minimum that miners would accept. And so legacy users are actually a bit hampered by this, because they are recommended prices which are close to the base fee, which means that any small upwards fluctuation of the base fee means they are priced out. It's not like the current mechanism where, okay, there's room, they can still go in; here the base fee is really binding. So I don't think we need to be too worried about these legacy users, and I don't think we need to have this image that they will be really overpaying all the time. I'd like to ask a follow-up question to Peter's comment about the mempool structure there. Currently the mempool is divided into two parts, the queued and the pending.
Did I understand correctly that there is now going to be a new component of the mempool that contains these transactions with a high max fee, but where the fee cap is insufficient for the current block's base fee? Yeah, so this is a different division. Queued and pending, that's a per-account thing, and it's about the ordering of sequential transactions. But there's also a big heap, or we had one big heap, for all the remote transactions. And that priority heap was for eviction of underpriced, very low-priced transactions. And this is what has changed. The way it works now is that, yes, if a transaction falls out from one queue that is based on current miner reward, then it still has a chance to stay in a second queue that is based just on the fee cap, the max fee. So yeah, this is a new queue. And does this additional queue, this new queue, consume additional, sorry, slots? Like we have 4,000 now for the... Yes, so we did not want to break the existing situation, so we raised the mempool size slightly. So now we have 4,000 sorted by current miner reward and an extra 1,000 sorted by fee cap, which is, I think, affordable. And it's also guaranteed that it will not work any worse than before, at least if the code is not broken or something, yeah. Great, thank you for the clarification. So one slight clarification, or a precision, that I wanted to make is that this queue split isn't really introducing any new queues; rather, what it does is just change the eviction algorithm. So previously, when the queue was full, meaning you had 4,000 transactions in the pool and another one arrived, then that one actually needed to push something out. And if there was something cheaper, then it pushed that something cheaper out.
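As a rough sketch of the two-queue eviction idea just described: a new transaction first competes on effective miner reward in the main queue, and whatever loses there gets a second chance in the fee-cap queue. All names are hypothetical and the queue sizes are parameters here (the limits quoted on the call were 4,000 + 1,000); the real geth code is structured differently.

```javascript
// Build a toy two-tier pool with configurable slot counts.
function makePool(urgentSlots, floatingSlots) {
  return { urgentSlots, floatingSlots, urgent: [], floating: [] };
}

// Effective per-gas miner reward of a 1559-style transaction.
function effectiveTip(tx, baseFee) {
  return Math.min(tx.maxPriorityFeePerGas, tx.maxFeePerGas - baseFee);
}

// Admit a transaction: compete on effective tip in the urgent queue first;
// the loser there competes on fee cap in the floating queue.
// Returns false only if the new transaction itself ends up dropped.
function admit(pool, tx, baseFee) {
  pool.urgent.push(tx);
  pool.urgent.sort(
    (a, b) => effectiveTip(a, baseFee) - effectiveTip(b, baseFee)
  );
  if (pool.urgent.length <= pool.urgentSlots) return true;
  const demoted = pool.urgent.shift(); // worst by miner reward
  pool.floating.push(demoted);
  pool.floating.sort((a, b) => a.maxFeePerGas - b.maxFeePerGas);
  if (pool.floating.length <= pool.floatingSlots) return true;
  const dropped = pool.floating.shift(); // worst by fee cap
  return dropped !== tx;
}
```

So a transaction with a tiny tip but a huge fee cap survives in the floating tier, which is the "high max fee, insufficient base fee" case from the question.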
And with the new algorithm, we have a combination: there can be 5,000 transactions, because that's the new limit, and if the new transaction can push something out tip-wise from the 4,000, then it gets included. If it cannot push something out tip-wise, then it can try to push something out from the worst 1,000, fee-cap-wise. So it's just playing around with the eviction rules; otherwise, structurally, the transaction pool remains the same. Any more questions on the gas price oracle? There was one other thing on the agenda, and I just want to flag that we have 10 minutes. So it feels like a natural transition. But I do have one other comment. Oh, go ahead. Yeah. I've seen a lot of people comment that they want to avoid centralized oracles, which I am 100% on board with. I think the thing to keep in mind is that we need to drop our understanding of the old system and think about the new one. In the old system, in order to build an oracle, you needed to basically monitor the pending pool and have access to large amounts of data and the flow of transactions. It was really complicated. These new oracles should be mostly implementable as just a JavaScript library. Like, it'll be three functions long, and you can just copy and paste it into any piece of code. We can have, you know, gists that have them, there'll be GitHub repos that have them, et cetera. You don't need this high-frequency data access. The one exception to that is you do need to know what the base fee is, and so I do think the clients should return the base fee for the next block. And you do need to know what the miner tip estimate is; that one is a data problem, and so I do think there's value in the clients returning data about that. Once we return those two pieces of data, though, everything else should be calculable with a small JavaScript library. You don't need more data than that, like you used to.
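As an illustration of how small such a library can be, here is a sketch in the spirit of the heuristic described earlier on the call: collect the smallest tip paid in each recent block, take roughly the 60th percentile of those, and derive the fee cap as tip plus twice the base fee. Treat the constants and function names as assumptions for illustration, not geth's exact implementation.

```javascript
// Suggest 1559 fees from the pending base fee and a list containing the
// minimum tip observed in each recent block. Percentile defaults to the
// 60th, as mentioned on the call.
function suggestFees(pendingBaseFee, minTipPerBlock, percentile = 0.6) {
  const tips = [...minTipPerBlock].sort((a, b) => a - b);
  const idx = Math.min(tips.length - 1, Math.floor(tips.length * percentile));
  const maxPriorityFeePerGas = tips[idx];
  return {
    maxPriorityFeePerGas,
    // fee cap with headroom so the tx survives a few base fee increases
    maxFeePerGas: maxPriorityFeePerGas + 2 * pendingBaseFee,
    // what the legacy eth_gasPrice style answer would be: tip + base fee
    legacyGasPrice: maxPriorityFeePerGas + pendingBaseFee,
  };
}
```

A wallet could call `suggestFees(baseFee, tips)` with data polled from any client, and tweak the percentile or the headroom multiplier to taste.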
And so I don't think we need to worry about centralization of oracles like we see with GasNow and Infura or whatever, because the oracle is simplified so much that it fits in a library, as long as we have the data we need from the clients. And so I would much rather see these endpoints in the clients return the data that we need. And this is what Rick was talking about: there is some data we do need from the clients, and we need endpoints to get it, like a histogram, for example, of miner priority fees. But once we have that data, every wallet can use their own library. They have their own little oracle, they can tweak it and tune it, we can have standard ones that we share and whatnot, and there won't be centralization. We don't need to worry about centralization as long as the data is available, even if geth doesn't provide any gas price estimator. Yeah, I kind of agree that that is a nice approach, to just expose the data. What I want to still highlight is that the base fee is exposed already, because it's part of the block headers. So, I mean, you can always retrieve the base fee of the current block. If you just retrieve the header, you have the base fee, and you can see whether the block is full or not. So if you must, you could calculate the base fee for the next block, but I don't think anyone wants to estimate that close to the limit. I think some will. It'd be nice if we could have an endpoint for that, just because in order to calculate the base fee for the next block, it is kind of complicated, and you do need the full transaction list, or you at least need the gas used for the block. If you have the gas used for the block and the base fee from the previous block, it's already there. Oh, okay. So, yeah, that can also be in the library. So all you need is the last block, then, and the histogram of historic stuff, I think.
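The "gas used plus previous base fee" calculation just mentioned follows the EIP-1559 update rule (target is half the gas limit, maximum change of one eighth per block). A sketch, using plain Numbers for readability; real wei-denominated values would need BigInt:

```javascript
// Compute the next block's base fee from the latest header fields,
// per the EIP-1559 formula (elasticity multiplier 2, denominator 8).
function nextBaseFee(parent) {
  const target = Math.floor(parent.gasLimit / 2);
  if (parent.gasUsed === target) return parent.baseFeePerGas;
  if (parent.gasUsed > target) {
    const delta = Math.floor(
      Math.floor((parent.baseFeePerGas * (parent.gasUsed - target)) / target) / 8
    );
    return parent.baseFeePerGas + Math.max(delta, 1); // always rises by at least 1
  }
  const delta = Math.floor(
    Math.floor((parent.baseFeePerGas * (target - parent.gasUsed)) / target) / 8
  );
  return parent.baseFeePerGas - delta;
}
```

So a completely full parent block raises the base fee by 12.5%, an empty one lowers it by 12.5%, and a block exactly at target leaves it unchanged.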
Yeah, I tend to agree. Over time, because the estimation was so complicated and now becomes simpler, it probably makes sense that wallets can write some of it themselves. But I do appreciate that this is a transition and you want things to be smooth. So, yeah, I feel like that's probably something we'll gradually see happen. And maybe one thing I can follow up on is how we actually provide this kind of base implementation in JavaScript that, you know, helps you do a good estimation and shows people that, yeah, it's not rocket science and we can do it quite easily. Just because we only have five minutes left, though, and this is kind of related to the same topic: a few folks asked about having a JSON-RPC endpoint for the next block's base fee. I just wanted to check, I guess, both with the people here and with the geth team, how valuable and easy that is. Because it is easy to calculate in a way, but, you know, you do need to actually look at the spec for 1559. So it feels like it's maybe something that the client could do pretty easily, and that third-party libraries would have to fiddle a lot to get working. So I'm curious what people's thoughts are about, I don't know, something like a get-base-fee for the next block. That's pretty trivial, actually. Oh, already? Yeah, it'll basically just look at the block gas used and calculate the base fee for the next block. So, in order for us to construct a pending block, we need the base fee for the pending block. Oh, yeah. So you can just retrieve the pending block and then you have the base fee. Okay, does that work for people here? So you get the block with the pending tag and get the baseFeePerGas from there. That's the answer. Okay. Yeah, so that would already expose it.
I mean, if that's not enough, we can consider exposing it on the API, but I don't know, isn't that enough? I would be happy to work with RicMoo to just make sure that ethers.js can calculate the base fee from the pending block, or from the latest block's base fee. I think it's simple enough that, you know, once JavaScript has it, you can just copy that into whatever your language of choice is; it shouldn't be too hard. It already exists in Python. Yeah. I mean, currently, what I've been doing in my implementation of EIP-1559 is I actually just grab block negative one and take the base fee of that. My one concern with this get-pending-block: is that new, or is that something that predates 1559? Because part of ethers right now is detecting whether or not the network supports EIP-1559 by checking whether the previous block has a base fee on it. So pending has been around for a while. Caveat there: not all clients return the same thing for pending. So for ethers, I recommend being careful about using that endpoint, just because it's not consistent across clients. Right. I mean, I can't even imagine what the other fields would be. So anyways, yes. Yeah. Neither could the clients; they all imagine something different. Right. Yes. So the pending block has been part of Ethereum since forever. But is it getBlockByHash, or getBlockByNumber with the block tag pending? Not by number. If you were getting a number, you were retrieving block minus one. I mean, that's an ethers thing: if you pass in a negative number in ethers, it uses the most recent block number and subtracts it for you. Yeah. So, at least in geth, if you get minus two, that's the pending block. But I don't know if you can actually pass it. So, okay, let me just check which endpoint. If you call getBlockByNumber and pass the string "pending" as the first parameter, you'll get it. I'll try that out.
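For concreteness, the call just described might look like this over raw JSON-RPC (a sketch only; as noted on the call, what clients return for the pending tag varies):

```javascript
// Request body for eth_getBlockByNumber with the "pending" tag.
// The second parameter selects full transaction objects (true) or just
// hashes (false); for the base fee alone, false is enough.
function pendingBlockRequest(id = 1) {
  return {
    jsonrpc: "2.0",
    id,
    method: "eth_getBlockByNumber",
    params: ["pending", false],
  };
}

// The returned block carries baseFeePerGas as a hex quantity string.
function baseFeeFromBlock(block) {
  return BigInt(block.baseFeePerGas);
}
```

You would POST the request object to the node's RPC endpoint and feed the resulting block into `baseFeeFromBlock` to get the pending base fee in wei.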
It's the same thing as if you would pass the word "latest" in that same spot. Then you might need a Boolean as well, for whether you want the full transaction objects or not. So the only downside I see to getting the next base fee using the pending block is that you get a lot of unnecessary data, but it works for our use case, so I'm okay with that. Well, it depends what you call a lot; I mean, sure, the base fee is probably five bytes and the header is 500, so from that perspective, yes, you do waste a lot of data. The question is whether that's too much or not. It's a valid question. So I'm not saying we should not add a get-base-fee; I'm just saying that we can do it currently too. So it might be worthwhile to see how people use it and then add the endpoint that's actually needed. We have two minutes left. Any other quick concerns that people wanted to bring up? I just had a quick comment, or, I'm not sure what the plan is after this, but I was struggling to follow along in some parts. So if someone could give me a summary: it sounded like there are going to be certain phases, there is still a little bit of debate about exactly what the geth client will be providing, and it sounds like the gas station APIs will also be providing some extra fancy features, potentially. As a wallet, we would still rather prefer to be able to get information easily and digestibly, with rich content, from an API. If that's possible from geth, great, but without it, we don't want to have to constantly be polling, on each of our clients, for the last X number of blocks. So it'd be great if both were provided, from an API standpoint as well as from the clients directly. But yeah, if you could summarize what the different phases are for rolling out, that'd be great. Sure, so, right now, or? Oh, no, it doesn't have to be right now.
It can be a summary after the meeting, just to make sure that we understand what's happening. Yeah, and I think it's still kind of in flux, but I'll try to get that, yeah, and I'll share it on the Discord. So one thing before we close: I think Micah mentioned that it would be beneficial for geth, or any client in general, to expose certain past historical, I don't know, histograms of who's been paying how much, or which miner, if we can. So I think providing a gas oracle that works on these is kind of hard for geth, because it's an API that we cannot just change afterwards; if somebody relies on it, we'd be breaking them. However, if it's an API that just provides data that others can build upon, that can remain stable. So if we just provide an API that returns a histogram of priority fees paid, I mean, at worst nobody's going to use it, but we don't need to change the API, it cannot be wrong. So I think that might actually be a really good idea: expose this information so that anyone can build a gas oracle on top if they want something custom. And if something turns out to be nice and stable, then we can also ship that within geth. And the reason I'm saying this is that if we can figure out a reasonable data set to expose from geth, then I think it would be nice to add it. But that one kind of needs an idea to start out from, because ideally you want to have the same data from all clients. So, Micah, since you have a suggestion on what data you would like to see, and I think you also had some histogram idea, maybe you can add some ideas and... Yeah, just to keep it quick and tie things up: my recommendation is that, like I said, geth returns just some data. That data would be, and some of this is already returned, so I'm just going to try to be all-inclusive here:
The base fee of the latest block, the base fee of the pending block, the fullness of the latest block. Yeah, so the fullness of the latest block, base fee of the latest block, base fee of the pending block, and then a histogram of the minimum, the lowest gas price accepted, over the last n blocks, with full blocks filtered out. That full-blocks-filtered-out part, I think, is critical for getting the most useful data here. And I think with that, anyone can build an oracle; with that data, you should be able to build most of the types of oracles I've seen people propose, with just a handful of lines of code in any language. Quick idea: along with the histogram of gas prices, maybe also a histogram of block fullness, if that even makes sense, so that you know how full blocks are. I think it's definitely useful and interesting data, and I can imagine someone wanting to write an oracle that takes that into consideration, like, oh, we've noticed that there's a lot of volatility in block fullness lately, and so we're going to change our strategy. So let's add in, as a stretch goal, a histogram of block fullness over n blocks. And histogram may be the wrong word; I don't know a better word to use for that purpose right now, but. Yeah. So I think it would be super nice if we could just write up a small brain dump of what we would like to see, and then we can see how we could expose the whole thing. Because I guess gathering all that data and exposing it is not particularly complicated; it's more about figuring out what data we actually want to expose and what we want it for. Tim, if you give me a place to put stuff, I can start it off and then let people modify from there. Yeah, okay, sure. I'll send you something. I'll post it in the 1559-fee-market channel in Discord.
If folks want to comment there, yeah, that would be really valuable. So yeah, I'll put together a HackMD or something that anyone can edit. Yeah. Great. Yeah, this was pretty helpful. And I suspect, you know, we'll probably have another one of these calls in a few weeks. And once we actually have 1559 on a testnet, it might also make things a bit more concrete. In the meantime, if you do want to just play in a very experimental way with 1559, we do have a devnet called Calaveras that's up. So that's running; there's a spec for it in the GitHub specs repo. Let me just link it here in the chat if anybody wants to check it out. There's, you know, very basic RPC support and whatnot, but it allows you to send the transactions, and if you have your own tooling to play with them, yeah, that can be useful. Just to mention it: if you download geth's master build, it also has the flag for joining this Calaveras testnet. So with an unstable build of geth, you can join it and you can play with it. I actually had a quick question about that as well. Is there, by any chance, somebody else running an RPC node we could just connect to, and also an explorer? Will that be added for Calaveras? For Calaveras, the explorer is there already; I don't know about the RPC node. Okay. Yeah, the explorer is linked in the spec, and there's an ethstats and a faucet as well. Okay, last quick question: what's the parameter to sync geth with? Oh, perfect, yeah, they answered it in the chat. Yeah, cool. Okay, well, yeah, thanks everybody, and yeah, talk to you all, or at least part of you, in the coming weeks. Thanks everybody for joining. We'll send out an email with the link to the recording and notes, if there are any, or a summary document. All right, bye. Thank you. Thank you. Bye-bye.