Okay, thanks everyone for joining this post-London infrastructure call. The goal here is mostly to discuss what different people have seen since London went live and how we can adjust. I know there have been a lot of conversations around how we handle the new fee mechanism and how we make sure we're providing a good user experience. The first agenda item was an overview of the upgrade; I'll do this quickly. At least on the consensus side, everything worked as expected, so we don't expect to make any changes to the core protocol regarding the fee market. We didn't see anything go wrong or differ from what we would have expected from the testnets and from the simulations we've done in the past. I'm curious to hear from others on this call what they've seen, and it would be useful, if you share something, to also mention what product, or at least what type of product, you're working on. I think wallets have had a lot of interesting experiences, but I'm curious to hear from everybody what they've seen and whether there are any issues they think we should bring up.

I guess I'll go first. My name is Auston Bunsen, I'm the co-founder of QuickNode; we provide blockchain infrastructure to companies. On OpenEthereum, and this is probably very unrelated to the London hard fork, but just in case, I'm going to mention it: we are noticing a lot of dropped peers after upgrading to the version of OpenEthereum that supports London. Again, probably unrelated, but throwing it out there in case it's pertinent.

Good to know. I'm not sure what's causing that. I know OpenEthereum is being deprecated, basically after London.
I assume they still have people looking at the repo. Is there a specific issue you've opened, or that you've seen on the repo? Not yet, we've been playing with configs and trying to figure it out. If you can't figure it out, please share the issue in the All Core Devs chat once you have it; that's really helpful. A lot of the OpenEthereum team has migrated to Erigon, but the devs are still working in the ecosystem, and that's probably something they should look at if it needs to be fixed. Thank you.

If no one has anything burning: I saw Barnabé, you were on the agenda as having some data to present about the upgrade so far, is that right? Yes, I've got a couple of slides. Sure, do you want to go ahead and share? Can you see my screen? Yes.

One disclaimer: it's been only a week, so these are really early impressions, and I did not get as much time as I wanted to dig into more data, but I hope these impressions help frame some of the discussion afterwards. As a look back to a previous conversation we've had, I said I would be looking at three things. First, the gas-used dynamics: when are we in full blocks, where first-price auctions may come back on the table. Second, the base fee: what does its trace look like, is it smooth, is it oscillatory? And last, the oracles: are they doing the job well enough, are they tuned properly?

On gas used: the longest streak of full-ish blocks I found as of yesterday, unless another NFT drop happened in the meantime, was around 35 blocks. It took the base fee from the low 30s to something like over 1500 gwei.
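A quick sanity check on that ramp: under EIP-1559, a completely full block raises the base fee by at most 12.5%, so roughly 35 consecutive full blocks compound to about a 60x increase. A minimal sketch (the starting value is illustrative):

```python
# EIP-1559 base fee update, continuous approximation:
# the fee moves by (gas_used - target) / target / 8 per block,
# i.e. +12.5% for a full block (2x target) and -12.5% for an empty one.
def next_base_fee(base_fee: float, gas_used: int, gas_target: int = 15_000_000) -> float:
    """One step of the EIP-1559 base fee update rule."""
    return base_fee * (1 + (gas_used - gas_target) / gas_target / 8)

# ~35 full blocks compound the base fee by 1.125**35, roughly 61x,
# enough to take it from the low 30s (gwei) to well over 1500 gwei.
base_fee = 32.0  # gwei, illustrative starting point
for _ in range(35):
    base_fee = next_base_fee(base_fee, gas_used=30_000_000)  # full block
print(f"{base_fee:.0f} gwei")  # well over 1500 gwei
```

This matches the magnitude of the observed streak: the ramp from the low 30s to 1500+ gwei is what the update rule predicts for a sustained run of full blocks.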
Below, I'm plotting the priority fees of the 1559 transactions that were included during that ramp-up, specifically the interquartile range: the 25th and 75th percentiles of priority fees. If we expected to see a first-price auction on the priority fee, we would expect a ramp-up that is a bit smoother than what we see here; the priority fees get very high very quickly. My impression is that legacy transactions are still very much dominating the network, and they certainly were at the time. These transactions are sent with a high gas price, and because we cast them into the 1559 format with the full amount as priority fee, the priority fees look high. In turn, the 1559 transactions copy those values via the oracles and also send high priority fees.

I see Micah has his hand raised. Yeah, go ahead. I'm just wondering, you have a couple of dips on that top graph there. Are those artifacts, or was there actually block space available? At some points there were slightly less than full blocks. I can't really take 100% as the cutoff, because there's always a little bit of space, so I took the streak to be where the moving average of gas used was above something like 95%. So there was indeed maybe one block where the gas used was a bit lower than the others. Okay, so it wasn't an empty block or something that could have been an artifact of how mining works; it was partially full, meaning the miner was including transactions? Correct, definitely not empty, because these are properly sized blocks, something like 21 million gas I guess. Okay.

I've published this dashboard on Dune Analytics, so if you want to take a closer look I can send the links afterwards. So maybe we do have first-price auctions, but at the moment it's a bit early to conclude that.
This is what the system looks like when you have a series of full blocks; a lot of what we'll see might be artifacts of having more legacy transactions in the system.

About the base fee: people have noticed that it seems to oscillate quite a bit. It's not really smooth; it goes up, down, up, down. Sometimes a full block is followed by a nearly empty one. Is that normal? We know there is a region of the system's behavior where things can happen that make the base fee look like this. My take at the moment, and this is more reasoned intuition than actual data analysis, is that it comes from legacy transactions. Users set the gas price to something that is higher than the base fee but remains fairly close to it, because the oracles they use tell them, okay, this is the ambient current gas price, so this is what you should set your parameter to. Which means the includable transactions tend to clump around the current base fee. Small upward deviations of the base fee then price out a lot of transactions, and if the base fee decreases a little, suddenly a lot of transactions are includable again. So you can observe these sorts of hiccups, due to the fact that legacy transactions don't have much margin to get into the system, or at least not as much margin as 1559 transactions have with the max fee.

Another point is comparing legacy and 1559 transactions. This is a simulation we did maybe a year ago, trying to simulate a mixed system where you have both legacy users and 1559 users with exactly the same dynamics. What we observed is that it's not really that legacy users overpay, especially when they send their transaction with a gas price close to the base fee.
Rather, it's that they are easily non-includable. These red, purple and brown lines all represent legacy users, while the green and yellow lines, the flat lines here, are 1559 users. What happens is that any time the base fee rises, legacy users suddenly get priced out and have to wait quite a bit. So I see two scenarios: either they are included quickly, if the base fee is stable, or they're priced out and need to wait until the base fee comes back down. My take is that they seem to be paying more with their time than with their money. And, thanks Blocknative for this graph; Perama gave me a heads-up that it exists, it was posted on Discord. These blue spikes are the pending time to inclusion for legacy transactions, and it does seem to be considerably longer than for most 1559 transactions. I think these dynamics are quite interesting, especially while we still have quite a few legacy transactions in the system.

All right, last point, the oracles. This is more of an anecdotal observation, but I was able to send transactions with a 1 gwei priority fee and they got in really quickly, even though MetaMask was at the time recommending I use a minimum of 4 to 5 gwei. This is not to put down MetaMask; it was quite a smooth experience. But as Tim also noticed in the notes for this call, I'm guessing the priority fee oracle might be biased a bit upward. Most likely this is again due to the legacy transactions: even if, as I said, they pay more with their time than with their money, there is still more room for them to overpay on the priority fee, and perhaps this is biasing the fee history oracle a little.
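The clumping dynamic described above can be illustrated with a toy simulation (all numbers are illustrative, not fitted to chain data): legacy senders price their transactions a small margin above a one-block-stale base fee, so when the base fee rises it strands them, the next block goes empty, the fee falls, they become includable again, and blocks alternate between full and empty:

```python
TARGET, LIMIT = 15_000_000, 30_000_000

def simulate(blocks=10, margin=1.05):
    """Toy model of the clumping effect: each block, LIMIT gas of legacy
    demand arrives priced at `margin` times the base fee the senders
    observed one block earlier. Includable transactions fill the block
    (highest price first), then the base fee adjusts per EIP-1559."""
    base_fee = prev_base = 100.0  # gwei, illustrative
    pool, used = [], []
    for _ in range(blocks):
        pool.append((prev_base * margin, LIMIT))  # stale-priced legacy demand
        pool.sort(reverse=True)
        gas_used, rest = 0, []
        for price, gas in pool:
            take = min(gas, LIMIT - gas_used) if price >= base_fee else 0
            gas_used += take
            if gas > take:
                rest.append((price, gas - take))
        pool, used = rest, used + [gas_used]
        prev_base = base_fee
        base_fee *= 1 + (gas_used - TARGET) / TARGET / 8
    return used

print(simulate())  # settles into alternating full / empty blocks
```

The point of the sketch is the mechanism, not the magnitudes: demand clumped just above a lagged base fee reading is enough to produce the full/empty oscillation being discussed, without any change in underlying demand.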
If we want to be sure of that, we could look at the inclusion delay as a function of the priority fee that was sent. That would give us a clearer picture of what's happening. It would be interesting to know how low we can go with the priority fee and what kinds of guarantees on inclusion we get at a given fee. Yeah, go ahead, Micah. Do we know what formula MetaMask is using to determine that 4 to 5 priority fee? I don't know; my understanding was that they were using the fee history oracle, but perhaps they are on the call. Yeah, I don't know for sure, but I think the 4 to 5 is some sort of minimum, depending on the setting. If I remember correctly, on the last call we discussed starting minimums, and 4 to 5 was in the range; I think it was something like two, three and four. So it pulls from fee history, but if that returns something like 1 gwei we might be hitting a minimum. I'm not positive, it's just my guess. I don't think it's a minimum; I used it earlier today and it recommended something like 4.73296, so it didn't seem like a hard-coded number, unless someone had fun with primes. And it's possible we've updated it at this point; I'm not sure when this data was pulled. There might be a minimum, but I still believe there is some kind of dynamic here where fee history is looking at the quantiles, and the quantiles might be quite high because a lot of people are overpaying a little on their priority fee. So it could also be a combination of a hard-coded minimum plus the oracle itself.
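For reference, a percentile-style tip suggestion of the kind being discussed might look like the sketch below. The aggregation rule (median of one reward column) and the sample values are assumptions for illustration, not MetaMask's actual formula:

```python
def suggest_tip(reward_history, percentile_index=0):
    """Given eth_feeHistory-style `reward` data (one row per block, one
    column per requested percentile, values in wei), suggest a tip as the
    median of the chosen percentile column. A sketch of one plausible
    oracle, not any wallet's actual implementation."""
    column = sorted(row[percentile_index] for row in reward_history)
    return column[len(column) // 2]

GWEI = 10**9
# Illustrative data: 25th-percentile rewards over five blocks. If most
# included transactions are legacy casts with generous tips, even a low
# percentile can sit well above the ~1 gwei that actually suffices.
rewards = [[4 * GWEI], [5 * GWEI], [2 * GWEI], [4 * GWEI], [3 * GWEI]]
print(suggest_tip(rewards) / GWEI)  # -> 4.0
```

This makes the bias mechanism concrete: the oracle faithfully reports what included transactions paid, but when the included set is dominated by overpaying legacy transactions, that number can be several times the market-clearing tip.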
If you have a lot of legacy transactions, these might push things up. You could have a case where, say, only 5% of the block is 1559 transactions; those transactions can get in with a 1 gwei priority fee, but if you're looking at, say, the first decile, there isn't even 10% of the block that is 1559 transactions, so you're pulling the priority fee from a legacy transaction. That might explain why it's lower. I thought the fee history endpoint was giving you a percentile of the priority fees included in the last n blocks; is that not correct? In that case I'm wrong. I'm not sure; I'm not super confident in that statement, so if someone knows better please speak up. It would be interesting to dig into this.

Right, so that was the priority fee. The second parameter that relies on an oracle, a bit more crudely at the moment, is the max fee. The early guideline we set was: take the current base fee, multiply by two, add whatever you were going to propose as the priority fee, and this should be good enough; once we have more data we'll make it better. Here is some data, courtesy of Perama, thank you. What he's been looking at is how long a transaction stays viable, given that you set the max fee as a multiple of the current base fee. A 2x max fee definitely remains viable for 30 seconds, because there's no way the base fee can climb that far that fast, and 99%-plus of the time it remains viable for a couple of minutes, even. But the numbers are fairly close even for lower multipliers: 1.7, 1.5, even 1.3. So in a sense, maybe the market isn't so unstable that we need 2x; maybe we can use less aggressive max fees, and a default of 1.5 seems fairly reasonable.
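Those viability numbers have a simple worst-case backbone: since the base fee can rise at most 12.5% per block, a max fee set to m times the current base fee is guaranteed sufficient for floor(log(m) / log(1.125)) consecutive blocks, and usually far longer. A quick calculation:

```python
import math

def guaranteed_viable_blocks(multiplier: float) -> int:
    """Worst case, the base fee grows 12.5% per block, so a max fee of
    `multiplier` times the current base fee is guaranteed sufficient for
    this many consecutive blocks (roughly 13 seconds each)."""
    return math.floor(math.log(multiplier) / math.log(1.125))

for m in (1.3, 1.5, 1.7, 2.0):
    blocks = guaranteed_viable_blocks(m)
    print(f"{m}x -> {blocks} blocks (~{blocks * 13}s) guaranteed, usually far longer")
```

So even 1.3x is guaranteed for a couple of blocks, and 2x for five; the observed viability being close across multipliers reflects the base fee rarely sustaining its maximum growth rate for long.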
This is, of course, more of a static analysis: if everybody changes their max fee to something else, the dynamics could be different. But I do think this is early evidence that 2x might be a little high. What was the time frame this data was gathered over, since launch until now? Yes, I think he posted it two days ago, so pretty much all blocks until then. And does that include that big event you mentioned earlier, where we had a bunch of full blocks in a row? I think so, yes; I would check, but I believe it does. And even if that spike is not included, these spikes are not that long, so one could represent 0.01% of your sample and you would barely see it, if I'm right.

All right, that's about it. One thing I wanted to highlight, which I thought was really interesting and super cool, was the many Dune dashboards; there was a lot of community engagement around looking at the data. I actually learned a lot from conversations on Eth R&D, with plenty of input from wallet implementers and infrastructure providers. Hopefully this continues; I think we're all really keen to dig more into this. Thank you. Thanks for sharing, this was great.

The other thing I wanted to make sure we mentioned on the call, since we talked about the fee history API: there was an issue where the return type for the oldest block was in decimal rather than hex, and that caused some problems. Geth released version 1.10.7 yesterday, where the return type of the oldest block in the history is now a hex string. A few people had mentioned this was causing issues, and it should be fixed if you use the latest release of Geth.
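Client code that has to work against both patched and unpatched nodes can parse that field defensively; a small sketch (the example block number is made up):

```python
def parse_quantity(value):
    """Accept a JSON-RPC quantity whether it arrives as a hex string
    ("0x..."), a decimal string, or a bare int. Useful while older nodes
    (pre-Geth-1.10.7) still return eth_feeHistory's oldestBlock in decimal."""
    if isinstance(value, int):
        return value
    return int(value, 16) if value.startswith("0x") else int(value)

print(parse_quantity("0xc5043f"))  # hex string, per the JSON-RPC convention
print(parse_quantity(12911679))    # bare number, as unpatched nodes returned it
```

Both calls decode to the same block number, so a consumer written this way keeps working across the fix.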
And that's pretty much what we have planned for the agenda, so I'm happy to leave the rest of the call for people's concerns, comments, or anything you all want to discuss.

If I can jump in here: the only thing that I've noticed is that very low priority fees are getting included, for example 0.3. I'm wondering if anybody has any thoughts on why that is happening or how that might change, because previously we'd spoken about the minimum being, for example, one or two, or as Jake said before, possibly 3 to 4. So, any ideas why that is happening? That's interesting. How frequently have you seen 0.3? Not zero; zero can happen if you send a transaction directly to a miner, but I haven't seen things between zero and one. Yeah, people from outside have been testing; I don't know if Roman is on. Yeah, so I sent probably 10 to 15 transactions with 0.2 or 0.3 gwei, and sometimes I set the max priority fee higher, but the transactions would still be included with an effective tip of around 0.3 or 0.4, because the base fee would be higher than I expected; they would still get included. And do you have a rough sense for how long they would sit in the transaction pool? Sometimes it took hours, but multiple times it was around 10 minutes. That's really interesting. The reason we mentioned one or two gwei as an initial default is that it's the price that offsets the uncle risk for miners; it's the economically fair price, if you want. If they include transactions for 1 gwei on average, and they get uncled every so often, they should still end up net ahead.
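The break-even argument can be made concrete with a back-of-the-envelope calculation; every number here (the uncle probability increase, the reward at risk) is an illustrative assumption, not a measured value:

```python
def breakeven_tip_gwei(uncle_prob_increase, reward_at_risk_eth, gas_included):
    """Tip (gwei per gas) at which the extra fee revenue from filling a
    block just offsets the expected loss from the higher uncle risk.
    Inputs are illustrative assumptions, not measured values."""
    expected_loss_eth = uncle_prob_increase * reward_at_risk_eth
    return expected_loss_eth * 1e9 / gas_included  # ETH -> gwei

# E.g.: including 15M extra gas raises the uncle probability by one
# percentage point, and an uncled block forfeits ~1.5 ETH of reward.
print(f"{breakeven_tip_gwei(0.01, 1.5, 15_000_000):.1f} gwei")  # -> 1.0 gwei
```

With these (assumed) parameters, the break-even lands right at the ~1 gwei figure quoted on the call, which is why tips much below that should, in theory, be unprofitable for a rational miner.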
So I wouldn't want to see defaults go below 1 gwei, because then we'd be in a spot where we're sending transactions that are on average not profitable for miners to include, and that might not be great. But some miners might be willing to pick up those transactions if there's nothing else in the transaction pool, and I guess what we're seeing is the equivalent of, before London, the going gas price being 20 and someone sending a transaction at 1 gwei, and some miner just deciding to pick it up. Yeah. So basically, the best way for me to reproduce this was to set the max fee per gas to the 10th percentile of the base fee over the past 100 blocks, plus the value returned by the maxPriorityFeePerGas method. In that case, transactions would usually be included within 10 minutes or so, and the effective tip would be reduced to some value below 1 gwei. Are you saying the effective premium was below one, or the premium you set on the transaction was below one? The one I set was usually higher than 1 gwei. Okay, so that suggests there might actually be a bug in Geth, and I would not be surprised if that's the case. My guess is they're sorting transactions by the premium declared on the transaction, not the effective premium, and not correctly excluding transactions when the effective premium isn't met. So this should be looked into, to see if it's a bug in Geth; if not, it's probably some miner on a custom fork where this got messed up. Either way, I would be curious whether someone can get a transaction through with less than one as the configured premium on the transaction; that would indicate a different situation. Oh yeah, I can try that now. Actually, that happened also: I used 0.2 or 0.3 set as the max fee per gas.
And it worked, with the max priority fee per gas, the max tip. Okay. So when you set that, I think there are probably two situations here. One is a bug, either in Geth or in miners' internal transaction sorting, causing them to not do the rational thing. The other, for when the priority fee on the transaction is actually set lower than one: we saw this a long time ago, back before there was congestion on Ethereum, when you could just throw a transaction out and it would get included eventually because there was always space. There were some miners that mined transactions below what was profitable for them, despite everyone knowing it was unprofitable. We always assumed they were just altruistic miners who didn't really care about the money, who just wanted to make Ethereum great and include everybody who wanted to get included. So you could do even a 0.5 or 0.1 transaction and just wait several hours until this one random altruistic miner showed up and included your transaction. It might be something similar, or it could just be a miner that configured something wrong, or that's testing things. Yeah, it's interesting. There's a good comment in the chat that different miners also have different profitability thresholds. Though I don't think any miner that we know of has uncle risk significantly lower than one, like 0.8. Yeah, we ran the numbers a while back, and 0.8 was for the good miners, the ones who are really well connected and have very low uncle rates; 0.8 was their threshold. Yeah, I'll definitely share this with the Geth team and make sure they look into it. Thanks. Yeah, thanks for sharing the two transactions there. Oh yeah, go ahead.
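The hypothesized sorting bug is easy to state in code: the miner's actual revenue per gas is the effective tip, min(maxPriorityFee, maxFee − baseFee), not the tip declared on the transaction. A sketch (values illustrative):

```python
def effective_tip(max_fee, max_priority_fee, base_fee):
    """What the miner actually earns per gas: the declared tip, capped by
    whatever headroom the max fee leaves above the base fee."""
    return min(max_priority_fee, max_fee - base_fee)

GWEI = 10**9
base_fee = 40 * GWEI
txs = [
    {"max_fee": 41 * GWEI, "tip": 10 * GWEI},   # big declared tip, only 1 gwei of headroom
    {"max_fee": 100 * GWEI, "tip": 2 * GWEI},   # modest tip, fully payable
]

# Naive ordering (by declared tip) prefers the first transaction; rational
# ordering (by effective tip) prefers the second. Sorting naively would
# explain odd inclusions like the ones being discussed.
naive = max(txs, key=lambda t: t["tip"])
rational = max(txs, key=lambda t: effective_tip(t["max_fee"], t["tip"], base_fee))
print(naive["tip"] // GWEI, rational["tip"] // GWEI)  # -> 10 2
```

A pool that sorts by the declared tip would also fail to evict transactions whose effective tip has been squeezed below its configured minimum as the base fee rises, which matches the sub-1-gwei effective tips Roman observed.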
I'm just going to bring up something else, thinking out loud based on some conversations with Etherscan users. It feels like there are two different groups of users when it comes to gas prices. There are the people who want to get their transaction through in a short, or reasonable for them, amount of time, and those are the ones who want the current, or rather the previous, kind of gas oracle experience. And from comments I've seen in the agenda, and even for myself, there is another kind of user who would be okay with spending one or two on the priority fee, with a max fee a few gwei higher than the current base fee, which gives them, say, a 90-plus percent probability of getting that transaction through within the next couple of minutes. I don't know if others have a similar perception that there are two different groups, and that whichever gas oracle you want to show, you have to decide which group you're showing it for. I'd add a third group people have mentioned, which I think is maybe the group having the toughest time: people who want to send a transaction with a low fee and don't mind it waiting for hours in the transaction pool. So it's the person who wants to be in the next block, the person who wants to be in the next five blocks, and the person who wants to be in the next 24 hours. Yeah, I'm curious what other teams think.

Yeah. So the core problem here is that every user has a different time preference: some users have a high time preference, some have a low one. When you try to factor that in, you end up with a far more complicated problem, and the UX becomes insane very rapidly.
Basically it turns it from being able to ask the user a yes-or-no question into asking the user to look at a three-dimensional chart and say where they sit on that curve, in terms of both time preference and financial preference; there are multiple variables in this problem. The reason I have always lobbied for wallet defaults to be what I use, low priority fee, high max fee, is that it simplifies the problem to a boolean question, just a yes or no, which is very easy for users. It assumes that everybody has a time preference of "I want in right now or not at all", not because we think that is the dominant user, but because that is the easiest user to solve the problem for, and it gives a comfortable user experience even on failure: it gracefully degrades to "okay, I'll just come back later". You can express that to users by saying, hey, transaction fees change throughout the day, you might want to try again later; that's easy to communicate, whereas it's very difficult to communicate, hey, here's a three-dimensional curve of all the things you need to consider to get your transaction included. So you're definitely right, there are users across the spectrum with different time preferences and different price preferences, and I definitely encourage wallets to think about how to cater to those different users. I just want to exercise caution about building super-complicated UIs that users see first; as long as the first UI they see is the easy one, I think it can work out pretty well. And then for more advanced users, they can use things like, we've seen a lot of these, ETH Gas Station and I don't remember what the other one was.
I think Blocknative has one as well, with all sorts of different pieces of information, and you can also look at the base fee over time. So you can say, okay, I've noticed the base fee usually drops on Sunday at 4pm, it's usually at its lowest then, so I'm going to wait until Sunday at 4pm, see where the base fee is, and try then. But if there's a big token sale or an NFT sale going on at that time, then I'll wait until 5pm. These things get really complex really fast, so just a warning that if you go down the path of building a UI for that, it gets complicated incredibly quickly. Any comments on that? I did not mean to kill the conversation.

Yeah, hi, I haven't spoken on here before. I'm a dev at mistX, for the Alchemist community. We use a technology called Flashbots, which, for those who don't know it, sends transactions directly to miners. We are having a hard time building the UX around the base fee, because with Flashbots the transaction is signed by the user immediately, including the max gas fee, and is then submitted to Flashbots and retried until it's included; that's the service we provide. The issue is that in the UI we need to estimate the transaction fee to show the user on screen, and this estimate has to include the base fee. Since we are resubmitting at every block, say the transaction is not included at block +1, +2 or +3, but maybe at block +20; that means we need to increase the displayed estimate to cover the base fee the user may end up paying at that point. We basically need to show the max fee they may be paying, to remain honest and transparent about it.
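The display problem has a hard worst-case bound: the base fee moves at most 12.5% per block in either direction, so over a 20-block retry window it can in principle grow about 10x (or shrink roughly 14x). A sketch:

```python
def base_fee_bounds(current_gwei, blocks_ahead):
    """Range the base fee can occupy `blocks_ahead` blocks from now: it
    moves at most 12.5% per block in either direction. Useful for deciding
    what to display when a bundle may be retried for many blocks."""
    return current_gwei * 0.875 ** blocks_ahead, current_gwei * 1.125 ** blocks_ahead

# Illustrative: a 50 gwei base fee, with a bundle retried for up to 20 blocks.
low, high = base_fee_bounds(50, 20)
print(f"after 20 blocks: {low:.1f} to {high:.1f} gwei")
```

This is why an honest worst-case display looks so scary: the upper bound is an order of magnitude above the current fee, even though sustained maximum growth over 20 blocks is extremely unlikely in practice.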
This is a big problem for us, because the display shows a potentially really high fee that the user is most likely not actually going to pay; it would be a very rare case for the base fee to increase at every single block for the next 20 blocks, for example. I think MetaMask, I saw a thread by Dan from MetaMask yesterday covering this, tries to use the average, like how many blocks on average it takes for your transaction to be included. I don't know, Jake, if you have any thoughts on that. Yeah, I can't speak to exactly what the estimate is, but it takes our best guess, and we highlight the estimated number and then also show the max fee as an FYI. It's one of the things we struggled with the most in the UI: you don't want to show a super-high fee that they're most likely not actually going to pay, but you also don't want them to be surprised by a high fee because you never exposed the max fee. So we do our best to estimate what we think the user will pay, highlight that number, and show the max fee as a secondary number. Yeah, I guess the issue we're having is that, whenever you use Uniswap, for example, when you click the swap button you're redirected to the MetaMask window, where you can clearly see what gas fee you're going to pay, so users naturally understand that these are extra fees, network fees. Since we are sending everything to Flashbots, we can't have that MetaMask window open, so we show everything in one go, in one place. And where we're being hurt today is that users compare our prices and our fees with Uniswap's, for example.
And those don't include the base fee or anything that only shows up on the next screen. So that's why we're being hurt at the moment, and it's something we're trying to solve by displaying better and explaining better, but also by finding the right way to show a base fee that's not scary, that doesn't drive our users away.

So this is the problem of the person sending the transaction not being the same person paying for it, and the transaction types we currently support don't allow secondary payers like they used to. It used to be that you could have a different person pay for the transaction than the one who signed it, which is particularly relevant with Flashbots because you can submit a bundle where one party pays and another doesn't. We have talked about this before; there are a couple of options for a new transaction type. One is to make it so miners can choose to cover the base fee. That's something we discussed, and it almost got included, but we withdrew it because we wanted to keep the initial 1559 simpler. I don't think there are any strong arguments against it, so if people have real use cases, and it sounds like you do, for letting miners pay the base fee: what that effectively means is that, from the Flashbots perspective, you would submit a bundle where the user's transaction had a max base fee of whatever, but the base fee would be covered by the miner. I think that would serve your use case. The other option is a new transaction type with two signatures: one from the person who wants to do something on Ethereum, and another from the person who is going to pay for gas. There are no strong arguments against that either; theoretically it just needs a champion to push it through the process and work out all the details.
Both of those things are on the table, so I think we can do better for your situation in the future; it's just that this initial launch didn't have either of them. Yeah, I agree, and we'll definitely look at both of those proposals. I don't think they fully solve it if you want users to pay the fee at the end of the day, but they definitely give us room for extending those kinds of options, and for thinking of other ways to make a profit and pay the user's base fee without scaring them away like it does today.

Anyone else have anything they wanted to share or bring up? If not, one question I had for all the folks here: 1559 was the first time in a long while that we've had such a broadly impacting change to Ethereum, one that rippled across a whole lot of different areas. We have another one of those changes coming in the next, I don't know, six to nine months, depending on how things go with the merge. I'm curious whether people here have anything they would like to see, or think could help them, as we're working on the merge, to make the transition smoother and offer the best experience to their users. Things we didn't do for London that you wish we had, or things you thought were actually quite good and that we should definitely do again. That would be really useful as we're starting to work on this.

Personally, I think these calls are great, and bringing in people from different levels of the stack helps a lot. In terms of what to improve, I would say: try to stagger more, to give each layer a bit more time to implement whatever they have to implement, so the layers on top have enough time to adapt. For instance, having Geth ship fee history and other required changes just a few weeks before the fork made it really, really difficult to get to it.
Yeah, and focusing a lot on making sure that testnets are really representative of what's coming up on mainnet. For instance, something that bit us after the fork was that we had tested everything thoroughly on testnets, but the base fee on testnet was ridiculously low, since blocks were usually not full. And so when we actually got to a higher base fee on mainnet, some things started failing due to being poorly set up. Got it. So you mentioned having more time for different layers of the stack to adapt. What do you think is the right amount of time from when we have a release where the RPCs are available, so that people can actually use the features, to going live on mainnet? Is it one month, two months, three months, hopefully not six months? I would say it depends on the complexity of what we're looking at. For something like 1559, I would say one or two months; probably two months sounds reasonable. Of course, I'm going to push for as much time as I can possibly have, and that means putting more pressure on client and node developers, so I know there is a tension between the timeframes for each part of the stack. Yeah, but it's helpful to know the rough estimate of the time that you need. Thanks, this is really, really valuable. Anyone else? Yeah, Micah, go ahead, what's your question? I'm just curious, since we have a particular group of people here, like wallet developers, I'm curious what you all think you need to do for the merge. I suspect that what you think differs greatly from what you actually need to do, and I think now, sooner rather than later, is a good time to start clearing that up. But I'm curious, what do people think is necessary from wallet developers related to the merge, or not just wallet developers, any third-party integrators?
Well, on the design side, I haven't even thought about it, so I don't know if that helps. You're like a lot of the people I've spoken to. In theory, the merge should have relatively little impact on integrations, but I want to start those conversations now to make sure we're not forgetting something. Is everybody in the same boat, basically, of completely not having thought about it? You just know it's a thing that's sometime in the future? Yeah, this is Jen from Rainbow. I guess when I think about the merge, at least from a wallet perspective, I'm kind of relying on the tools underneath me maybe having to shift a little bit, but relying on not too much changing from a UX perspective. But yeah, also thinking of it as, oh, sometime in the future, and once it gets more real then we'll have these calls again with the different layers about who needs to change what. That's a great point. Maybe: what's the point where it starts to feel real for you all? And I understand it's farther in the future than when the clients start looking at it, because right now it's not even implemented in places like Geth. But what are, I don't know, the signs that will make it feel real for you? Hey, Bruno from Rainbow. I just want to say that I think it's a gradual kind of implementation: first the client level, then having a testnet, then wallets can start playing with it, and then other people, right, in that order. Without having testnets, starting to do actual work, not just thinking about it, is just theory, right? Yeah. When you say testnets, one challenge that we've had in the past is that it's easy to spin up new testnets, like a merge testnet for example.
But because a lot of people actually rely on Görli, Ropsten, and Rinkeby, it's a bit harder to fork them until we're pretty far into the process. How useful is it for you all when we have new testnets? Is it something that's easy for you to integrate, so you can start prototyping, or is it basically useless, because if it's not Görli, Rinkeby, or Ropsten then you can't really do much given how your infrastructure is set up? For us it doesn't make a difference, I think. Okay. Aside from it being a new testnet or not, it depends on how many breaking changes there are in the code or the JSON-RPC. That's what actually breaks or complicates things, not the testnet itself. Yeah, that's helpful. That's pretty much all I had. Anything else anybody wanted to bring up? This is [name unclear] from Anchorage Digital. I know I kind of missed the boat on this, but I just wanted to voice support for, I think it was Dawn from Flashbots, who was mentioning the difficulty of predicting the fees that we're displaying for our customers, and perhaps including the new kinds of transaction types, or whatever the ideas are that we're coming up with, for solving the issue of the base fee changing drastically between when a transaction is initiated and when it's submitted. So I just wanted to reiterate support for that. Got it. I had just a few. I mean, I'd have to think about it a little bit more, but Barnaby, thank you for your notes and presentation. I'd like to digest your findings, and especially that last slide you had with the 2x that covers 100%, or 99, whatever percent of the time.
I'd like to, maybe it'll be easier if we follow up afterwards with some people, just to see if we could break it down even further, because I know that your numbers were for all time. I'd like to distinguish between peaks and norms, like when things are flat versus when things are spiking, like you were talking about. It's funny, because from a wallet perspective we were kind of trapped, because we were trying to take into consideration the user's intent, which means distinguishing between when a user wants now-or-never, versus whenever, versus "I really want to get this in, it's got to be ASAP, and I want to keep trying until it gets in." That urgent-but-extended-timeline case versus just now-or-never, and of course now-or-never is much easier. We do have plans in our UI to say, hey, things are going crazy right now, it might be better just to try again later, when things are spiking. But we're kind of hoping, and I don't know if Etherscan or some other APIs are here, but we're hoping that this could be more of a math problem that is solved by someone who could just give us the numbers that we want for these different scenarios. So the user doesn't have to do the math; the API can do the math, and we can just give them an appropriate suggestion based off of the intent that we read off of the user. And I assume that's probably very hopeful. So it is possible to do the math. Like I mentioned, it's really like a three-dimensional curve, and if you know the inputs for that curve, you can do the math and tell the user, okay, this is what you should do. The hard part is getting those inputs from the user in a digital form. A user just kind of vaguely says, you know, I'm kind of in a hurry.
That's not super helpful for the math side, like turning "I'm kind of in a hurry" into "this is the digitization of my time preference." So if you guys, or someone, can figure out how to distill a user's time preference and price preference relative to each other into quantifiable numbers, then we can definitely put together a formula that will tell them, okay, this is what you should do, based on all of history and what we know about the ecosystem, all these things. I'm not sure if that's reasonable or realistic at all, because even when I ask myself what my time preference is, I can't put that into a number. I don't know what my time preference number is. I just know that, you know, I kind of want it to go in today maybe, or I'm going to go to bed soon and I want to be sure it's in before I fall asleep, so in the next couple of hours. They're very vague numbers for me; I don't know if other people have more solid ones. So I can understand trying to extrapolate a user's intent into actual inputs to a mathematical function, but I'm wondering if we can chop it up into maybe three or four different categories or boxes. Right, I mean, I'd have to think about it a bit more, but if that math function is there, then maybe we can give a little bit more thought to how we translate user intent, or how we even get a signal about user intent. I think we have a few ideas about how to get that signal. Yeah, maybe we can follow up on that. You might be able to craft some straw-man users, where you just describe a particular person, and then you give that fake person some actual numbers to plug into these formulas. And then you ask, are you this person or are you that person? That might be possible.
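The "three or four boxes" idea could be sketched as a lookup from a coarse intent category to concrete fee parameters, fed by priority-fee percentiles (e.g. from eth_feeHistory). The category names, percentile choices, and headroom multipliers below are all my own illustrative guesses, not anyone's shipped heuristic.

```python
# Hypothetical mapping from coarse user intent to fee parameters.
# Every number here is illustrative, not a recommendation.
INTENT_PROFILES = {
    "now_or_never": {"tip_percentile": 90, "base_fee_headroom": 1.125},
    "urgent":       {"tip_percentile": 75, "base_fee_headroom": 1.5},
    "whenever":     {"tip_percentile": 25, "base_fee_headroom": 1.25},
}

def suggest_fees(intent: str, base_fee: int, tip_percentiles: dict) -> dict:
    """Turn an intent category into maxFee/maxPriorityFee suggestions.
    tip_percentiles maps percentile -> observed priority fee in wei."""
    profile = INTENT_PROFILES[intent]
    tip = tip_percentiles[profile["tip_percentile"]]
    max_fee = int(base_fee * profile["base_fee_headroom"]) + tip
    return {"maxPriorityFeePerGas": tip, "maxFeePerGas": max_fee}
```

The point is that once the user picks a box (or a straw-man persona), the rest is mechanical; the hard part, as discussed above, is getting that pick right.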
I think that would be vague, like people won't map exactly to actual real humans, but it might get us closer. So instead of just mapping to the now-or-never person as our only straw man, we'd have three or four straw men, and a user can pick one from a nice little picture book. Yeah, I think that would be helpful. And I guess from our perspective, I loved that Barnaby had that slide about how 2x seems generally, on average, too high, because intuitively we also think that we should tighten the bound, the multiple that's placed on the base fee, and if anything play around with the priority fee instead. Because even if you're urgent, if you're closer to now-or-never, you also want a tighter multiple, because you don't want to be potentially waiting around forever. But if you don't care that much, then it's kind of unfair to show you a huge range of prices that you could use; it's better just to wait around, and maybe you might get dropped if it's really busy, but it's better to give you a tight bound. We don't want to show a user a huge range where, on average, it's a very small subset of that range that they're actually going to be spending. That's what we'd like to avoid. So yeah, I guess I'm just asking for magical estimations. I nominate Barnaby to do magic. Okay, great. Yeah, I don't know if we have enough time, but on that: in Barnaby's slides there was one example of a kind of normal scenario where there was some variability in the base fee, but I think if you were to create a moving average, you would see oscillations in the base fee across it.
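As a rough sanity check on that 2x multiple: under 1559 the base fee rises by at most 12.5% per block, so a cap expressed as a multiple of the current base fee maps directly to a number of worst-case (completely full) blocks it can survive. A small sketch (the function name is mine):

```python
# The base fee moves at most +12.5% per block under EIP-1559, so a
# maxFee expressed as a multiple of the current base fee translates into
# "how many consecutive fully-full blocks can pass before it's too low".
MAX_STEP = 1.125  # worst case: every block completely full

def blocks_covered(multiplier: float) -> int:
    """Consecutive worst-case blocks a maxFee multiplier survives."""
    blocks = 0
    growth = 1.0
    while growth * MAX_STEP <= multiplier:
        growth *= MAX_STEP
        blocks += 1
    return blocks
```

At roughly 13-second blocks, a 2x cap covers five fully-full blocks, so about a minute of sustained worst-case demand; a tighter multiple trades that safety margin for a less scary displayed range.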
And then there was the other scenario with multiple consecutive full blocks, and the issue on the design side is: which of these two scenarios are we currently in? How do we reasonably estimate whether somebody will be included in the next few blocks, based on which curve everything is currently on? It's almost impossible to tell. You don't know what is going to happen; you don't know if the trend is going to continue upwards, and it will take a long time for that period of multiple consecutive full blocks to pass, or whether it's just the normal state where things also come back down. On the design side it's really, really difficult to make a call and say, okay, we're actually in this situation and it's going to take X amount of time for you to be included. It's also quite difficult to flag, okay, we're on an upward trend here and we don't know when this is going to end; it doesn't instill confidence and it's quite difficult to communicate. If somebody figures out how to communicate these potential scenarios, great, perfect, but I'm not sure whether just determining which of these two trends you're on is going to be helpful to lots of people. It would be helpful to some, definitely, but I think there's lots of communication that needs to be done here beyond actually determining those trends. So the core problem here is that anyone who can answer the question of which trend we're on can make far more money by going into finance than we can offer to pay them to help us, because essentially it's the exact same problem as predicting the future price of a stock or commodity.
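The two regimes fall out of the 1559 update rule itself: the next block's base fee moves by up to 1/8 in proportion to how far gas used is from the target. A simplified simulation (it ignores the minimum-increment detail of the real rule, and the 15M target is mainnet's at the time) shows oscillating demand hovering near the start while sustained full blocks compound:

```python
# Simplified EIP-1559 base fee update: delta is proportional to the gap
# between gas used and the 15M target, scaled by 1/8.
TARGET = 15_000_000

def next_base_fee(base_fee: int, gas_used: int) -> int:
    delta = base_fee * (gas_used - TARGET) // (TARGET * 8)
    return base_fee + delta

def simulate(base_fee: int, gas_used_per_block: list) -> list:
    """Trace the base fee across a sequence of blocks' gas usage."""
    fees = [base_fee]
    for g in gas_used_per_block:
        fees.append(next_base_fee(fees[-1], g))
    return fees
```

Three consecutive 30M (full) blocks take a 100 gwei base fee to roughly 142 gwei, while a full block followed by an empty one lands just below where it started; telling which sequence you are in mid-stream is exactly the prediction problem described above.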
It's an attempt to predict future demand for an asset, which comes down to keeping an eye on all the things that are happening in Ethereum, keeping an eye on the news, sentiment analysis. When we get these bursts, they're not bursts because of something predictable; they're always bursts because an event occurred in the world that resulted in everybody suddenly wanting to use Ethereum. Now, that being said, some of these events, quote, are "seasonal." There really is, at a certain time of day, every day, a period when fees are generally lower than at other times of day; we do see strong seasonality in gas prices. Those we can predict and potentially show to users. And they're actually pretty easy to graph, especially now that we have the base fee, it's become really easy. You look at the base fee over time and plot by day, plot by time of day, plot by day and time, and we should see some very strong seasonality. But the ones like we saw earlier, which was an NFT sale or something, those are effectively random for anyone who's not, you know, an NFT buyer. Today it's NFTs, before that it was ICOs, before that it was CryptoKitties; some event occurs in the world, Elon Musk tweets something, and it triggers a burst. So I just want to make sure everybody's aware that it is very unlikely we'll ever fully solve that problem. The best we can do is capture the seasonal stuff, try to present that, and let people know: hey, we're on the morning uptrend, like every morning it starts to go up, so we're going to estimate a little higher; or we're on the evening downtrend, so we're going to estimate a little lower. We can maybe do that, but I think that's probably about the best we're going to get, realistically. And one thing I've noticed is that it was especially difficult.
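The "plot by time of day" seasonality check is simple to do once you have base fee samples (e.g. collected via eth_feeHistory). A minimal sketch, with my own function name:

```python
from collections import defaultdict
from datetime import datetime, timezone

def hourly_base_fee_profile(samples):
    """Average base fee per UTC hour-of-day from (unix_timestamp, base_fee)
    pairs: a crude view of daily seasonality."""
    buckets = defaultdict(list)
    for ts, fee in samples:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        buckets[hour].append(fee)
    return {h: sum(v) / len(v) for h, v in sorted(buckets.items())}
```

Grouping by weekday as well would surface the "plot by day and time" view mentioned above; the NFT-drop spikes would show up as outliers that no hourly average captures.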
I mean, it was really like EIP-1559 went live, and the effects, or some of the effects, were discovered after the go-live. Does something like this exist already and I'm just not aware of it, or is it possible to have a testnet which has all mainnet transactions replayed, maybe with a delay of 24 hours or something? That would allow us to see the impact of such upgrades. Good question; there are two challenges with that. One, the tech to replay transactions is hard. That's solvable, but we don't currently have a team that can work on it. The second part is the money, basically. Because transactions on mainnet are worth something, we get different patterns and different incentives to send than on testnets. So it's hard to get a perfect replay, basically. And to expand on that a little, one of the issues with replay testnets is that they very quickly fall out of sync, because if you're replaying under a different rule set, some transactions will fail that succeeded on mainnet. So, sorry, when we've talked about this before, we were talking about testnets for future changes. Are you talking about a testnet for future changes, or do you want a testnet that just replays history, so you can do back-testing, basically? Well, I'm thinking, for example, for the go-live: if a week before, EIP-1559 had been deployed on a testnet that every day was getting the mainnet transactions with a delay, we would have seen some of the issues that we're having now a bit earlier, with our new UI, new UX, and the impact it has. I understand it's difficult, right? I understand it's not something easy to do. But I was thinking you don't really need to reproduce all of that data, like you don't need to reproduce the from addresses or anything.
I think what matters here is the amounts, the tokens in play, and all the transactions. I mean, if we got the same flow of transactions, the same number of transactions in the system, we could really see the impact those would have on the gas and the user drop-off we're encountering. Yeah, so the core devs have talked about this in the past, and I think we generally got agreement that this would be useful for the core devs as well, just because it's nice, like you said, it's nice to see real-world stuff. The issue that we run into is that when the rules are different between the two chains, they very quickly fall out of sync with each other. You start with one transaction that fails on the testnet but doesn't fail on mainnet, and that leads to the world state being slightly different. And then another transaction fails because the world state is different, and then another one, and this balloons out pretty rapidly. We don't know exactly how rapidly; it probably depends on the specific rule changes. But it means we can't just spin one up and leave it up forever, because eventually the world state will differ so much that it simply doesn't make any sense to replay anymore, because everything's failing. That being said, we talked about doing things like having a daily or weekly reset or something, so we constantly have real-world transactions being replayed on a test network using the new rule set, but we reset it periodically to make sure the world states stay in sync. That is an option, and like I said, the core devs were generally favorable to this idea. It's just a matter of core devs being overworked, and we have to choose, and at least for London we decided not to do this. But we did talk about maybe doing it for the next big feature fork. Like I said, there's general interest; it's just a matter of prioritization. Yeah, thank you.
And yeah, Ligi, you had your hand up a while back and we never got to you. Oh yeah, that was about the other thing. Basically, it's not only personas where you need to read the time preference; it's also the transaction type. For example, I'm the persona that usually doesn't care about timing, but when it comes to Uniswap transactions that have a timeout, then it's a problem; then you want it in there fast. I once made a post on the Magicians forum about that, that we should have a way to signal this: either via NatSpec, so that contract authors specify it in NatSpec, or we add it to the RPCs. As wallets, it's also good for the user experience, because then we have less cognitive load on the user; if they don't need to decide, it's better. But it's also an important signal, because a lot of users don't really know about the timeout or the expiry of the transaction. So basically the transaction itself can also signal that it wants the very-fast persona. Yeah, I would love to see that. We're already a bit over time, but we're still here, so any final questions or comments? If I wanted to follow up with Barnaby or Micah, should I just do that on the Discord? Yeah, so we have a 1559-dev channel, which probably makes sense to use. And then, relatedly, does it make sense to have another one of these calls? Probably not in two weeks, but, I don't know, would it help people in a month, or when people have had more time to dig into this? We don't have to schedule it now, but do people generally want another one of these calls about 1559, and if so, when would be the right timing? So there's at least one yes. I would like at least one more call after we do a little bit more back and forth first, so probably a month makes sense.
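Ligi's point above, that the transaction itself can signal urgency, is already partly recoverable today for known contracts: a Uniswap-style swap carries an on-chain `deadline` argument in its calldata. A sketch of a wallet reading it (the word index is contract-specific; for Uniswap V2's `swapExactTokensForTokens` the deadline is the fifth argument, and the threshold below is an arbitrary assumption of mine):

```python
def swap_deadline(calldata: str, deadline_word_index: int) -> int:
    """Extract a uint256 deadline from ABI-encoded calldata, given the
    32-byte word index of the deadline argument after the 4-byte
    function selector (contract-specific knowledge)."""
    body = calldata[2 + 8:] if calldata.startswith("0x") else calldata[8:]
    word = body[deadline_word_index * 64:(deadline_word_index + 1) * 64]
    return int(word, 16)

def urgency_from_deadline(deadline: int, now: int, block_time: int = 13) -> str:
    # Arbitrary cutoff: fewer than ~20 blocks of slack counts as urgent.
    blocks_left = max(0, deadline - now) // block_time
    return "very_fast" if blocks_left < 20 else "normal"
```

A NatSpec tag or an RPC hint, as proposed, would generalize this beyond contracts the wallet happens to recognize.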
Cool, yeah, okay, so let's aim for roughly a month from now. It's probably easiest to have it be literally four weeks from now, at a time when it doesn't conflict with AllCoreDevs or something. I'll make sure to set that up. To your last answer, Tim, are we combining the 1559 channels into just fee market? Yeah, we might rename the channels on Discord to just "fee market," so if you can't find a channel with 1559 in the name, fee market is basically the same thing. It might make sense, I guess, given this and that we're having another call in a month; I'm fine with just holding off on that change. Yeah, I don't think it has to happen right now. But if there is no more 1559 channel, just search for fee market, and if you're not sure, just ask anywhere in the Discord and somebody will share the link. Let me just share the link to the Discord in the chat here. There you go. Cool. Anything else? I recommend anyone who joins mute the channels you're not interested in; there are a lot of them. Yes, this is all of the core dev channels across all of the research, so yeah, definitely mute aggressively. Cool. Well, thanks a lot, everybody, for coming on, and I'll share the information for the next call when it's all set up. See you. Thank you. Thank you.