So we are recording to the cloud. Thanks everyone for coming to the fifth 1559 implementers call. I just shared the agenda in the chat. Basically, there were a couple of things I wanted to cover today. First, to get a status update on both the implementers and researchers side; I think Abdel and Barnabé can help cover that. We had the merged transaction pool PR, which got decided async, so that's already merged in. The two other things I'd like to get people's thoughts on are the survey I shared last night, which gave a lot of projects' concerns about 1559 — I'm mostly interested in the stuff that relates to implementers, around JSON-RPCs and opcodes and whatnot, and whether people have suggestions for how we can plan to include that to make it easier for projects to test. And then there was this other document, the mainnet readiness checklist, to walk through the things we'd like to see from 1559 before it's ready to bring back to All Core Devs for mainnet consideration. I know last time we talked about moving to a proof-of-work testnet, so I'm curious to get everyone's thoughts on that and what the best next step is from where we are right now. Yeah, so maybe we can start with updates. Abdel, do you want to give an overview? I think it's been about a month since the last call — what have you and the other implementers been working on? Yeah, so we have been working on implementing the latest changes from the specification. The computation of the base fee has been changed and we updated the implementation accordingly. We already deployed the testnet from scratch and we were able to sync also with the Nethermind client, which is great. And now I'm finalizing the remaining changes: the removal of the transition period and also the use of a single transaction pool.
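The base fee computation referenced here follows EIP-1559's update rule: the base fee moves by at most one eighth per block, in proportion to how far the parent block's gas usage was from the target. This is a minimal sketch of that rule; function and parameter names are illustrative, and constants follow the spec as of the time of this call.

```python
def next_base_fee(parent_base_fee, parent_gas_used, gas_target, denominator=8):
    """Compute the next block's base fee from the parent block.

    The base fee rises when the parent block used more gas than the target
    and falls when it used less, by at most 1/denominator per block.
    """
    if parent_gas_used == gas_target:
        return parent_base_fee
    if parent_gas_used > gas_target:
        delta = parent_base_fee * (parent_gas_used - gas_target) // gas_target // denominator
        return parent_base_fee + max(delta, 1)
    delta = parent_base_fee * (gas_target - parent_gas_used) // gas_target // denominator
    return parent_base_fee - delta
```

A completely full block at maximum elasticity (twice the target, as in the 38-million-gas spam test mentioned below) pushes the base fee up by the full 12.5%; an empty block pulls it down by 12.5%.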
So that will be available, I hope, tomorrow, and I will restart a fresh testnet with a version aligned with the latest specification. We are also aligned about the gas price behavior. Micah submitted a PR, I approved it, and it has been merged. So we decided that the gas price opcode should return the effective price the user will pay, which is the minimum of the fee cap and the miner bribe plus the base fee. And yeah, I was able to spam the network and reach almost the maximum block elasticity, so I was able to target 38-million-gas blocks, and everything was fine. So this is pretty cool. That's it. That's great. You mentioned you're working on the latest changes of the spec — do you know where the Geth implementation and the Nethermind implementation are at with regard to that? I think someone from the Nethermind team is on the call, so maybe they can give an update about Nethermind. As for Geth, Vulcanize is still investigating the consensus issue, and I'm giving them some help using the transaction sender tool to try to reproduce it. Okay. And do you know about Nethermind? Yeah, I said that I think there is someone... I don't think so, actually — no one from Nethermind this time. Oh, sorry. Okay. Sorry. So yeah, Nethermind, they are also aligned on the base fee computation, and I saw in the chat that they already removed the transition period, so when I deploy the Besu nodes we should be able to sync again. Great. I thought someone jumped in to say something while you were talking. No? Okay. Cool. Barnabé, do you want to give a quick update on the R&D side? Sure. Yeah, thanks, Tim. So recently I've published a notebook on strategic users — that was the latest public release. The idea behind it was to look at this idea that, well, 1559 is useless because users will keep competing on the tip anyway.
And I think what the notebook really shows is that you can have this sort of strategic behavior, but it doesn't last very long when the network is not subject to wide shifts in demand, which is most of the time. So yeah, that was published; maybe I can drop a link in the chat after. That would be great. We've been working with Fred, who's on the call, looking at the transition period out of legacy transactions and into 1559 — trying to model it, trying to simulate it, and even trying to look at an idea that was floated around the Discord channel to have some kind of tax on legacy transactions where the tax increases over time, which is the stick to the carrot of making users shift out of legacy transactions and into 1559. So we intend to model that. Then another notebook on the floating escalator, the combination between 1559 and the escalator, trying to understand a bit better what it looks like. I know the escalator hasn't really been talked about for some time, and I feel the consensus is more like, okay, we should just go ahead with 1559 and not really bother with the escalator. But I think it's still interesting in terms of research, even as an extension to the strategic behavior notebook. So that's under review and should be published fairly soon. And the last one I've been working on, which I think is quite nice: where the strategic users notebook tackles the idea that 1559 is just going to degenerate into a first-price auction, the learning users notebook tries to tackle the idea that 1559 is a UX improvement. I think this is not really understood well by users or by whoever is looking at 1559 — what do we mean by UX improvement exactly? What this learning agent notebook tries to show is that over time, agents learn to either take the price that 1559 is giving them.
So basically the base fee — or leave, and not even enter the queue. And in that sense it's a UX improvement, because over time you learn to just become a price taker. The market says, okay, it's a hundred gwei to get in now, take it or leave it, and that's it most of the time. So you see over time with these learning users that after a while they understand, okay, I should probably just either take it or leave it. And you can really see this idea of UX improvement dynamically appearing just from the interactions of the users. So I think it's quite interesting. And then related to that — I think I've discussed this in the Discord channel — looking at wallet defaults. The idea is that most of the time you're a price taker, but sometimes the base fee shifts very rapidly; say, you have Uniswap launching their token or something. Then you might not want to be a price taker anymore — you might want to revert to the strategic behavior I look at in the first notebook. And in that case, you probably also want your wallet to shift from this price taker mode to a mode that gives you more flexibility to say, no, I really want this transaction to go in quickly, so I'm willing to pay a much higher premium. So when should that be? When should you switch from one mode to the other, and what should the defaults be in the wallet? Most likely the defaults would look like what you currently have in MetaMask — fast, medium, slow, something like that. But how should we set these parameters? That's kind of where I'm at at the moment. Yeah. That's great — that's a lot. I had a couple of questions. The first one is around the transition period: do you think that still makes sense now that we've removed it from the EIP with Micah's recent PR? I should have specified that the transition period we're looking at is Micah's PR.
So I'm not looking at the previous model of the transition. Yeah, yeah. I'm looking at Micah's, where you cast the legacy transactions into 1559. Okay, okay. Yeah. Okay. Got it. Cool. Anyone have thoughts, comments, questions? Okay, in that case I can share my screen real quick. So Pooja, myself and a couple other folks from the Cat Herders spent the past few weeks reaching out to a bunch of projects to get their thoughts on 1559. There was a lot of feedback, and we shared a report detailing most of it. I'm not sure most of it is relevant for this call, but the bit around implementation really is. So I was curious to get people's thoughts about how we could address the things people mentioned would help them prioritize 1559 support. We asked projects, you know, what would make your life as easy as possible to support this? And the thing that came up most often was having a public testnet, especially one that's suitable for end-user applications to use — so, one with JSON-RPC support for 1559. It was also mentioned that it would be great if this was standardized across clients, so that there aren't any major differences between — sorry, go ahead. Yeah, I would suggest something about that. Instead of implementing the RPC endpoints in each client, I would suggest that we implement only one microservice dedicated to that, which will take 1559 transaction parameters, create and sign the transactions, and submit them to an Ethereum client. Unless we think we will have it in production on mainnet, but I'm not sure. I think we can leverage that and avoid every client implementing it. So that would work for sending, but would it also work for reading transactions? Because I think that was one of the other concerns that came out — just being able to query the transactions and whatnot on the network. I see what you mean.
Yeah, like how do you expose them right now in the block explorer? Basically — oh yeah, maybe we can update the front end and implement the decoding logic. Actually, yeah, that would be easier, to display the 1559 transaction parameters directly in the explorer. Yeah, but I guess what I'm wondering is, how does the explorer get the data from Besu right now? How does it query it? Oh, I don't remember the exact endpoint, but yeah. Because I feel like if there's something already in Besu that at least the block explorer we have uses, maybe that's a good starting point for something we can standardize across clients and just make a bit more explicit. So it might be worth looking into that. Okay, yeah, I will do that. Sorry, just taking a quick note. Yeah, and then obviously the other thing people mentioned was having it be part of a network upgrade — I'm not sure we're quite there yet. And then this might be interesting for you, Barnabé: a couple of projects mentioned that if there was any incentive, specifically with regard to gas prices, to use 1559, they would prioritize it. I think Micah's PR gets us at least half of the way there, right? If you keep using legacy transactions, you'll just pay a higher tip to the miners, so the converse is that if you use 1559, you'll pay a lower price. I think that's maybe sufficient to start, but I'm curious if other people have thoughts about that. Do we understand from that that the project is incentivized to implement 1559 so that its users benefit from that? Yeah, okay. Yeah, and that was the common theme: the projects who are most willing to implement 1559 as soon as possible are projects who really care about their users' gas price experience.
So I think that, yeah, having the end users of someone like Argent or Gitcoin be able to pay a lower gas price was a good motivation for them. Okay, yeah, that makes sense. Not incentives as in paying them to implement it. No, no — it's the transactions themselves. All right. Yeah. And the other thing that was mentioned is that having the basic libraries, so ethers.js and web3.js, support this as soon as possible would help, because a lot of projects basically just rely on those. The ethers.js maintainer said it should be pretty easy for him to add support. The other thing that was mentioned is having a clear opcode definition. A lot of projects' smart contracts rely on transaction.gasprice, and I think we need to understand the implications of changing that. Right now, from what I understand, the change that was made would only affect 1559-style transactions, which shouldn't break anything that exists. But I don't know if there are some weird second-order effects for contract developers from the opcode changing what it returns based on the type of transaction. I don't know if anyone has thoughts on that. I believe we're pretty safe on that front. The way we ended up setting gas price for 1559 transactions makes it basically still the same thing: it means this is the gas price that the user paid. The one caveat is that previously, for legacy transactions, the gas price a user paid and the gas price a miner received were the same. In 1559, the gas price a user paid and the gas price a miner received are different. So previously, the gas price opcode could theoretically have been used to identify how much a miner received for a transaction, and also how much the user paid for it. With 1559, it only represents how much the user paid for the transaction.
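The effective-price semantics described here — the gas price opcode reporting what the user actually pays — come down to a one-line rule stated earlier on the call. A minimal sketch, using the call's own terminology for the two new fields:

```python
GWEI = 10**9

def effective_gas_price(fee_cap, miner_bribe, base_fee):
    # The price per gas the user actually pays: base fee plus the miner's
    # tip, but never more than the fee cap the user signed.
    return min(fee_cap, base_fee + miner_bribe)
```

The miner receives only `effective_gas_price - base_fee`, which is why the opcode can no longer double as "how much the miner got paid".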
I don't know of any applications that care about how much the miner got paid; there are many that care about how much the user paid, and that's why I went with that. Is that worth adding to the security considerations section of the EIP? I feel like — the backwards compatibility section. Yes, send me a message after this and I'll go ahead. Okay, I'll write a note for that. I feel like somebody might look at that and find something with it, but that makes sense. And I guess the other thing we discussed in the past is the base fee opcode — that's not part of the EIP, right? It is not. And there is a push — no, there's a push currently from the core devs, for various reasons, to actually get rid of gas inspectability in general from the EVM, and so that would probably hurt our chances of inclusion if we're adding things that let people inspect gas stuff. Okay, so right now the only way to get the base fee is from the block header, right? Yeah, so you could prove it on chain: you could get the transaction proof and then prove it based on the block hash if you really wanted to. But that can only be done afterwards, so it'd be the next block that you could do that. Yeah, and I think that relates to the next point: people would like to see an API that tells you what the base fee will be for the next block. So you take the previous block, you calculate how full it was, and from that you estimate the next block's base fee. I'm not sure this falls within the scope of people in this group, but something like that — a sort of ETH Gas Station-like API that just does that math for them — is something people mentioned would make it easier for them to add support for 1559. I don't think that'd be particularly difficult. Yeah. And do they want that from the clients? No, no — they just want a place they can go on the internet.
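The estimation described here — look at the previous block's fullness, derive the next base fee, hand the user sensible defaults — could be sketched as follows. The 2x fee cap headroom and the flat 1 gwei bribe are purely illustrative assumptions, not anything agreed on the call.

```python
GWEI = 10**9

def fee_suggestion(base_fee, gas_used, gas_target):
    """Sketch of an ETH Gas Station-style endpoint: estimate the next
    block's base fee from the latest block's fullness, then derive default
    1559 transaction parameters from it."""
    if gas_used > gas_target:
        nxt = base_fee + max(base_fee * (gas_used - gas_target) // gas_target // 8, 1)
    elif gas_used < gas_target:
        nxt = base_fee - base_fee * (gas_target - gas_used) // gas_target // 8
    else:
        nxt = base_fee
    return {
        "next_base_fee": nxt,
        "fee_cap": 2 * nxt,       # headroom in case the base fee keeps rising
        "miner_bribe": 1 * GWEI,  # flat default tip (assumed)
    }
```

A hosted service would fetch `base_fee`, `gas_used` and the gas target from the latest block header over JSON-RPC and serve this dict.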
A place they can go on the internet. And yeah, this call is recorded and will be uploaded to YouTube, so maybe somebody picks this up — ETH Gas Station, if you're listening. Yeah, that was brought up. And then the rest was pretty standard: having good documentation, like we just mentioned, around the opcodes and explaining what the changes in behavior are, communicating changes to the EIP, and having channels for support. I think the Discord has been a decent place so far; if the volume grows, we can maybe move support to some other place. But yeah, that was the list of what would help various projects implement the EIP. One thing that was nice in this survey as well is that there was a pretty smooth distribution of when projects would want to start working on the EIP. So I feel like as this develops, we'll get more and more users who are slowly trickling in and are interested. It's nice to start with a smaller batch of people who are very interested in this and want to see it done ASAP, and then slowly reach out to more projects. So that's basically what I had on that. The last thing I wanted to bring up was this mainnet readiness checklist. A lot of people in the community have been asking for a date for 1559, and that's obviously impossible to give. The other approach is to give them a list of things to do and update it as we learn new information and make progress. So in short, we'd need all clients to have an implementation. Right now, Geth, Besu and Nethermind are working on it. Nethermind, I believe, is still hiring someone to do this — so if you're watching this and you're interested, you can click the link and apply for the job.
Open Ethereum and TurboGeth are fine with joining the implementation later. I've talked with them, and I think they don't have as much interest in implementing every incremental version of the EIP, but once it's actually done and settled, it shouldn't be a major challenge for them to implement, especially with the recent changes to the transaction pool and whatnot that make it a bit simpler for clients. In terms of open issues, I think the biggest one — we discussed this on the last All Core Devs — is the denial of service risk on mainnet. This is something I don't think EIP 1559 can address head-on, and there are a couple of efforts underway to address it. There's EIP 2929; Geth is working on snapshot sync; Besu is working on another flat database approach that makes these denial of service attacks less likely; TurboGeth is optimized from the start to deal with that as well. So again, I don't think 1559 can directly address it. When I asked about changing the block elasticity limit, people didn't seem to think that would make a big enough difference — going to 1.5x instead of 2x didn't seem like it would make 1559 much more likely to be adopted sooner. It's really more about having client-level databases that store the state in a flat format instead of a trie, and everything that goes around that. I don't think it should have a major impact in terms of timelines: given that there's still work left on 1559, it won't be in the Berlin upgrade, and it should probably land in the upgrade after that, which also gives clients time to work on this. Then the next item, transaction pool management, is basically moot due to Micah's PR, so I'll update that. The transaction encoding and decoding was the other big question. And I know, Abdel, you've mentioned in the past that EIP 2718 would make this easier. I'm not sure what the status actually is on 2718.
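For context on why EIP 2718 simplifies the encoding question: it wraps every new transaction format in a typed envelope — one type byte followed by an opaque, type-defined payload — while legacy transactions keep their plain RLP encoding. A minimal sketch (the type value `2` in the test is just an example, not a number fixed by the spec at the time of this call):

```python
def encode_typed_transaction(tx_type: int, payload: bytes) -> bytes:
    """EIP-2718 typed transaction envelope: a single type byte followed by
    an opaque payload whose format is defined per type. The RLP encoding of
    a legacy transaction (a list) always starts at byte 0xc0 or above, so
    low type bytes cannot be confused with a legacy transaction."""
    assert 0 <= tx_type <= 0x7f, "the EIP reserves the type range 0x00-0x7f"
    return bytes([tx_type]) + payload

def is_legacy(encoded: bytes) -> bool:
    # Legacy transactions are RLP lists: first byte >= 0xc0.
    return encoded[0] >= 0xc0
```

This is what lets 1559 add fields without clients having to guess a transaction's shape from its RLP structure.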
It seems like it's kind of in limbo for Berlin; I don't know if anyone has a better view on it than me. It's in limbo for Berlin, but it almost certainly will go in with or prior to 1559. I don't see any reasonable path where it doesn't go in — there are enough things depending on it that it's going to go in either Berlin or right after. Okay. And does it make sense to keep doing what we're doing for now, and once it's accepted, adapt 1559 to support it? I don't think so. If it were me, I would just switch everything over to 2718 so we don't have to deal with it later. I think the odds of 2718 not going in are so low that we should just move forward with it, personally, but I'm not an implementer, so. If we don't want to delay security audits and all that stuff, validation of the economic model, we could deploy the public testnet with the actual implementation, because the typed transaction envelope doesn't change any of those results, and on the integration testnet we could start implementing it. Maybe we can do this — have two versions at once. So once we have a more public testnet, then we get to that? Yeah. I think maybe that makes sense. It also gives us a couple of weeks to see what happens on the core dev side, and if it gets accepted on the next Core Devs call, which is next week, I believe, then it'll be a bit clearer where things are at. Okay. And then the last thing was the transition period. I guess your PR, Micah, means that there's no more transition period at all, correct? We just convert legacy transactions to 1559 — or interpret them, sorry, as 1559 — and we allow that forever. Yeah, where forever means TBD. We currently have no built mechanism for getting rid of them, at least in 1559, but some future EIP probably will, maybe. Yeah. Cool, that makes sense.
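One natural way to do the "interpret legacy as 1559" cast described here — and this is a sketch of my reading of the PR as summarized on the call, with illustrative field names — is to set both new fields to the signed gas price. The user then never pays more per gas than they signed for, and whatever is left over after the base fee goes to the miner as tip:

```python
def cast_legacy_to_1559(gas_price):
    """Interpret a legacy transaction as a 1559-style one (sketch).
    fee_cap bounds the total price; miner_bribe is the tip ceiling. Setting
    both to the legacy gas price preserves the user's signed maximum."""
    return {"fee_cap": gas_price, "miner_bribe": gas_price}
```

Under this cast a legacy sender whose gas price sits well above the base fee tips the miner the whole surplus, which is exactly the "legacy users pay a higher tip" incentive mentioned earlier in the call.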
I think there is another thing that came out of the Discord channel: replace-by-fee. Oh, yeah. I don't know if we want to talk about that now. I think Micah has some ideas about how to deal with that, adding some transaction parameters. Can you explain that quickly, Micah? Sure. So there are a few options — I think I lost track, so Barnabé may know more — but the first question is that we need to establish exactly what everybody expects from replace-by-fee protection. If you just do replace-by-fee naively and say, hey, as long as the fee is higher you can replace it, then you can replace a transaction with one wei — a one-wei gas price increase. That's effectively a no-op, but it will force the whole network to propagate your transaction again. This is the denial of service attack vector, where you can just bump the transaction by insignificant amounts forever and keep hammering the network, and the network will continue to accept your transactions. So we want to avoid that. The problem is that with 1559, if you bump just the miner bribe, there's no guarantee that the whole miner bribe is taken, because you could be hitting the fee cap. In fact, it's most likely that if your transaction is pending for more than a block, you are blocked by the fee cap, not by your miner bribe. So if you're just bumping the miner bribe, we end up with the same situation, where someone can just keep bumping their miner bribe without actually changing their transaction at all — they're not paying any more — and so there are concerns about a denial of service attack vector there. If you just bump the fee cap, similarly, if your transaction is pending because of your miner bribe, then that also does nothing: you pay the base fee, and if the base fee is below your fee cap, you can bump the cap to 40 million and you're still going to pay the base fee. So bumping that doesn't actually change anything either.
So the next option is, well, what if we bump both? And bumping both I think does work in most scenarios. There are some very edge cases where it's possible for your actual fee not to change when you bump both, but arguably we don't care that much about those edge cases, because they're not really strong denial of service vectors, and as long as we have a minimum increase, it also doesn't matter too much. The last option is to just say that nodes will not propagate any transaction whose fee cap — maybe fee cap plus miner bribe, not sure, but probably just fee cap — is less than the current base fee. This is a novel idea that we probably need to spend a little more time thinking about, but in theory, if we did this, then all transactions being propagated should be able to be included in a block almost immediately. The only reason they couldn't be included is potentially that their miner bribe is too low. What this tells us, though — and we also need to talk about this — is that if the miner bribe is, say, zero, we currently don't have a mechanism for pushing that out of the pending pool. It is possible to set a miner bribe that is below what every miner is accepting, but have a fee cap that is higher than the base fee. Should that transaction be allowed to propagate? And if so, how do we define the minimum miner bribe for a transaction to propagate? Do we do it like we do currently, where every node in the network has a propagation variable saying we're willing to propagate any transaction that has a miner bribe this high or higher? That's probably the simplest solution, and we just hope those are generally set in line with miners. These are all the things to think about and discuss.
I'm currently favoring that last option, where we say nodes will not propagate any transaction that has a fee cap lower than the current block's base fee, plus a minimum miner bribe set per node at startup. So each node can define its minimum miner bribe, and it'll propagate everything else. So we don't propagate those transactions on the P2P layer, but we do accept them on the RPC endpoint if the price is below the base fee? Yeah, that would be my assumption. That way your local node will always accept your transactions from you — if you're talking directly to your own node, it's going to accept everything, just like it does right now, I believe. Okay, that's not the case in the Besu implementation — we reject those transactions — but yeah, I will update the implementation. Okay. I believe the other clients, at least Open Ethereum and Geth, and I'm pretty sure Nethermind, will accept anything if you're talking directly to the RPC, because they treat you as kind of a privileged user when it comes to what they accept. And I think they all actually have a separate pending pool of sorts, where transactions are protected from being ejected from the pending pool on that node if the node received them over RPC rather than over the P2P. There are some rules — there is a minimum gas price and also a minimum bump percentage — but if the transaction complies with those rules, it will be accepted. This is what we do for legacy transactions, but we implemented a different behavior for the 1559 transactions. But okay, I see what you mean. So like I said, my current preference is that we basically just don't propagate anything that has a miner bribe the node thinks is too low or a fee cap the node thinks is too low.
And then we can allow basically almost any strategy for fee bumping, for replace-by-fee, because the things that are being propagated are all things that should probably be mined next block. It is very likely that the thing being propagated is going to be mined very soon, because the base fee is high enough and the miner bribe is high enough. Okay, that makes sense. Thank you. Do other people have thoughts on strategies there? Wait, you could still increase your fee cap indefinitely even if it's above the base fee, right? I agree with this idea that you drop transactions where the fee cap is lower than the base fee, but how does that alone prevent me from sending a thousand transactions with just a little bump in the fee cap every time? You still need a bump rule for the bribe and fee cap? Yeah. So I think we still do need a minimum percentage, just like we have currently on the network, which is 12.5% for Geth and Open Ethereum, I think. But I think it matters less whether that's a fee cap bump or a bribe bump — or both. If we're kicking out transactions that aren't likely to be mined soon and we have something the user has to keep increasing — I guess that does have to be the miner bribe, doesn't it? Because if it's the fee cap, then they can spam. Yeah, you still need it. Yeah, okay. It's fine to have the base fee lower than your fee cap, but you still need some kind of bump. So, to sum up: the miner bribe has to be bumped by some percentage — say we keep the same 12.5% if that's easy — and the fee cap can stay the same, but if the fee cap of a transaction is lower than the last block's base fee, then don't propagate it over the network. That sounds good.
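The rule converged on above can be condensed into two small predicates. This is a sketch with assumed field names, not any client's actual pool logic:

```python
def should_propagate(tx, last_base_fee, node_min_bribe):
    """Don't relay a transaction whose fee cap is below the last block's
    base fee, or whose miner bribe is below this node's configured minimum
    (the per-node startup variable discussed above)."""
    return tx["fee_cap"] >= last_base_fee and tx["miner_bribe"] >= node_min_bribe

def is_valid_replacement(old_tx, new_tx, min_bump_pct=12.5):
    """Replace-by-fee: the miner bribe must grow by at least the minimum
    percentage (12.5% mirrors current client defaults); the fee cap is
    allowed to stay the same."""
    return new_tx["miner_bribe"] >= old_tx["miner_bribe"] * (1 + min_bump_pct / 100)
```

The anti-spam property comes from the combination: a replacement only travels the network if it clears both the bribe bump and the fee-cap-vs-base-fee filter.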
So I suspect as soon as we tell this to the Ethereum client developers, they're going to grumble that the devp2p layer currently isn't synced with blocks, for lack of a better term. Because of rollbacks and whatnot, the devp2p layer doesn't really know what the current state of the network is. Client implementers historically have been pretty loath to create a dependency there, where the P2P layer needs to understand the state of the network, because you can get out of sync — two clients can disagree on the current state of the network. So client A will say, hey, I've got a new transaction; it's got a fee cap that matches or is higher than the base fee. The node you're sending it to, however, sees a different view of the network, and so it says, no it doesn't — you're lying to me and you're now a bad peer. And then we have the problem of how we tell whether a peer is bad or just has a different view of the network. I think for that reason, historically, the P2P layer has not correlated with blocks at all; clients try really hard not to care about the current state of the network there. Yeah, that feels like it would make things much more complicated if we needed to change the statefulness of the devp2p protocol. But there are higher layers — you can do that in the transaction pool or something like that. You can flag the transaction as not eligible for propagation on the P2P network. I think it's manageable. Okay, yeah. At least it is in Besu; I don't know about Geth, but yeah. Yeah, I guess what I'm saying is — similarly to how, Micah, you mentioned that adding the base fee opcode goes against the current of the core devs with regard to gas observability —
I would try to keep things somewhat philosophically compatible with devp2p. But if we can do that verification just at the client level, before we propagate, I think that makes sense. I think it works as long as it's not a condition for flagging a peer as bad. I believe the clients all have mechanisms for flagging peers as bad and eventually disconnecting from them. We would need to make this a condition where you say: thank you for the transaction, I still trust you, but I reject your transaction. And I don't know if we have that concept anywhere else at the moment. Usually either you receive something that is valid and you can assert, this is good, thank you — or you receive something that is bad, in which case you say, you're a liar and I'm kicking you off the network, or I'm disconnecting from you. One of the client devs might know better; I don't know if we have anything that's kind of wishy-washy like that right now. It's too bad we don't have the Geth people on the call — we should have some next time. Yeah, we can follow up offline with them and with other client devs to see what they think. But clearly the whole replace-by-fee question is a big open one we still need to figure out. Okay, okay. I want to point out as well that either way, there's going to be some amount of complexity. Even if you don't want statefulness, when you manage your transaction queue and you want to check — if you don't have a rule, for instance, that says refuse any transaction whose fee cap is not high enough — you still need to look at your transaction queue and update the order based on where the base fee is and how that might change the actual tip you receive as a miner. So at some point, I think you do need to take into account the fact that the base fee is moving and that the transactions' validity depends on that as well.
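The missing "thank you, but no" concept described here amounts to a three-way verdict instead of the current accept/penalize binary. A sketch, with assumed names, of what a pool-level check that never blames the peer for fee disagreements could look like:

```python
from enum import Enum, auto

class Verdict(Enum):
    ACCEPT = auto()            # pool the transaction and propagate it
    REJECT_KEEP_PEER = auto()  # drop the tx; the peer may just lag a block
    REJECT_BAD_PEER = auto()   # objectively invalid: score the peer down

def judge(tx, last_base_fee, signature_ok):
    """Fee-based rejections depend on this node's view of the chain head,
    so they must not mark the sending peer as bad; only objective
    invalidity (e.g. a bad signature) should penalize the peer."""
    if not signature_ok:
        return Verdict.REJECT_BAD_PEER
    if tx["fee_cap"] < last_base_fee:
        return Verdict.REJECT_KEEP_PEER
    return Verdict.ACCEPT
```

The key design point is that `REJECT_KEEP_PEER` feeds into the transaction pool only, never into the peer-scoring machinery.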
But we can do that at the client level, right? Like we don't need to do that over the P2P layer. And also you can manage the delta, because you know that the base fee can go up or down by up to one eighth per block. So you can have an idea of how many blocks it would take, in the best case, for the base fee to catch up with the transaction's price, and you can evict or reject transactions that are really far from the base fee. Yeah, I guess you can have many different strategies as a client. Yeah. But so devp2p is not considered part of the client? I understand that. Okay. Yeah, that's what I was missing, thanks. So it is part of the client, but it's a different spec. It's a — yeah, a different protocol, yeah. Yeah, whereas the transaction pool is kind of left open — there are no rules about what clients have to do with it. Each client can do whatever they want, and they don't need to agree with each other about how they handle it. Okay. Even though in practice most of the behavior ends up being the same, at least we don't have to write a spec for it that says this is how the transaction pool works. So this is what makes it easier to do it there than in devp2p. Okay, yeah, that's fine. Okay, cool. So I'll add that and try to summarize this conversation in the open issues. A couple of other things that were on the list: testing in general, which I think we haven't spent much time on. I know Abdel, you mentioned we should maybe start thinking about reference tests and whatnot. I'm not sure if the EIP is stable enough for that yet — what are your thoughts on that? I think, for example, the base fee computation is stable enough to start some kind of reference tests, because otherwise each client team will implement it independently.
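The "manage the delta" idea above — using the one-eighth-per-block bound to evict transactions whose fee cap is hopelessly far below the base fee — can be sketched like this. The function names and the horizon cutoff are illustrative, not from any client; the only grounded fact is that the base fee can fall by at most 1/8 per block, so the lowest it can be after n blocks is `base_fee * (7/8)**n`.

```python
# Hypothetical eviction heuristic based on the 1/8-per-block base fee bound.
import math

def blocks_until_possibly_eligible(base_fee: int, fee_cap: int) -> int:
    """Best-case block count before fee_cap could cover the base fee,
    i.e. the smallest n with base_fee * (7/8)**n <= fee_cap."""
    if fee_cap >= base_fee:
        return 0
    return math.ceil(math.log(fee_cap / base_fee) / math.log(7 / 8))

def should_evict(base_fee: int, fee_cap: int, horizon_blocks: int = 20) -> bool:
    # Drop transactions that could not become valid within the horizon,
    # even if every coming block were empty. The horizon is a free parameter.
    return blocks_until_possibly_eligible(base_fee, fee_cap) > horizon_blocks
```

This is one strategy among the "many different strategies" mentioned; since the pool is unspecified, each client is free to pick its own cutoff.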
Yeah, otherwise we will not leverage the work, and it would be good to have these kinds of tests to ensure that. It will also help other teams when they want to implement the spec — say, when TurboGeth and OpenEthereum want to implement the EIP, that will help them as well. So yeah. Yeah, I think we can probably start slowly adding some, and like you mentioned, on the parts that are finalized. Yeah. And then, yeah, we kind of discussed this already — the community testing, basically the JSON-RPC. Abdel, you said in the chat, I think, that right now the block explorers are using eth_getTransactionByHash — so does that already support 1559? No. Sorry, so I guess I don't understand — how does the block explorer get the transaction information? Currently, the block explorer only displays the legacy transaction fields. So, for example, you see a zero gas price for... Okay, got it. So we need the... Yeah, we need to update this endpoint to add the miner bribe and probably even the base fee. So the base fee is in the block header though, right? Yeah, yeah, but we have the block hash in the response of this endpoint, so we can query it to retrieve the block header. And I think what would probably be best there is to just come up with a spec that we, Vulcanize, and Nethermind agree on before we implement it. Because, again, that came up — like I know with a lot of the tracing APIs and whatnot, clients have very different behaviors. And as part of the 1559 conversations that came up: it would be great if the behavior here was the same. So I think, yeah, if no one has super strong divergent opinions, we should just come up with a spec and do that. So currently, for legacy transactions, we have exactly the same output. So yeah, we can do the same for the two new parameters — just align on the names, and we can take the names from the spec. So it will be this. Okay, cool.
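Coming back to the reference tests for the base fee computation mentioned a moment ago: a sketch of the update rule those shared vectors would pin down might look like the following. The denominator of 8 matches the one-eighth bound discussed earlier, though the exact constants were still in flux at the time of this call, so treat this as illustrative rather than normative.

```python
# Sketch of the EIP-1559 base fee update rule, as a basis for cross-client
# reference tests. Constants may differ from the draft under discussion.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(parent_base_fee: int, parent_gas_used: int, gas_target: int) -> int:
    if parent_gas_used == gas_target:
        return parent_base_fee
    if parent_gas_used > gas_target:
        delta = parent_gas_used - gas_target
        # Increase is bounded below by 1 wei so the base fee can't stall.
        increase = max(1, parent_base_fee * delta // gas_target
                          // BASE_FEE_MAX_CHANGE_DENOMINATOR)
        return parent_base_fee + increase
    delta = gas_target - parent_gas_used
    decrease = parent_base_fee * delta // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return parent_base_fee - decrease

# Reference vectors every client should reproduce identically:
# at target -> unchanged; full block (2x target) -> +1/8; empty block -> -1/8.
```

Vectors like these are exactly the kind of thing each team would otherwise re-derive independently, which is the duplication of work Abdel is pointing at.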
So just so I understand, the idea is eth_getTransactionByHash will still work as normal — it's just that it will also include 1559 transactions, and those will have a couple of different fields. Yeah, exactly. Yeah. And yeah, I guess let's just ask other clients before we commit to that, but that seems reasonable to me. And would that also cover eth_getBlockByHash with the true flag for full blocks? The one that returns all transactions. I think it's eth_getBlockByHash — I believe that's the one. Would that also do it? Yes. Okay. I believe there are only two that return transactions. Is that correct? Yeah, I believe so. Yeah. Okay. Do we have plans at the moment to introduce or support 1559 transactions for eth_sendTransaction? So this is what I talked about earlier. My opinion on that: if this is only for testnet, I would suggest that we implement a common service for it and just deploy it in the same infrastructure as the testnet, so that wallet providers and people can start playing with it without waiting for MetaMask or web3.js to add the new fields. And, yeah, if you want to use that on mainnet, you will have to implement a new endpoint to submit a 1559 transaction, unless you use an external signer. Would it make sense to have eth_sendTransaction just support either 1559 or legacy transactions? Yes, that would be best. You make some fields optional, and yeah. It would probably be good, Tim, to make sure someone's tasked with actually writing the specs for those three. It'll be three new EIPs. Oh, so the changes to the JSON-RPC have to be separate EIPs? Yeah, well, they should — I mean, you don't have to do anything, of course.
Ideally, yes, there'd be an EIP for each of them that specifies the changes being made to the JSON-RPC, and then from there clients can implement it, wallet providers can implement it, and MetaMask can implement it. I thought the EIPs were out of scope, no? Oh, okay — they are not core EIPs, okay. Yeah, okay. Yeah, they're not core EIPs, they'd be interface EIPs. Yeah, okay. Okay, so we basically need one EIP per JSON-RPC call, right? Yeah, and I believe there are three of them that we need: eth_getTransactionByHash. Yeah. eth_getBlockByHash, and eth_sendTransaction. There might be eth_getBlockByNumber as well. Yeah, because you want to add the base fee there as well. Yeah, eth_getBlockByHash and eth_getBlockByNumber as well, because you need to add the base fee to the header for 1559 blocks, yeah. And how about eth_sendRawTransaction? Does that have to change as well? No, that would not have to change, because you're just sending a byte array. It's already signed. Okay. Input and output stay the same. Okay, so there are four of them, and we need an EIP for each of those. Ideally — people in the past have done one monolithic one; as an editor, I recommend separate ones, they go through smoother. Okay, so unless somebody on the call right now wants to commit to it, I can follow up on that. I guess, yeah, I'm just a bit cautious because Abdel is the one person here, and I don't want to just throw it on him, but yeah. I mean, I can take some — yeah, it would be my occasion to write my first EIP. Okay, I mean, sure, if you want to do it, yeah. Okay, great, so Abdel, we'll have you write those EIPs. Okay, nice. Cool.
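As a sketch of what one of those interface EIPs might end up specifying, here is a possible shape for an extended eth_getTransactionByHash response. The new field names ("feeCap", "gasPremium") are placeholders borrowed from the 1559 spec's terminology, not a finalized RPC schema, and the hash values are stand-ins.

```python
# Hypothetical extended eth_getTransactionByHash response for a 1559
# transaction; field names are illustrative, pending the interface EIPs.
example_1559_tx_response = {
    "hash": "0xaaaa",       # stand-in values, quantities hex-encoded per RPC convention
    "nonce": "0x0",
    "blockHash": "0xbbbb",  # lets a caller fetch the header for the base fee
    "gas": "0x5208",
    # Legacy field retained; per the merged gas-price PR, this reports the
    # effective price the user pays: min(feeCap, baseFee + gasPremium).
    "gasPrice": "0x3b9aca00",
    # New 1559 fields (names hypothetical):
    "feeCap": "0x77359400",
    "gasPremium": "0x3b9aca00",
}
```

Keeping the legacy `gasPrice` field populated with the effective price is what lets existing block explorers keep working while the new fields roll out.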
Okay, so yeah, I think this covers JSON-RPC. Public testnet — we already kind of covered it; I think it's dependent on having the JSON-RPC a bit more fleshed out. And the other bit, in terms of testnets: I'm curious — right now we have the PoA network. It seems like there are some small changes to make to the spec before everybody's all syncing and happy there. Is it worth starting to discuss a proof-of-work network now, or do we still need a couple of weeks before that because of the changes, and because, I know, Geth is still having the consensus issue on the PoA network? Is Nethermind already done? Yeah, I think Nethermind and us are syncing. Maybe with some recent changes to the EIP there are some small tweaks to do, but I suspect within the next week, yeah, Nethermind at least should be on the same network and up and running. I don't know if Nethermind supports mining. Not yet. At all, or 1559 mining? No, not 1559. Okay, so I think that's something we probably want — at least having more than one client support mining before we launch a proof-of-work testnet, so we can actually try to mine blocks in two different clients and make sure that they all come out the same and work. Yeah, and in parallel, maybe we can start — because we were also thinking about launching a single-client testnet. So that could be a single-client proof-of-work testnet, and that would be the candidate to add other clients to next; for example, the one to validate the economic model. And do you think we could do that with Besu already? Is there anything we need to change in Besu to do that? No, that should be fine. Yeah, we could do that. Okay, so maybe it makes sense, yeah, to just start a small Besu proof-of-work testnet to make sure at least everything works, we can produce blocks, and we can run your transaction generation script on it.
And in the meantime, we'll see over the next couple of weeks how other clients get ready and what the extent of their mining support is. Cool. And then, yeah, just worth mentioning, I guess: Nethermind was using 1559 as part of, I believe, a private network, or a network they're working on with one of their clients. So I think they might have some data to share on that in the next few weeks or months. And then Filecoin and Celo both have 1559. The Filecoin devs have joined our Slack — or Discord, sorry. So yeah, if people have questions for them about how the network has gone, they're there and they can answer. And I guess, yeah, in terms of R&D, the biggest thing that also came out in the survey with the community is the lack of a proper — not even an economic analysis of the EIP, but just a proper description of the mechanism. Because a lot of people's concern, when they were opposed to the EIP, was that there's not even something to critique, right? There's just this EIP which specifies the behavior, but it doesn't express the intuition behind it. I'm not sure who could help with that, but to me that feels like something that would be valuable: a sort of — I don't have a background in economics, so I don't know how this stuff is usually done — a sort of econ-spec version of the EIP that explains why this will actually be better. I know that Tim Roughgarden is working on a comparison of 1559 versus our current model, so I'm not sure how much of it will be covered by that. But I don't know if anyone here has thoughts about how that can be done, or ways to address those concerns about not having something that specifies the economic properties of the mechanism that can be shared broadly. Yeah. I mean, I can say that the paper where Vitalik introduces 1559 has some motivation and some modeling.
And I think it's been a bit overlooked by people who say there's been no economic analysis. That's where it comes from first. Then the EIP was written, which arguably has less, let's say, economic — or at least micro-economic — motivation in it. And then Tim Roughgarden — I think his angle is really to say, well, what do we gain by having EIP-1559? How does it change things? Why is it better than the current model that we have? And how do we even quantify what better means, right? So, yeah, I do expect that his report will be very enlightening in terms of framing it. But as I said before in the Discord, I don't expect it will be a flat yes, we should do it, or no, we shouldn't. It's really more: what is even the correct way to think about this? What are the metrics we care about? What do we mean when we say, say, UX improvement, and these sorts of things? Yeah. Yeah, and I think that's good, actually. I don't think people are looking for a justification as much as a description. And I think it's probably easiest to describe by contrasting with what we have today. Do you have a link to Vitalik's paper? If you can send it in the chat, I'll add it to that list there. Yeah. Cool. And then the last bit — I guess, Barnabé, I can link some of your notebooks here, but in terms of simulations, you mentioned all the stuff you're working on right now. Is there anything you still think is missing after that? Are there other big areas you'd want us to have simulations on that you think we haven't addressed yet or haven't had the bandwidth to start working on? Right. So, I mean, there are a few things I discussed at the very beginning of the call, yeah, which is more what I'm working on. One big chunk that I left out, almost by design, is this idea of miner collusion. Yeah. It's something that we do plan to simulate, or at least try to get a broader understanding of what the behavior is.
The reason I'm not focusing on this at the moment is that I do think the analysis by Tim will be at least a useful starting point. Okay. I mean, it's kind of trivial to define something where it fails or succeeds automatically, but I think it's not going to bring much to the discussion. So yeah. On that, yeah, I think I should probably help you fill in that TBA, because it looks like there's nothing there — but yeah, I can send you something. Okay, that's great. Yeah, and I'll add all the stuff you mentioned at the beginning of the call as well, so we'll have at least some meat there. And then the last bit was the community outreach. This is still out of date — we published a report yesterday. One of the big things we mentioned in the report is that a very small number of exchanges and wallets answered. So I think if we do more outreach, I'd personally like to focus on those two groups. Yeah, to just get more wallet perspectives. I feel like exchanges are probably less affected by this, and they tend to be pretty reluctant to share data publicly, so I'm not sure how realistic that goal is. But on the wallet side, I think we can definitely reach out to a few more folks and get their perspective. So we'll keep doing that, and we'll probably have an updated version of the report — I don't want to give a date, but in a few weeks to a month or something — once we've talked to a few more people on that end. And that's all I had on the agenda. I don't know, is there anything else people feel we should discuss? Okay, well, in that case, yeah, thanks a lot everybody. This was really good. We'll have full notes for the meeting, and I'll share a summary on Twitter in the next hour or so. Cool. Thank you guys. Thanks everybody.