Oh, and James just walked in. Great. Also, good morning or evening, depending where you are. This is implementers' call number four for EIP-1559. There's a couple of things we had on the agenda to cover today. First up was just the status update from the different implementing and research teams. Then I think the meat of the meeting was trying to figure out the next steps to eventually get this deployed on mainnet: what do we see as the intermediate milestones to get there? There was also a discussion of EIP-2718, which is the typed transaction. And finally, something that came up on Twitter around whether there are ways we can speed up development by adding additional resources; if people have thoughts or comments on that, we can finish up with those. So first up, let's start with the updates. I don't know if anyone wants to jump in first, otherwise I can call on people. I can go first. I don't have much to update, really. Great. So this is Ian from Vulcanize, working on the Go Ethereum implementation. At this point, my role is mostly to keep the implementation up to date with the spec as and if it changes, and also to make any fixes or changes that need to be incorporated based on the results of the testnet that the Besu team is running. There have been some pretty minor bug fixes in that regard, but aside from that, there haven't been any major updates from my side. I can go next. Abdel from ConsenSys, working on the Besu implementation. Much the same as Ian said: we are still aligned with the latest specification. We did some bug fixes and we restarted the testnet, including three Geth nodes and three Besu nodes. We tried to do some performance tests, sending a high throughput of transactions, and we found some issues, and there are now fixes. And that's pretty much it. One last thing: we think there might be an issue in the spec regarding the base fee.
And we believe we should maybe try to define a minimum value. Otherwise it can be a problem: if we let it go to one or even zero wei, it can be problematic, and it may never go up again. So we can discuss that later. That's it. Cool. Yeah, let's definitely come back to that. I see Barnabé is also on the call. Do you have an update? Hi. Hello. Not too much, just keep working on the simulations. New notebooks soon on strategic users: trying to investigate what happens when you have a sudden spike in demand and users are trying to outbid each other, and trying to add more things to the library to handle this case. That's it. And giving a shout out to Fred, who's also joining the effort. I don't know if you want to introduce yourself, Fred. Hey, I'm Fred. I'm going to be helping out a bit with the agent-based model and implementing a bit of the other behaviors. I've worked a bit with this, but in a first-price auction, and now I'm adapting a bit of my work towards EIP-1559. Nice. Great. And I see Tomasz and Alexey, you're also on the call. Are there any updates from either TurboGeth or Nethermind? Not for me, sorry. The reason I came to the call is just to see what's going on, because there has been a lot of, I suppose, misunderstanding, on Twitter specifically. So I just wanted to see if anybody would come here to talk about this. Okay. Cool. Yeah, that was the last bullet on the agenda, so hopefully we can get to that. Tomasz, any updates on your end? Yeah, sure. I suppose we've been spending time on the research side, analyzing and testing different numbers, making mistakes and finding some insights. And I think that maybe within the next two or three weeks we'll have Nethermind connecting to this Besu-Geth testnet for EIP-1559, so catching up with everything. Yeah, that seems like a very reasonable prediction. Cool. Did I forget anyone?
Did anyone else have an update they wanted to share? Maybe we should do an update on the funding group. Sure. Yeah, do you want to go ahead, James? Yeah. So we did the Gitcoin Grants funding round and had a lot of participation, which was awesome. The funds so far have gone to funding Ian's work on maintaining the Geth implementation. And the rest of the participants have been funded through the EF's work. We missed that aspect of what you said, James. That the other participants have been funded through the EF or ConsenSys. Yeah. Cool. Just like you and I and Barnabé and Abdel. Yeah. Cool. Is there some information about, some transparency behind, how these funds are allocated? So for the Gitcoin grant, it's a public multisig, right? Anybody can view it. We made it clear we wanted this to pay for research and development of the EIP and not go to people employed by the EF or ConsenSys. That was the high-level transparency. In terms of the specific transactions, I think so far the only one has gone to Vulcanize. Does that make sense? Yeah. Thanks. There was one interesting thing I've seen someone posting about: some foundation, an Ethereum research foundation or something like this, posting information about funding a mathematical and game-theoretical analysis of EIP-1559 with some mathematician from, I believe, South America. Yeah. So it's Tim Roughgarden; he's the researcher. He's not on the call today. This was a single individual who themselves funded this research effort through, I think it's called, the Decentralization Foundation. Tim's background is in computer science and game theory, and he's going to work on doing a formal analysis of 1559, basically comparing it to the current fee model on Ethereum today, and hopefully highlighting some potential improvements or some issues with the EIP.
Yeah. This is very exciting. I've seen it; it looks amazing. Will he work together with Barnabé? Because the two efforts would probably help each other: on one side we have this mathematical analysis, and on the other side we have the analysis that is running in the simulations. Yeah. I don't know yet. We've been talking in the initial stage; we've had a chat, Tim Beiko, Tim Roughgarden and I. I try to stay in touch, and I think it's very complementary. I know that he's also planning to publish some open-source code. So I don't know how much he will go into the simulations, but I want to keep the line open with him. Yeah. Great. Sorry, probably just outside the agenda. No, no worries. And it's worth mentioning it is a big stream of work. Good updates. So I think it might be worth going back to the issue Abdel was mentioning with the base fee on the testnet, just to shed more light on that. Abdel, do you want to share more details on this? I was not there; Ian, can you share more details about that? Because if I remember correctly, you worked on that. Yeah. So essentially, with the current mechanism, if the base fee ever gets down to zero, it can never go back above zero. That's the hard cutoff. There's also a bit of an issue at other low numbers above zero. For example, at one, the gas usage needs to be nine times higher than the gas limit for it to increase up to two; from two to three, it needs to be five times higher, and so forth. There's some function, which I haven't actually worked out, that describes this behavior. Right, that sounds about correct. Should we define a safe minimum value then? I think the thing we did on the Eth2 implementation of 1559 is to just set the minimum value to be either equal to the quotient or twice the quotient.
And I guess twice the quotient might be a bit safer. Hmm. Because at one, where it needs to be nine times higher, there's just not enough block space, so it's kind of impossible. The other alternative to setting a minimum is making the minimum change one in either direction. So if usage is smaller than the target, the base fee always goes down by at least one; if it's higher than the target, it always goes up by at least one. I like that idea more, just intuitively. And that means in the worst case you go from one to two to three, and it takes you a couple of blocks until it starts actually going up. It'll take, I guess, a bit less than 10 blocks before you're back to the 12.5%, right? Right, exactly. Well, it'll take you an extra eight blocks or so, but then going up from eight to a million or whatever is going to take something like 100 blocks anyway. I think even longer, like 200 blocks. Mika has a question: how many blocks between one nano-eth and zero, assuming 100% empty blocks? So going between one gwei and eight wei, that's a factor of 125 million: one billion over eight. I think it would be about five and a half steps to do a factor of two, and 125 million is about two to the 27. So 27 multiplied by five and a half: roughly 148 blocks. Does that sound right? Yeah, I'm sure, give or take a few. So does anyone disagree with the idea of having a minimum increment of one? Sounds reasonable. Okay, of course. So let's make an action item to change the spec and the implementations to have a minimum increment or decrement of one. And was that the only outstanding issue on the testnet? I know there was a transaction pool issue as well when we tried to put in a bunch of transactions. Did that get resolved? No, I don't think it has. Maybe Corinne can speak to that a bit more.
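For reference, the base fee update and the minimum-step fix agreed above can be sketched in a few lines. This is an illustrative sketch, not any client's code: the function name, the gas target parameter, and the concrete test numbers are assumptions; only the divide-by-8 quotient and the minimum step of one come from the discussion.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # the "quotient" referred to above


def next_base_fee(parent_base_fee: int, gas_used: int, gas_target: int) -> int:
    """Base fee update with a minimum step of one wei in either direction."""
    if gas_used == gas_target:
        return parent_base_fee
    delta = abs(gas_used - gas_target)
    change = parent_base_fee * delta // (gas_target * BASE_FEE_MAX_CHANGE_DENOMINATOR)
    change = max(change, 1)  # the fix discussed: always move by at least 1 wei
    if gas_used > gas_target:
        return parent_base_fee + change
    return max(parent_base_fee - change, 0)
```

With this rule, a base fee of zero recovers to one as soon as a block exceeds the target, and a base fee of one no longer needs nine times the target to move; simulating repeated empty blocks from one gwei down to eight wei takes about 140 blocks, close to the rough estimate above.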
There is a branch that I pushed up today that hopefully fixes that issue; it's likely just due to a bug I introduced in the last update. We will try this branch and see if the issue is still there. Okay, cool. And those were the two big issues found so far, right? Yes, yes. Okay. I think this leads nicely into the next agenda item, which was: what are the intermediate steps to eventually get this on mainnet? Right now we have this one small private testnet, which has, I think, six nodes on it. It's been really useful for finding all these corner cases and small bugs. But assuming that in the next week or two the spec gets a bit more stable and Nethermind is ready to join as well, would the next step be a more public testnet? And if so, what do we want to get out of that? Yeah, on the public testnet side, is it so that people can begin to experiment on the wallet side? Is that something we want to get out of it? Or is it more technical vetting, hoping for more randomness due to user activity? I would say the second, at least from my perspective. Cool. I mean, that's reasonable. I think spinning one up is doable, but it's going to be hard to get people to just show up and send transactions that are semi-meaningful if it's not an existing testnet. But I'm not sure how that will go. And I believe there was a miner in the chat who said they would be willing to supply hash power if it was a proof-of-work testnet. Right now the testnet we're running between Besu and Geth is a Clique testnet, so I think testing proof of work is an important part of this too. If you're looking for block variance, you can just simulate the block-time distribution and not have to wait for real mining, just as if there was mining. Yeah, that's good.
I'm personally concerned that we test the right code paths. So maybe we don't need competitive mining, but we do want to test the actual code paths that would be used in production. And by code paths, I guess you're referring to the proof-of-work ones, the mining ones, right? Yeah, exactly. Okay. And I think if you get a small proof-of-work testnet, you're proving correctness, showing that the EIP works as intended, which we're in the process of doing on a non-proof-of-work one. And I think the step after that is trying a testnet that has a larger existing state and seeing whether performance degrades on that, because Rick, the last time you brought this up on AllCoreDevs, I think the biggest piece of feedback you got was that it wasn't clear clients could process these large blocks, especially with state access. So that would be step three. Step one is what we have now, step two is maybe an empty proof-of-work testnet, and step three is maybe forking something like Ropsten, where we can get an existing state and maybe get some tooling to adapt to the EIP. Does that seem reasonable to people? Can I just ask a question, and please excuse my ignorance, but does the current implementation imply that the two transaction types will co-exist, or is the change a switch to the new transaction type? The 2718 transaction type? Yeah, I mean: when the EIP is implemented, will all transactions have to have the new format, or will it also be possible to send old-type transactions? There will be a transition period where both transaction types are accepted. Okay. Yeah, during a certain number of blocks, 800,000 blocks, and the gas pool available for legacy transactions will decrease on each block. Yeah, okay. And that's it.
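The transition mechanics just described, a fixed window during which the gas pool for legacy transactions shrinks every block, can be sketched as follows. This is an illustration consistent with the numbers mentioned on the call (800,000 blocks, and a 50-50 split at 400,000 per the later discussion), not the spec text itself; the linear schedule and the function name are assumptions.

```python
MIGRATION_DURATION = 800_000  # blocks, the window mentioned on the call


def legacy_gas_pool(blocks_since_fork: int, block_gas_limit: int) -> int:
    """Gas available to legacy-format transactions at a given height.

    Shrinks linearly from the full limit at the fork block to zero at the
    end of the window, passing through a 50-50 split at the halfway point.
    """
    remaining = max(MIGRATION_DURATION - blocks_since_fork, 0)
    return block_gas_limit * remaining // MIGRATION_DURATION
```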
And one thing that's nice about that, which took me a while to understand: because you have the 2x block size, even when it's 50-50 you'll still be able to deploy a contract that would take up a full block today, because you'll just fill half the double-size block with your legacy transaction or your 1559 transaction. So even though you split the available block space in half between the two types of transactions, you're not actually decreasing the max block size someone can use. And based on your experience with the implementation so far, where does the biggest complexity in the code lie? In which part of the code? Personally, I would say the handling of the different RLP encodings and decodings based on the transaction type, because we don't have the typed transaction envelope. If we make that a requirement for this, I think it will become easier. But to me the pain point of this implementation is the encoding and decoding of the different types of transactions. I'm also a big proponent of 2718, but for the Geth implementation, I'd say the most complicated area is the mempool, the transaction pool more accurately. Are the rules of the transaction pool very different for this EIP than for existing transactions? No, not really, actually. You're just comparing the gas prices between the two types of transactions, but the gas price is derived from the base fee and the fee cap or gas premium in the case of the EIP-1559 transaction. So it's just a different process of arriving at that value. Yeah, the reason I'm asking: is the complexity in the transaction mempool because you then have two transaction mempools with different logic, or just because it's altered or new logic? The latter. It's actually a single mempool right now, ordering them all based on the gas price.
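The single-mempool comparison described here reduces both formats to one effective gas price. A minimal sketch, with the caveat that the dataclass and field names below are assumptions for illustration, not geth's actual types:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Tx:
    gas_price: Optional[int] = None    # legacy format
    fee_cap: Optional[int] = None      # 1559 format
    gas_premium: Optional[int] = None  # 1559 format


def effective_gas_price(tx: Tx, base_fee: int) -> int:
    """One comparable price for both formats, as described above."""
    if tx.gas_price is not None:
        return tx.gas_price  # legacy: the price is explicit
    # 1559: pay base fee plus premium, never more than the fee cap
    return min(tx.fee_cap, base_fee + tx.gas_premium)
```

A mixed pool can then be ordered by sorting on `effective_gas_price(tx, base_fee)` in descending order.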
And we do need to update the implementation to rebase on top of 1.9.19, which adds deterministic ordering when two transactions have the same gas price. Okay, so the reason I was asking is that my suspicion was that the most complexity would be in the implementation of the transaction pool. And therefore, when you previously asked what needs to happen for this to go into mainnet, I think one of the main things is to preempt any possible questions or problems that would arise with this particular implementation. For example: is this code resilient to any kind of DoS attack? Tick that box: yes, it is, because of such and such. Can we do any stress testing on this? And so on. So basically, I think that would help a lot, because then you go to the Go Ethereum developers and say: these are the things we're preempting, for most of the questions you're going to be asking. Yeah, that makes sense. And I think if we roll in 2718, which is maybe getting ahead of ourselves because I think that's the next item on the agenda, if we decide to implement that first, that introduces some uncertainty into the mempool, in that there's no clearly defined way to order transactions between all these arbitrary types. Interesting. So on one hand, 2718 helps with the RLP encoding and makes the transactions easier to manage. But then, if it makes the transaction pool, which is the other most complicated bit, more complicated, it's not clearly a net win. But we can say that maybe the transaction pool is a bigger problem than the RLP, and we can deal with that. It's not clean, but we can deal with that. Yeah, I just think it's a little underspecified where it's at right now for what we're trying to do.
So I think the spec needs to be cleaned up a little, or completed, frankly, in order for us to really start talking about how it would impact the work that we're doing, to Giorgio's point as well. Yeah, there's a couple of comments in the chat; I'll read them so people not on the Zoom call can see. Giorgio says that 2718 seems to generalize. Do we really want that? Is it important to bundle this with 1559? You could add an optional version field; if present, set it to v2 and decode it as a 1559 transaction, otherwise it defaults to the current format. And Mika says 1559 is one of the transaction types that you want, and there's a question about whether we need a generalized versioning scheme for transactions. 2718 isn't just the 1559 transaction; there are a bunch of other transaction types people are proposing. So given that the transaction pool is the most complicated bit on the Geth side so far, does it make sense to try to specify that somewhere? I don't know if the EIP is the right place for it, but, like Alexey said, so we can proactively address some of the objections around it. But transaction ordering is not a consensus thing. Yeah, I know, but at least saying "this is how we did it" and explaining it, not as something people have to conform to, just as something people can critique. And it should be added to the security considerations in the EIP. Yeah, that's a good point. So maybe just adding something about the transaction pool ordering in the security considerations section, and why the potential issues can be mitigated. And although as part of the specification, in the strict EIP sense, it isn't as important, for inclusion in a hard fork it's really important to that discussion, which may happen more as those kinds of processes separate into a different place.
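For readers following the 2718 discussion: the envelope idea is roughly a one-byte transaction type followed by an opaque payload. Because a legacy RLP-encoded transaction always begins with a byte of 0xc0 or above (it is an RLP list), reserving type bytes 0x00 through 0x7f keeps the two formats unambiguous. The helper names below are illustrative, not from the EIP text.

```python
def encode_typed(tx_type: int, payload: bytes) -> bytes:
    """Wrap an opaque payload in a typed envelope: one type byte, then the payload."""
    assert 0x00 <= tx_type <= 0x7f, "type byte must not collide with RLP list prefixes"
    return bytes([tx_type]) + payload


def classify(raw: bytes) -> str:
    """Dispatch on the first byte: typed envelope vs legacy RLP transaction."""
    return "typed" if raw[0] <= 0x7f else "legacy"
```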
I'm hoping my audio is better than it was. It is. Okay, good. So I've started thinking of this not just as the EIP specification, but as: how do we get this through the hard fork proposal? And another comment I wanted to make about this transition period. There is some number: was it 800 blocks, or 8,000? 800,000. 800,000 blocks. Okay. And where does this number come from? Four months' time for wallets to adapt and change. Is that it? Well, this is it. Is this actually going to be okay, or will we need another emergency hard fork to postpone it? Because that's how I think it's going to happen. Yeah. Alexey, you bring up a very salient point. I think we need to be prepared for a series of multiple hard forks, based on what happens if we don't see a sufficient transition from wallet providers, exchanges, and what have you, and consequently don't see the shift in transaction volume. To me, that is the biggest risk, because we can all, as engineers, have conversations about engineering problems; we can find a path through engineering problems. What do we do if, say, OmiseGO just doesn't change the transaction type, and they have Tether? I think that's a pretty big issue. So originally, when we met in 2019, in Berlin I think it was, when we discussed this, I suggested basically the dual transaction types. And one of the reasons I did this is that we could monitor the uptake of the second transaction type, and use that information to inform when the transition period needs to end.
Because what we really want to see is the number of new-type transactions increasing and the number of old-type transactions decreasing, and when it gets to certain thresholds we just say: okay, fine, now we're going to make it mandatory. I don't know, do you think that's sensible, or is there maybe some fundamental issue that makes it a bad idea? I think there's a psychological component. I think there needs to be a hard number and a legitimate threat to motivate people. You have to have a carrot and a stick, frankly. So way back when, the stick was that we're going to hard fork, and the carrot was lower gas costs and the new transaction type. Over time, people didn't like that lower-gas-cost narrative, which is fine with me, and so we're still trying to figure that out. But this is actually quite an interesting point. Let's say we had two transaction pools, basically two spaces inside the block, say equal in size, or in gas limit, and one part can only take transactions of the new type while the other part can take transactions of the old type. Then, with everything else being equal, if we can see that the users of the first type actually get the benefit that was promised by this EIP, you can say: look, these are the transactions in this pool, and they are actually benefiting, because they have all the benefits people are promising. If that doesn't happen, maybe there is something wrong with it; maybe the modeling wasn't correct. But this assumes that the benefits will come even if users have a choice between a first-price auction and EIP-1559. It's not clear that, given the choice between the old transaction and the new transaction, I wouldn't sometimes want to choose the old transaction. But if I didn't have that choice... you see what I mean?
There's not necessarily an equilibrium where both transaction types work at the same time; there could be interactions between the two. No, but this is actually going to be an A/B experiment, because essentially you can look at it as two different blockchains running with two different rules, but we're basically combining them into one blockchain, and you're testing both groups on the same blockchain. But at the end of the day, there are going to be interactions based on the gas price, right? No, that's true. But don't you think the better model should win even in that case? Or does it depend on some sort of coercion, where you have to force everybody to stick with the new rules? It might win, but it might not. It's not clear to me that, in the presence of 1559, the first-price auction is an unstable equilibrium where, little by little, you see people migrating to the new format. You might have interaction between the two. And I agree with Rick: unless you have some sort of psychological deadline, okay, that's it, you don't have the space to do it anymore, you can't avoid being in this equilibrium in the first place. But it's an interesting question, actually: if there are interactions between the two, what does the fee market look like? Even for the transition period, it could help anticipate what will happen during these 800,000 blocks, or however many. Yeah. I just saw a comment from Mika saying the issue is that dapp developers don't have the same incentive as users. We need MetaMask, Tether, etc. to update, since the users can't update without them. So, yes, I just thought about this now. But given that this EIP has such wide support, you would think that support from the wallets and so on would actually be a competitive advantage.
However, if the benefits are so weak that we're not even sure this EIP is going to win against the status quo, then is it really good? The benefits being weak isn't the only explanation for what could happen, though: the interactions of the two could be complicated, and there's also just the stickiness of the way things have been done. So I don't think it's fair to say that, because there are other reasonable explanations besides users not seeing the benefit. But isn't one of the carrots here block space inclusion? If you have this 800,000-block transition period, at 400,000 blocks you go 50-50, and past 400,000 it means more than 50% of the block space is for 1559 transactions. At some point the benefit is: if you want a large number of transactions included in a block without raising the base fee, then you kind of need to support 1559-style transactions. And this is where large applications, if you're a Coinbase or a Tether or something else and you actually have a significant amount of on-chain volume, have a really strong incentive to use that block space. And it kind of creates a race: you want to be the first to access that block space, so you implement it first. Because otherwise, after the 800,000-block period, if you didn't implement this change, you can't send transactions to the chain, which seems like a pretty big disincentive. I mean, one way to probably address this: does EIP-1559 fix the maximum block gas limit, or does it not? It no longer does; it just uses the miner-set limit now. Okay, so what if you essentially fix the old-style block gas limit forever, or maybe just make it so it can only be reduced, and allocate all future block gas increases to the new transaction type?
And in this situation, whenever there is an increase in the block gas limit, it only increases for EIP-1559, which means the people who did not upgrade still have the functionality, but they will have to cram into a much smaller space. That is probably going to be enough incentive for people to migrate. That was my original suggestion, almost exactly verbatim what you just said. I think the complexity of the conversation put people off it, but if we're coming back to it, I'm very strongly in favor. Increases, or shifts in the distribution? I think we do want to retire the old version at some point. In general, the protocol can't keep increasing complexity by adding new types of things forever without removing old things, right? So it's just a question of: is it four months, or eight, or 12, or whatever amount? I mean, what I'm saying is that we could make these decisions with some data. What I don't want to do is make all the decisions before we even know what's going to happen. So we can first say: now we're going to fix the old block space, and we're going to reserve all future increases for the new one. Then we look back in six months and say: okay, what happened? Did anybody still use the old type? And if not, then we say: okay, we just ditch it, or something like that. So, one: I don't know if anyone's advocating for block size increases right now, so I don't know if there's much room to grow there. And two: if there are two distinct spaces of block space, they will both be used, because there will be sufficient demand regardless of the fee structure to use both of them. So I don't think you're going to see the pre-1559 half, or portion, just die, because block space is block space, to a certain extent.
And another thing to think about: it would be nice if the two markets were actually separate. But what will happen in practice is we have the standard market, or the legacy market, and then we have the 1559 market, and if they exist at the same time, there's a super-market that encompasses both of them, with people able to play one against the other. But actually, that's going to be great, because it means people have implemented the new transaction type: the fact that they're arbitraging it. Yeah, but that will make the "one being adopted versus the other" signal not as useful, because you're not seeing the adoption; you're seeing the result of both markets existing at once and playing against each other. No, no, what I'm saying is that it is not possible for a third party to modify this. Let's say I send an old-style transaction; it's not possible for a third party to trustlessly modify my transaction into an upgraded EIP-1559 one. It has to be me who creates that transaction in the first place. So the only people who can arbitrage this are the people who create transactions, and if they do that, it means they've already upgraded; they can already send the second transaction type. So I don't see it as a bad issue. Yeah. So the way I envisioned it originally was that 1559 transactions always come first. You double the total gas limit, and then you basically make a sort of special block, if you want to think of it that way: that much bigger double block is split in whatever ratio, and 1559 transactions always come in first, ordered by gas price, and then the old transactions are ordered after that.
So if you're in some sort of auction, if you're one of these users using so much of the gas or what have you, you're going to want to switch to 1559, because you're always going to beat whoever is in the traditional transaction type. So another possible idea: if we want old-style transactions to continue being valid forever, we just make it valid to include them as part of the 1559 space, and we map the gas price into the fee cap and set the bribe to some standard value, like five wei or whatever. It's a bit ugly, but it would work. Well, do we want those transactions valid forever? No, no, the idea would be that at least we would be able to retire basically everything about the old rules except for the format, and we can retire the format later. So what I suggest is to basically do at least two hard forks. In the first hard fork we do what I just suggested, and then once we get more information, say in six months' time, if we see that adoption is happening, that people are really migrating, then we can say: okay, after that, we introduce the linear shift of the ratio down to zero, so over a period of time the ratio of the old transaction type simply drops to zero. So I basically suggest not introducing this cliff-edge moment right now, but introducing it after we've seen what happens. But then there's no incentive for people to adopt 1559 in the first hard fork, right?
Well, the only case would be that, if nobody adopts 1559, then the base fee on the 1559 side will be tiny, and there will just be all this space ready for people to claim, right? Basically, the economic equilibrium is that the ratio of adoption is the same as the ratio of the gas limits of the two spaces. Right — if you allocate some portion to 1559, it will be adopted by somebody, because block space is in high demand. Oh yeah, if that is happening, then this is good data to say: okay, after this we're going to do the — not cliff-edge, but basically gradual — reduction of the available space. Another option is to have the gradual reduction start 400,000 blocks in, because at that point you can have an emergency hard fork to turn it off if nobody's been using the other half, but otherwise you don't have to schedule multiple hard forks. Yes, I was going to suggest something similar to that, so I agree with that. I mean, I understand Micah is saying that we're consuming too much time on the call, but I think it's probably worth talking about, because it will make the rollout easier or harder.
Yeah, and I think, personally, I'm in favor of something that doesn't absolutely require a second hard fork. So this idea of having the transition period only kick in halfway through is nice, I think, because it gives more warning — although there will be warning from this being deployed on testnets, right, if you look at the whole process. But what's nice about having the transition period in the first hard fork is that at some point it goes to zero. Worst case, we have an emergency hard fork to push it back. But if it does reach zero block space for old transactions, then the second hard fork is really optional — it's like: do we want to do this to clean up the protocol and make it simpler? But if, for whatever reason, we don't want to do another hard fork on eth1 or something like that, we kind of don't have to. And I understand the hesitation towards it — it looks like we're somewhat relying on an emergency hard fork if we needed it — but I would tend to think that if we were able to do something, and then say, hey, we can do more if we really need to, versus having to push this nine months further back, the preference of the community would be: we have an emergency hard fork available, and they would rather us move forward. Well, this is, I think, a bit more of a philosophical question. I mean, my personal view is that it should be a matter of principle that if you don't touch it, it just keeps working; you don't have to rely on something happening in the future. But, you know, other people have different opinions on that. And, just to make sure I understand this: what would be the advantage of having the transition period only kick in halfway through, rather than being linear over the whole time? I would just say we give more time, because another way to achieve that is you literally plan the hard fork two months later.
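For concreteness, the two schedules being compared — a linear ramp of the legacy bucket over the whole period versus one that only kicks in some number of blocks after the fork — could be sketched like this. All block counts here are placeholder numbers from the discussion, not agreed parameters; `ramp_start=0` gives the "linear over the whole time" option:

```python
def legacy_gas_share(blocks_since_fork: int,
                     ramp_start: int = 400_000,
                     ramp_length: int = 800_000) -> float:
    """Fraction of the (doubled) gas limit reserved for legacy transactions."""
    if blocks_since_fork <= ramp_start:
        return 0.5                       # flat 50/50 split while observing adoption
    elapsed = blocks_since_fork - ramp_start
    # Linear decay from 50% down to 0% over ramp_length blocks.
    return max(0.0, 0.5 * (1 - elapsed / ramp_length))
```

Under these illustrative numbers the legacy share holds at 50% for 400,000 blocks, is down to 25% at block 800,000, and hits zero at block 1,200,000 — the "cliff" that an emergency hard fork could still push back.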
It's psychological. I mean, it's really a game of chicken, right? We have to tell people we're going to drive off the cliff or else they're not going to do anything. Yeah, that's what I'm thinking as well. So I think I'm in favor of just having the transition period over the whole time — you know, going from a hundred percent old transactions to 99, 98, and so on — rather than giving this kind of, I don't know, 400,000 blocks of slack, which we can get by just delaying the hard fork 400,000 blocks. I think it's a question of... we have this evaluation inflection point that I think is very difficult. I think it's a good idea that Alexey is suggesting, that we have this point where we as a community make a decision and decide: well, which way do we go? And I think that, again, from almost a game-theory perspective, what we have to say is: okay, we've started the car going towards the cliff, and now we can turn the wheel — but we have to turn the wheel to stop from going off the cliff, or we do nothing at 400,000 blocks and we continue off the cliff. So it's that kind of game. Yeah, okay, that makes sense. And is there a way we can get some preliminary data on that? Like, obviously, if it's live on a network, then people can start playing around with it. Is there a way to test this before we get to mainnet, basically? Has anybody looked at Filecoin yet — like, the data they have already? Well, the problem with that is every other example is someone implementing something where they don't have, you know, billions of dollars literally running on an old transaction type that needs to switch to a new transaction type. I mean, for us there are two separate problems, right?
There's the mechanism — the new set of mechanisms of 1559 — which I think can be verified and reasoned about. That's a pretty well-defined problem, and it looks like other teams have taken what we've started here and gone off and implemented it, and I think that's fine. And then there's the fact that we have to have a transition period, because we have so many existing users that other chains obviously don't have. And I think it's that transition period that really changes the conversation, and what gets lost on people is that there's a social problem that we have that other teams simply don't have. Yes, and also you have to project... I mean, we could obviously argue that, yes, we're just going to give them a big stick at the end and then everybody has to migrate within 800,000 blocks, but you sort of have to be cooperative towards basically everybody else. And I think it's reasonable to introduce this, and then say we're going to do another evaluation and decide how quick the remainder of the transition should be. Because we might find that in four months' time everybody has migrated, and we just say: oh, let's just turn the old thing off. Or, if we see that the migration is happening slower — the new transaction type is catching on, but it takes a bit of time — we can say: okay, let's do it over the next year or something, and we're going to program the linear function to slow down. There's also a risk... I understand that the idea of getting data to then support the decisions on how to turn things off or down is great. I'm still not sure that the data we'll get is going to be very easily digestible, or really usable, because it is the union of two markets.
It will be clearly visible how many transactions of the new type got into the block and how many transactions of the old type got in. You can chart it and make a little chart out of it — I don't know why it wouldn't be clear. But if both markets end up having the same gas price — which they should, when people are arbitraging — you should expect that both would be filled to capacity, right? Yeah, and that would be a great result; it would mean there is adoption of the new type. You can also, with some analysis, probably identify where the transactions are coming from and going to. Like, at the moment you have a lot of websites which inject transactions for users — you know, landing websites and stuff like that. So what they do is connect to your Ledger wallet, or what have you, and create a transaction for you and inject it, and so on. And so you can see how many of these actually transitioned. But because a lot of transactions will be going into this or that contract, and so forth, there could be some way of estimating whether the adoption is really going on, or whether it's just arbitraging happening. Yeah, I mean, assume that the 1559 pool is empty — then the base fee should go towards zero, and then I think it will very quickly be full again, at least until the gas price in both pools is the same, right? The problem is that you don't really have a counterfactual if you have both things living at the same time. Yeah — you're describing something like A/B testing, but that's not really what you have, because you constantly have this interaction between the markets in the gas price, right? No, but as I said before, the arbitrage actually does require you to upgrade — that's what I'm saying. Yeah, but I'm sure people will have good reasons to upgrade, like the base fees.
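The "base fee goes towards zero" dynamic mentioned here — and the minimum-value concern raised at the top of the call — are easy to see with a simplified sketch of the base fee update rule (at most a 1/8 change per block toward a gas target). The gas target and starting fee are illustrative, and the clamped variant at the end is just one plausible reading of the proposed "minimum of one" fix, not the final spec wording:

```python
def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Simplified 1559 base fee update: at most a 1/8 change per block."""
    delta = base_fee * (gas_used - gas_target) // gas_target // 8
    return base_fee + delta

fee = 1_000_000_000                      # 1 gwei, arbitrary starting point
for _ in range(200):                     # 200 blocks with an empty 1559 pool
    fee = next_base_fee(fee, gas_used=0, gas_target=5_000_000)
assert fee == 0                          # decayed all the way to zero...
assert next_base_fee(0, 10_000_000, 5_000_000) == 0   # ...and stuck there,
# because the adjustment is proportional to the current base fee.

def next_base_fee_clamped(base_fee: int, gas_used: int, gas_target: int) -> int:
    """Same update with a floor of 1 wei, and the increase rounded up to at
    least 1 so a 1-wei base fee can actually climb back under demand."""
    delta = base_fee * (gas_used - gas_target) // gas_target // 8
    if gas_used > gas_target:
        delta = max(1, delta)
    return max(1, base_fee + delta)
```

With the clamp, empty blocks park the base fee at 1 wei instead of 0, and a single full block moves it back up — which is why a minimum matters for the two-bucket experiment above.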
Exactly. Okay, so if everybody arbitrages, that's already a great result — everybody has upgraded to the new transaction type. But that won't show us who hasn't upgraded. Well, yes, it won't show us who hasn't upgraded. And I'd like to point out that, given the current realities of the chain, the end users — the actual humans who tweet and use Discord — are not the largest gas consumers, right? The gas consumption doesn't really map; there are heavy gas users, people using a hundred or a thousand times as much gas as other people. And those people are obviously going to be optimizing and arbitraging — I mean, there are gas arbitragers who exist in the market today. They will be doing this price manipulation, not end users. And it doesn't matter if there are these gas manipulators out there manipulating the price, even if it's to our mechanistic benefit, if MetaMask doesn't support our change, for example. Yeah, I agree with that. I think we don't really see the iceberg of transactions on the chain. I mean, if you look at the gas consumption right now, number one is probably Uniswap, right? Uniswap would be one of the first to upgrade, I'm pretty sure. Yes, I think we could get them to upgrade, and I think that this is sort of the conversation: can we get Uniswap to upgrade, can we get MetaMask to upgrade, can we get Etherscan to upgrade, can we get Coinbase to upgrade? If we can get these different community members to upgrade — if we have a process for engaging them — then we can really lower the risk. I mean, we can talk about this risk from a mechanism perspective until we're blue in the face, but at the end of the day someone has to go talk to someone and make sure that they're switching. Well, actually, it's been said that 90% of Uniswap is bots, but that's fine, because the bots will also upgrade.
And just one thing I'm not sure I understand: if you don't have a transition period, how do you split the block size? Do you just say the gas limit is X and whatever type of transactions go in, and there's no carrot or stick — you can send either type, and one block can be 99% 1559 transactions and another can be whatever percent? I've never seen that proposal explicitly. I think I understand your question, but I don't know that anyone has ever proposed what you're describing, so I'm not sure how they imagine it would work. Yeah, but that's basically what we're talking about right now, right? If you remove this transition period to see what happens on the chain, how do you actually allow that? You just have two buckets, and then leave them as two buckets, and say: have fun. Yeah, exactly — but then, for example, how do you calculate the base fee? Do you take normal transactions into account to calculate the base fee? You obviously can't do that, because then you're removing the incentive to use 1559-style transactions. I don't know — maybe it's something I'm not understanding well — but I think you just treat them as two different pools, and look at them as two different chains. Yeah, in a way. Yeah, it seems to me like maybe this actually adds complexity to the EIP, whereas having the separate, clean buckets makes it simpler — but I'm not sure; that's just from a high level. My intuition is that you have to have two very clear buckets, and if you don't do that, it just doesn't work. Okay, got it. Yeah, that's what I'm thinking as well. And then in the chat there are a couple of comments saying you just leave it 50-50 indefinitely, like forever. But the challenge there is that's actually more aggressive than the current proposal, because the current proposal starts at, like, 0% 1559 and then gradually gets to 50-50. Oh — the current proposal starts at 50-50.
Well, it doubles the block size and starts at 50-50. Got it, yeah — so there's basically an immediate boost in terms of block gas limit right after the hard fork. Yeah. I thought it doubled the block size and started at 100-0, sorry, that's my bad — because if you did that, right, if you doubled the block size, started at still 100% legacy transactions, and over time allowed more 1559 transactions, then you'd get to this 50-50 spot midway through the transition. Right. And the other option... I mean, I think there is this sort of desire for engineering parsimony — and I'm sorry, I can't read the chat and listen at the same time — but, you know, I appreciate the desire for engineering parsimony, where you don't increase the block size. And then what would have to happen is you'd have to ramp in 1559 and ramp down the classic transactions, which would basically be throttling potential 1559 adoption, which we don't want to do — which is why we double the block size. Got it, yeah. So, I guess, circling back to Alexey's proposal of just leaving it at 50-50 until we get more data on the chain: the strongest objection to that is basically that it doesn't create an incentive for people to switch. But if people already kind of want to support this, we'll see how much organic interest there is for it, right? I think it isn't correct to say that there's no incentive to switch, because if you did create this extra new bucket — which is the same size as the current one — anybody who hasn't migrated is actually only using half of the space they could be using. So all the smart people who have implemented 1559 will be using the new bucket while it's still empty.
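The two-bucket arrangement being clarified here — double the pre-fork gas limit at the fork and split it 50-50 between a legacy bucket and a 1559 bucket — can be sketched as a simple block-validity check. Names and numbers are placeholders for illustration, not spec values:

```python
PRE_FORK_GAS_LIMIT = 10_000_000  # illustrative pre-fork gas limit

def buckets(legacy_share: float) -> tuple:
    """Split the doubled gas limit between legacy and 1559 transactions."""
    total = 2 * PRE_FORK_GAS_LIMIT
    legacy = int(total * legacy_share)
    return legacy, total - legacy

def block_valid(legacy_gas_used: int, eip1559_gas_used: int,
                legacy_share: float = 0.5) -> bool:
    """Each transaction type must fit within its own bucket."""
    legacy_limit, new_limit = buckets(legacy_share)
    return legacy_gas_used <= legacy_limit and eip1559_gas_used <= new_limit
```

At `legacy_share=0.5` this is the "immediate boost" version: both markets get the full pre-fork capacity from day one, and lowering `legacy_share` over time is what the transition-period debate above is about.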
Yes, and I also think that making 1559 transactions simply happen first is a huge, huge incentive that will cause many bots to adopt it, as well as exchanges and any number of people. And my objection to Alexey's proposal is more that the data won't be able to tell us whether 1559 was adopted or not, because of the lack of a counterfactual. So we could set up some kind of experiment, but I think this is not one that would tell us what we want to know — there's no way to interpret the data in the sense of "oh, people prefer 1559 to the old-style transaction." So I think that's super important to keep in mind. I appreciate the technical point there, but I think the evaluation that has to happen — necessarily, because of what you're saying — is social. There is no data on chain that you're going to look at that will answer this question for you, which is in part why we need this thing that Alexey is talking about. If we could just use on-chain data, then we wouldn't need this social decision point; but because there is this large social component to the problem, we have to create this decision point where we can't say positively that the experiment will give us sufficient data. We have to run the experiment, have this inflection point, and then see what happens in the community — and actually go out and do the analysis by talking to people, as opposed to looking at what's going on on chain. Okay, yeah, I can agree with this. Yeah, this is fine.
And my objection is that there will be people that adopt 1559 because there will be cheap gas, and those early adopters will just run for it. But there is a large group of people that are slow and won't adapt without some kind of hard limit. In the end, if there's the option to keep doing standard transactions, there will be a large, long tail of people that don't support 1559 unless there is some kind of hard cutoff. And then, just for completeness, Micah had a couple of comments in the chat saying his objection to Alexey's proposal is that he doesn't think we'll get any usable data from that change — that there's no scenario where we don't see both buckets fill, other than Ethereum usage going to zero — and then: what's the objection to just being willing to hard fork it away if we see people not adopting? I think we have to set a time... oh, sorry, go ahead. I'm not sure I understand Micah's point completely. I think that Alexey's point sort of encompasses both of those responses. I mean, we have to set a time so that we can say: okay, you should expect a hard fork here — to incentivize people to be prepared, and to give people notice. I mean, from my point of view, this goes back to our disagreements about, let's say, the difficulty bomb, because I see the approach with the difficulty bomb actually mirrors what has been baked into this current EIP-1559 proposal. It's essentially what I think is a kind of insecurity of the developers: that we always have to embed some sort of threat or cliff edge, just in case people don't do what we want. Instead of saying: yes, we have a clear roadmap, this is what we're going to do, and we are basically secure — we know what we're doing and we are going to do what we're gonna
do. And basically, you know, if you're with us, you're with us — we don't have to threaten you. That's kind of my approach to this, so I don't like creating threats in the future: you know, if you don't do this, there's going to be a hard fork and it's going to kill you, or something like that. I think the difficulty bomb is a great example, because what the difficulty bomb has actually demonstrated is that we make empty threats, right? I personally believe we should be making threats, but they should be legitimate threats, not empty threats. And so if we aren't going to commit to actually doing the thing — to your point... And that's also, again, an interesting psychological difference: are we threatening people, or are we just saying that, on some level, as the architects of the system, collectively, the architects are telling you this is imperative and it must happen, and the architects are going to do everything they can to make it happen — be prepared? Or are we, as architects, going to capitulate to people who, I think we all collectively believe, are actually making things worse for everyone else, probably out of more incompetence than malice? And I think, to your point, Alexey, that is exactly it — there's a deep philosophical question that we have to answer here. And I think the difficulty bomb has actually set a precedent of the EF saying that something is going to happen, and then it doesn't happen — and that's what I think we have to fight against. Yeah, so to summarize: basically, if we say, okay, it's going to happen in, whatever, 800,000 blocks, everybody understands that if people didn't have time to upgrade, we're just going to emergency hard fork. Everybody knows that, and therefore, I agree, this is a completely empty threat
because it doesn't serve any purpose; it just creates more work for us. Everybody knows that if somebody applies pressure and says, "oh, we didn't migrate, you're going to kill Ethereum," we're going to do an emergency hard fork — and it's also going to look quite bad. When I'm thinking about it this way, just so I can be clear for myself: I am confident that there are people that will wait until the last moment — you've seen this with every hard fork and every deployment. So there will be people that wait, but they are in the minority, and then you can basically get through with it. So you have to make sure that your threats are not empty — that's what we could say. But, you know, assuming this is not an empty threat, what's the worst that happens if we just put in the transition from the start, right? We get to the point where there's no more space for old transactions. Obviously, some amount of, I don't know, altruistic or smart or incentivized people have upgraded to 1559; there are some people who are kind of stuck at that point, and they can't do anything until they upgrade. I guess I'm trying to get an intuition for how big of a group that is, and what's the impact on them. Sorry to interrupt — it depends on who they are. If someone on this call just sort of grits their teeth and provides the fork to MetaMask, if someone goes out and talks to Uniswap, if someone goes out and talks to these important people and makes sure that somebody actually, basically, hands them the patch, then maybe there are only a bunch of stragglers left that are irrelevant. I think it's really hard to say. You know, we have to take a strategy that's much softer — again, to Alexey's point, I think we have to be willing to go out and talk to people and make the change, as opposed to sort of decreeing it from on high and hoping
that people then listen to our decree. And when you say "make the change" — I mean, I think it's possible to reach out to people, right? Like, James has done it for the hard forks, the Cat Herders have done it, and I'm happy to help with that as well. It obviously gets harder if, you know, we have to implement it for Coinbase and for MetaMask or whatnot. So what do you see as the best or most effective path there? Well, yeah, I usually stop thinking about this problem right where you ask this question, but I think we will actually have to be providing forks. I mean, when I say "we," I think that anyone who wants to see 1559 implemented, and has the means, and is on this call, also needs to be willing to go out and talk to implementers of auxiliary services to make sure that they implement it as well. You can't just talk to us here and think that you're going to accomplish your goal — you're not. Yeah, so that's definitely something we can do before the next call: reach out to various large users of the chain — both individual large users and the kind of gateways for a lot of small users, folks like MetaMask and exchanges and whatnot — and just gauge where they're at regarding this. And that would give us, I think, some preliminary data around what they think is the biggest hurdle, how realistic it is, how much advance notice they need. Yeah, it might be worth looking at funding some of that out, because it's a lot more work than I think people realize, and having done this with hard forks, it is very high-touch to even get a response. Got it — which translates into a lot of hours to be able to do it. Yeah. And, just to be mindful of time, maybe that's something we can take offline — James, I'm happy to follow up with you, and Rick also, or anyone who has thoughts
about this. Yeah, that's a good call. Okay, great — so just coming back to the actual implementations: what do we see as the biggest blocker now? I don't think we have enough data to make a big change to the spec, so if we assume that what we had was conceptually good, it seemed like documenting the transaction pool issues and mitigations better was one big action item. What are the other things we can be working on to move this forward, implementer-wise? The only other thing that I've picked up is to update the spec so that the base fee is committed to a minimum of one. And is there value in doing testnets beyond what we have now? Does anyone see value in that, or do we think we need this answer from the large users before we do any of that? I see some value in having a more public testnet where we put a bounty on breaking the mempool. Yeah, I think we should do both at the same time. I think we should continue forward with the testnets until we get up to, like, a Ropsten-level testnet, because I think we're going to need that anyway to demonstrate seriousness and commitment, and for other reasons as well. Yeah, I also wanted to say that there are these testnets which basically tend to be a nexus of activity — like, whichever one that is at the moment; it was Ropsten before — where you can actually see serious action going on in terms of the number of transactions, people deploying all sorts of stuff. So it would be good, if it's possible, to get that kind of network running on this change, just to show that it's actually working. Yeah, and I feel like that's really valuable, but it's maybe a step ahead. I guess that's a question — I'm not sure — but is there value to having a smaller public testnet before
forking, you know, one of the larger ones? Yeah, I think we should have a phased approach. Okay, and then the step after that would be to get it — sorry, you kind of cut out there — so the step after what Rick just said would be to get it into a YOLO-style testnet, and the step after that would be to put it into Ropsten. Yeah, so right now we're kind of at the pre-YOLO step. I think over the next couple of weeks we can do a sort of 1559-style YOLO testnet, and the question there is: do we want it to be a proof-of-work testnet? It seemed like yes, based on testing all the code paths — does anyone have a strong objection to that? Okay, cool, so a proof-of-work early testnet. Is there anything else? It seems like there's definitely a month or so of work, and we can have another call after that: reaching out to the large users, the spec updates — the clarification for the transaction pools and the increment change — and setting up the testnet. And then obviously there's all the research that Barnabé and Tim Roughgarden are working on in parallel. Is anything else missing from this? I think that's good until next time, and then after that stage it's getting more clients. Yeah, but we don't want to do that yet, so that's good. Micah says 2718 is also a thing we should be thinking about. Does anyone — and this is probably just a throwing-it-out-there question — does anyone have a sense for how long it would take to get the transaction envelope EIP into a YOLO testnet, basically adding it to the current implementations? Right, yeah — how long would it take for us to get implementations to the point where we could include it as part of a YOLO? So would it include EIP-1559, or would it be on the base? No, it would just be the transaction envelope. What is YOLO? That's
the testnet that Péter, the Go Ethereum team member, runs. Oh, okay — that's what they're using for integration, basically a pre-testnet. I think it would take about a week or two, if that was my focus, to get that implemented in Go Ethereum. Yeah, two weeks for Besu, I would say. Okay — if we do that, then it's pretty reasonable to start targeting 2718 for 1559. Okay, do you think we should do that now? I feel it might be valuable to just set up the testnets with our current implementation, because it already took a lot of debugging to get them to work, and we might be making some other pretty major changes based on feedback from large users. So, given that, does it make sense to hold off on these large spec changes for now — except, I guess, the increment change, which is a small one and actually makes things run smoother — and to put 2718 and the potential transition-period change on hold until we have more data? I wouldn't put 2718 into the 1559 implementation until 2718 is on some kind of YOLO testnet. So it should be a separate fork that is then trying to be included in an upcoming fork. Got it. And then once that's been accepted, we can go back and do it in 1559 — but the sooner we start getting 2718 implemented, the sooner we can have 1559. Okay, I just wanted to draw attention to Micah's comment, where he mentioned that Péter from the geth team would like to see it implemented with a second transaction type, not just the legacy type. Yeah, so we can have 2718 implemented, like on YOLO, but not on mainnet, and then wait until there is something to include it with. But we can still have it implemented, in the form of what it would be like when it goes to mainnet, that the 1559 team can adopt. Right, right. What
what everyone's talking about, James, is that — we're being thorough — you can't just have 2718 by itself. You need 2718 plus 2711 in order for 2718 to actually work; you need the second type. And I'm just wondering, is that out of scope for what we're talking about right now? Well, I think it's presumably all the same people, so we might as well talk about it. Got it, but it is out of scope. Yes, you have to have a second type to even have the transaction envelope be meaningful — well, strictly, to demonstrate its purpose, to verify that it actually is safe and that it works, right? If you just deploy 2718 by itself, you just have this weird sort of vestigial thing. You need 2718 plus an actual new transaction type — you need to have two envelopes. And, because it's relevant now: the thing that was confusing to me about 2718 was that it wasn't clear how it treated the transaction pool; it just sort of acted like the two envelopes were equivalent, which, more times than not, is not going to be the case. Yeah, so I want to be clear here that I'm not talking about mainnet — the stuff you're talking about, the points raised by Péter, and 2718 plus another transaction type going into mainnet — all of those things need to be verified. But going into YOLO, which is the pre-testnet that is used for testing client integration, we could just put the transaction envelopes onto that, so at least the 1559 implementers can implement it, and then test it, and have that. I think we'd probably put in a dummy second envelope if for some reason it's too hard to implement. My inclination would be to do 2718 and 2711 at the same time; if there's some reason why we can't do that from an engineering perspective, then we should come up with a dummy shim for 2711 — but I can't imagine that's significantly easier from an engineering perspective. Yeah, because that is the
The change, and the reason I'm getting into this, the change that 1559 will need to make in order to adopt 2718, is a future bottleneck. So if we can get 2718 to a point where it is moving forward, it's been well specified, the clients all agree, and it gets to the YOLO testnet stage, then the work of redoing 1559 to use it makes sense, because we have what it would eventually be. Yeah, I agree with all that. I think they're separate: 1559 and 2718 are separate. The question is what goes with 2718, since we're basically excluding 1559 from that list, and either 2711 or a dummy transaction that was only ever going to work on YOLO would also do. Yeah, I think that makes sense.

And just in terms of priorities, because, like Rick said, it's kind of the same people working on this stuff: should we get the 1559 testnet up and running before getting this YOLO 2718, assuming there aren't teams that can do it in parallel? What do Ian and Abdel think, because I would go with what they want to attack first. Well, first of all, there's the increment change to the spec, and I think that's probably the highest priority, because it's a small change that has a big impact. But then, basically, setting up a proof-of-work testnet with 1559 as it's specified right now, with the minimal changes: how long is that, is it a long process? I don't want to make the decision for you; I don't have enough information on the implementations and such. I'm just bringing up the point that there is this future bottleneck that we can get ahead of, and so we should. Right. Yeah, I guess it's not even clear to me if I would be the one doing 2718, since it is, you know, a separate EIP, but yeah, I don't know exactly how we should prioritize that. The immediate focus would be the changes just iterated. Yeah, my bias would also be towards getting the 1559 network up first.
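For context on the "increment change" just mentioned, here is a minimal sketch of the 1559 base-fee update rule as I read the draft spec. Two pieces are hedged assumptions rather than settled spec: the max(delta, 1) step, which seems to be the increment change (so a very low base fee can still climb under demand), and the MIN_BASE_FEE floor, reflecting the minimum-base-fee concern raised earlier in the call.

```python
# Sketch of the 1559 base-fee update rule under discussion. The exact
# constants, the max(delta, 1) "increment" step, and the MIN_BASE_FEE
# floor were still being debated, so treat this as illustrative only.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # per the draft: at most 12.5% change per block
MIN_BASE_FEE = 1                     # hypothetical floor; a zero base fee
                                     # might otherwise never recover

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    if gas_used == gas_target:
        return base_fee
    delta = (base_fee * abs(gas_used - gas_target)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if gas_used > gas_target:
        # "Increment change": move up by at least 1 wei, so a very low
        # base fee can still climb back up under sustained demand.
        return base_fee + max(delta, 1)
    return max(base_fee - delta, MIN_BASE_FEE)

# A block using twice its target at base fee 1000 raises it by 1000 // 8 = 125:
print(next_base_fee(1000, gas_used=20_000_000, gas_target=10_000_000))  # -> 1125
```

Without the max(delta, 1) step, a base fee of 1 wei computes delta = 0 for any demand level and can never rise again, which is the "problematic" behavior Abdel described.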
And, you know, getting that to work, because if there are bugs found there, I think it's a higher priority, at least for us, for the 1559 effort, to fix the bugs in the spec. And maybe if other people really want to push 2718, they can start working on that testnet as well. Yeah, that sounds good. I don't know how it's funded amongst the various implementers, but at Vulcanize we're definitely only working on 1559, just as a practical matter, so however other teams handle that, I think that's on a per-team basis. Yeah, and I think for us at PegaSys it's the same: we want to focus on 1559 and put the bulk of our efforts there, and obviously if 2718 ends up being a part of that, we'll support it and we'll do it, but we definitely can't be champions for it.

And I guess, with two minutes to go, the idea of funding and accelerating development was the last thing on the agenda. I know, Alexey, you mentioned that at the beginning; I don't know if you had some specific thoughts. To give some context, it seems like a lot of people would like to see 1559 happen quicker and potentially provide funding to accelerate that. My biggest question for the people here is just: what do we see as the biggest bottlenecks in terms of our execution speed, and would money actually help there? I don't know. I can stay on, like, 10 or 15 minutes if people want to chat about that, but if everyone has to drop in one minute, it's probably too big of a can of worms. We could probably make it optional. Yes, if anybody wants to stay on, we can discuss this. I'll have to go, but I'll just give my two cents. I think that, in my whole time working on this project, I don't think that developers
have been, or engineering has been, what's slow. I think it's been communicating to the community that you need to have research, like basic research, the Tim Roughgarden type stuff, and, I'm sorry, the other guy's name is French, I don't want to mangle it, that kind of work going on. I think getting people to understand that that has to be funded and has to happen is a huge milestone. And then, in a similar vein to what James was talking about earlier, someone has to go door to door and make sure that integrations happen, and that needs funding too. I think those two things getting funded is way more important, frankly. I mean, we'll figure out how to do the engineering, but from a financial standpoint, getting the community to understand that we're not incrementing a variable, we're not incrementing a constant here, we're really changing a large swath of what's going on, and that requires doing a lot more than just engineering. I think paying for that is where the money should go. And on that note, I have to go, so thanks everyone, and I'll talk to you all later. Thank you. I have to drop as well. Thank you. Thank you.

Okay, let's see who's left. Yeah, I can stay. Thomas, you can stay? Let's take a round as well; it's too late anyway. So I just wanted to discuss a tiny bit, because I know that Twitter is basically a really bad platform for trying to explain these things, and people get offended very quickly. Like yesterday, they started lashing out at each other, and it's very bad, you know, somebody said the wrong word or whatever. But what I basically picked out from the recent conversation is that there seems to be this sort of expectation that, okay, now we threw some money in, there's the Gitcoin grant, you know, you got 60 grand or however much, I don't remember, but where are the results, right? How, when?
It's not like that, actually. They basically say, like, when is this going to happen, what are the blockers, which are all reasonable questions. But yeah, we need a way to actually explain, as Rick said, what the expensive bits of it are. It reminds me of the state rent project, where eventually the reason I decided to stop doing it was that I discovered we would need a lot of these door-to-door people to do the really unexciting work of basically just having meetings with people all the time, trying to figure out who can migrate, how they migrate, and all sorts of stuff, and obviously I couldn't do that. Yeah, I don't think we can really address this here, because we don't have a lot of people from the other side of the argument. What does anybody else think?

I agree that there's a lot of communications and outreach that needs to happen. I don't think that's an impossible problem; I guess it depends how you look at it. I'm not an engineer, so for me the engineering stuff looks harder than having tons of meetings, and I guess vice versa. With regards to the community's expectations, there are two things I'm a bit anxious about. One is explaining the uncertainty of 1559. A lot of people have brought up a bunch of issues today with the EIP; to me it's still not a done deal, and I think there's a perception in the community that it's all downhill from here, and it's really not. So I think articulating that and making it clear matters, because that also translates to funding, right? If the people funding the Gitcoin grant are kind of mad that it's not moving fast enough, they'll be really mad if they find out there's a fatal flaw, and I think it's really important to manage that expectation. But they also don't, I mean, you look at the Gitcoin grant, it says 60 or whatever. How much? Yeah, I think
it's, I think it's $80,000, because the price of ether went up. Okay, $80,000, yeah, okay, because somebody gave me this figure. Okay, $80,000, that's a sort of reasonable amount of money, but then how many people can you hire with that kind of money, and for how long? When you start thinking in these terms, it actually turns out that it's not that much, and it sort of underscores the point that the Gitcoin grant is still not capable of completely funding this project. And this is what people need to understand: there's much more needed if you really want to make it happen at a decent pace, with people not getting stressed about doing ten jobs at the same time. You do need to basically splash a bit more money into this, and who is going to splash this money, that is the question.

Yeah, and some people have brought this up and reached out, and I've been trying to chat with some of them. One point, like you said, is this $80,000. So right now, how things are funded: there's this $80,000 grant, which will probably mostly go to Vulcanize, modulo some other things. At ConsenSys, we have me and Abdel and Karim that we can put part-time on this, but obviously there's an opportunity cost there; we have paying customers, and that's always a prioritization thing. The EF has people working on this pretty much full-time, so Barnabé, which is great. And then Tim Roughgarden has been paid independently by somebody else. But this means that the bulk of the work is happening with one or two people at the EF, and then one or two developers on both the Vulcanize and our side, and it will move along, but, I'm not even sure slow is the right word, it'll be not as quick as it could be. Yes, and also there is another issue here, which is that, you know,
sometimes there are certain things that need to happen before you reach full speed; sometimes you have to do some preparatory work. An example is this transaction envelope thing, right? If it had existed before, things would probably be a bit easier. So what might be interesting is to have this understanding that you not only have to wait for all the work to happen, but for all the pieces to fall into place. Like, when you launch a testnet, you cannot just put in, say, an extra hundred thousand dollars and make the testnet run ten times shorter; you actually have to wait. Yeah, and I think that's why, personally, I've been reticent about going back to All Core Devs and discussing this again, because there are just known issues with the EIP that we're still addressing, and you probably need more than one implementation to start addressing them. Like, I think it's been helpful to have Besu and Geth kind of disagree on stuff and fix those, but we definitely don't need everybody to have this at the top of their priority list, because we might find some other issues with it. So it's this weird in-between period. And also, what Thomas is asking about the timing, I know it's sort of a joke, but it's actually not a joke. Yeah, I mean, of course we should not undersell ourselves, and this applies to pretty much everybody in core development: our work does cost money, and probably costs a lot of money, so it is okay to expect to be paid for these things. I don't know exactly how, but the expectation should be there. Go ahead, James. I was just going to say that an opportunity for others to contribute funding in a
meaningful way would be to provide funding to help the other clients implement 1559, so Nethermind, OpenEthereum. The sooner those happen, the sooner the whole thing can happen, and those teams are already busy, right? All of the client teams are busy, so getting resources to implement this in all the clients would necessarily improve the timeline. Yeah, because basically what happens is that all the client teams at the moment have their own priorities, their own agendas, partly because some of them are thinking about how they're going to get money to pay their people, right? Like somebody was saying on Twitter, some of them actually have to pay out of their own pocket to make things happen, either because they have a bit of money or because they hope they're going to make something out of it in the future. But just piling on to that and expecting things to go faster, that's not going to happen. I think there has to be appreciation and respect for people, and expecting to throw in 30 grand and see things happen in a month is not realistic. If you throw in half a million, then you can probably have these much, much bigger expectations. So if someone came in and did an implementation in, say, Nethermind, separate from the team, and gave it to the Nethermind team, does that fit what you're saying or not? Let's see, I mean, it depends, and obviously I don't want to talk for too long, but it depends on the ability to do that. It also depends on the code structure and how it's organized: in some of the implementations it pretty much has to be the people who own the code making the changes; in some implementations it's easier to come from the side and propose an implementation. For example, what we are
trying to do in our implementation is to split everything up as much as possible, to allow people to come and do things on the side, but I don't know about the others. And I guess it's also worth noting that this is kind of a weird EIP, because for normal EIPs there's some team, usually not a client developer, that does a proof-of-concept implementation and brings it to All Core Devs, and then the clients all implement it, quote unquote, for free, and it gets moved up. This one is kind of weird because it's applied R&D, in a way, so there needs to be more early implementation work, and it's also a much larger change than other EIPs. And it's not clear where the boundary is between paying a third party to provide a reference implementation and paying all the clients to prioritize it on their roadmaps, and what level of alignment you need, right? Like, how do we get funds for all the clients to prioritize this, and is that the model we want, where, if there's a huge change, we basically need to pay for n implementations? And by "we", I guess the community needs to find a way to pay for these n implementations. I mean, if we wanted to eventually come to a much healthier model of development, and that goes back to what we'll be discussing again in July as well, then there has to be the expectation that for any work which needs to be done, the money has to come from somewhere. As far as I know, the core developers haven't figured out a way to finance development completely without some kind of subsidies, and therefore, for now, the subsidies have to be applied. Yeah, so I think we already know we have a couple of milestones to hit. The first one is freezing the spec; then we know that we have a YOLO testnet, and we know that
after this we want to have a proof-of-work testnet or something, and then we know that other clients will need to implement. So a good way to manage the community's expectations would be to have a reasonable, let's say, roadmap with some kind of timeline, because currently there's a lot of momentum around EIP 1559; a lot of people, even on Twitter, have said, oh, if money is the issue, we can always throw more in. A good way to say, okay, you can give us some now, but we don't really need half a million dollars thrown at it currently, is to have this sort of timeline and to say this will be useful later on. Because after four months, if people don't see improvement, and for the community, improvement is very binary, either it's on mainnet or it's not, whatever happens in between they don't really see as improvement. So yeah, having a timeline. Yeah, so the thing about this kind of drip-drip financing is also problematic, because, as I think Tim mentioned on Twitter, the reason I'm talking about throwing in half a million dollars is that if you know the money is there, then you can actually really hire somebody to do the work for a reasonable period of time. You don't have to keep everybody on sort of zero-hour contracts and the like, basically saying that anytime the money runs out, you're fired. And I think that's exactly the situation we're in with the Gitcoin grant, to be sure, right? We have a not-insignificant amount of money, but you don't know if it's going to last you three months, six months, twelve months, and obviously, in the rate at which you spend it, you want to be conservative. Yeah, that's why the model which works best is either you have a very reliable counterparty, like, say, a foundation that basically has a contract with you, or
something where they give you money as long as you don't do anything really stupid, or you have a pool of money in front of you. And, yeah, back to Barnabé's point, what I find the hardest is, for those intermediate milestones, like setting up a proof-of-work testnet and whatnot, what's the right amount of funding? Should we fund the client implementations for all the clients to join the testnet, which might not work, right? And if not, how many do we fund, and how do we choose them? I mean, you don't need to be so fine-grained; this project is not so huge that you have to be obsessing about fine-grained details. So what you could do is simply say, okay, for this to happen, we need, and you choose the implementations, let's say Go Ethereum, Nethermind, OpenEthereum, whatever, choose them, and say everybody should get one developer on each team implementing this and making sure that everything works, for however many months, and that's basically it. A really rough idea, but then you know that in each team there is one person doing this job, and of course they can do some other things at the same time, you cannot stop that, but at least the money is allocated and you know that it's there. That's what I would say. If this project were for five million dollars, then of course you'd have to have extra scrutiny about where exactly the money is going, but even if it's half a million, I don't see the point of obsessing over the details. And I guess Justine has a comment around the precise funding needs for fast-tracking this. I'm not sure there's a way to fast-track it; it seems, I guess, I don't know, that past, like, one developer per team there might be diminishing returns, and I'm not sure if
the funding actually fast-tracks things or just puts them at a normal pace. Yeah, the things that would fast-track it would be funding the community outreach stuff, so that there is a group of people ready to go out and make sure people are adopting it. Another thing would be having bounties, for example for forking MetaMask and implementing 1559 transaction support. You just put up a list of the major things and say, hey, anyone that implements 1559 in it gets this bounty. And then the last one would be, after this round of R&D that's happening for the next month or so, when it turns into all the clients needing to implement, giving them support through whatever channel works for each client team. That's kind of the last piece. I mean, I don't know if there's an issue with the bounties or something like that, but I'd just like to simplify this, because essentially, if you say that there is money to pay, let's say, one developer in each team for six months, that's it, you know, and this developer is supposed to be doing EIP 1559. If they're not coding all the time, they can do other things: testing, more testing, spinning up testnets, talking to each other, whatever they can do to make sure that it happens, really. And if they're doing something else, as I said, that's fine as well; would you really be upset if they spent some of that time improving the performance of the client too? And I think, at least from our perspective, there's value in saying one developer is paid to do this, and it can help prioritize 1559 above other stuff on a day-to-day basis. Obviously I'd be curious to chat with other client teams about that, and I'm happy to take that action item, to reach out. Thomas, I know you have a bunch of comments in the chat
and I know you can't talk right now, but I can set up a call, maybe with Nethermind, and I don't know, Alexey, if you want TurboGeth to be part of this, to talk with the different client teams, see what's a reasonable amount of engineering work, and whether we can get one person on each client team to put this at the top of their priority list, and then the other, more ops-type work as well, and what a rough amount for that would be. Okay. Somebody needs to save the chat before we go. It'll save; I'm recording on Zoom, so it'll save. Okay, I'll make sure I send it to Griffin, since Griffin does a transcript based on the Zoom recording, so I'll send him the chat as well. Okay, cool. I'm also doing detailed notes for it, and I have the chat saved with me. Awesome. Yeah, so I'm happy to take that as an action item, to follow up with the different client teams next week and see what we think makes sense. Okay. Anything else anyone wanted to discuss? I think that's pretty good. Yeah, sorry, go ahead. Just one last thing I wanted to throw out: at the beginning of the call we were discussing having transparency on the funds, so I have just created a sheet for reference, and we'll be sharing it with people who are interested. All outgoing transactions will be recorded there. Okay, let me just share my screen so it's in the recording. So we can see here, yeah, like we said, the only two transactions so far have been for Vulcanize. Cool, anything else? Okay, then, yeah, I think there's a pretty huge amount of work to get done in the next couple weeks: some changes to the spec, trying to get a testnet up which works, following up on the conversations around 2718, some R&D work, and then the whole funding discussion. That's a lot, and I think
it makes sense to probably have another one of these calls in a month to give an update on all those things. Okay, thank you very much. Bye. Of course. Thanks, everybody. Thank you. Thanks.