And we are recording. So welcome everyone to 1559 call number eight. We probably have a lighter agenda than last time, since there's only been two weeks between the calls rather than a month, so we might not last the whole time. First thing on the agenda was just status updates from the different research and implementer teams, and the first one there was transaction pool management. Ansgar, did you have any updates you wanted to share? Yeah, I can give a very brief update. So basically, after the preliminary write-up from last time, we took a step back. I'm the main one on the team working on this, but we also had a call with Martin from Geth earlier this week, and we went through some of the details of the current Geth implementation just to check if our assumptions were good — sorry, I think I have some camera issues. And it seems like we were actually a little bit off on some of the small details. For example, it turns out that for the miner, there's already some significant rebuilding of the sorting going on after every single block. So it seems like it's less of an issue now for 1559 to just also use a similar mechanism. So it's probably a little bit more optimistic, a little bit simpler, than it looked two weeks ago. Going forward, if the next call only ends up being in a month or so, I would hope to have a full write-up done by then, including maybe some simulation work, on a proposal for how to do the sorting. And I think last time you were saying it all looks like this should be very doable? I would preliminarily agree — some details are still to be determined, but sorting should not be an issue going forward. That's really good news. Yeah, thanks.
Yeah, and it's great that the Geth code base already resorts every block for miners. That's really good news. So with eviction — what are your current thoughts there, Ansgar? Sure. So I mentioned the mining side because that's where it seems like we were a little bit off on the exact details of Geth. Although to be fair, we talked with Martin, and he's more of an expert on the mempool side of things; we still want to reach out to the people within the Geth team who are more responsible for the mining side. But I'm reasonably confident that this is now correct. And on the eviction side, basically the only change so far is that I'm a little bit more optimistic about just using a very simple heuristic that might be fairly inaccurate, but just precise enough. Because for mining, it's really just important that the top end of the mempool stays very precise, and if at the low end there's some eviction that's not ideal ordering-wise, that doesn't really matter. So you can get away with very efficient implementations, like only resorting once every so often and so on. So I think the main focus on the eviction side will really be testing it under a huge variety of different base fee behaviors, just to make sure that under all of those it hits some minimum stability. What are some heuristics you were thinking of? Well, basically something that's a very simple way of calculating an expected future effective miner tip. So for example, you take the current base fee, and then the mempool keeps track of the variability there within, I don't know, the last 24 hours or something — there could be different approaches.
But then you just do something very simple: for example, 50% weight on the current base fee, plus 25% on one sigma above and 25% on one sigma below, basically just to give you some idea. As simple as that sounds, it might already be good enough — it's not just the current base fee, you also take into account a higher one and a lower one. And then you update that maybe just once every so-and-so-many blocks. As long as you can do that efficiently, and as long as it's good enough that on the high end your sorting is still absolutely, perfectly precise. Right now I'm not confident enough that this would work already, but that would currently be one of my candidates for a very simple heuristic that might be efficient but still good enough, if that makes sense. Yeah, that does make sense. Cool. Anything else on transaction pool management? Okay. Next up, Abdel, do you want to give a quick update on the large-state testnet generator and where we're at? Okay, yes. So we have currently set up the new testnet. This will be a proof-of-work testnet, and the goal is to have a state comparable to mainnet. So far we have generated 100 million accounts, and we are now using a smart contract and aim to generate 100 million storage entries in this smart contract. When this is ready, we will share the URLs of the different nodes, the block explorer, et cetera, so that other clients can sync to this new testnet. And that's pretty much it — we have four nodes running, the generator is still running, and I will share everything when it's ready. Cool.
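The eviction heuristic Ansgar sketched a moment ago — blending the current base fee with one-sigma-above and one-sigma-below scenarios, then scoring transactions by their expected effective miner tip — could look roughly like the sketch below. None of this is from an actual client implementation: the 50/25/25 weights come straight from the call, but the function names, the dict-based transaction format, and the `eviction_score` shape are illustrative assumptions.

```python
# Illustrative sketch (not from any client) of the eviction heuristic
# discussed on the call: estimate a likely future base fee, score each
# transaction by its expected effective miner tip at that base fee, and
# evict the lowest-scoring transactions when the pool is over capacity.

import statistics

def estimate_future_base_fee(current_base_fee, recent_base_fees):
    """Blend the current base fee with one-sigma-above and one-sigma-below
    scenarios, using the hypothetical 50/25/25 weights from the call."""
    sigma = statistics.pstdev(recent_base_fees) if len(recent_base_fees) > 1 else 0.0
    high = current_base_fee + sigma
    low = max(current_base_fee - sigma, 0.0)
    return 0.50 * current_base_fee + 0.25 * high + 0.25 * low

def eviction_score(tx, expected_base_fee):
    """Expected effective tip under EIP-1559 semantics:
    min(max_priority_fee, max_fee - base_fee).  A negative score means the
    transaction would not even be includable at the expected base fee."""
    return min(tx["max_priority_fee"], tx["max_fee"] - expected_base_fee)

def evict(pool, expected_base_fee, capacity):
    """Keep the `capacity` best transactions; the rest are evicted.  The
    ordering only needs to be precise at the top, so this can run rarely."""
    ranked = sorted(pool, key=lambda tx: eviction_score(tx, expected_base_fee),
                    reverse=True)
    return ranked[:capacity]
```

Note how imprecision at the bottom of the ranking is harmless here, which is exactly why a cheap, rarely-updated estimate may be good enough for eviction even though it would not be for block building.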
Yeah, and I think once we have the testnet up and running on Besu, and then we get Nethermind and Geth syncing to it, we should probably schedule a time to spam it with a ton of transactions and gather metrics from all three clients. Hopefully we can gather metrics and it doesn't just fall over — but if it falls over, we fix it and try again. If we have at least one or a few shots at saying, look, we spammed the testnet for two hours with transactions and the nodes stood up, I think that's more than the worst case we'd see on mainnet, because in two hours the base fee would probably go up 100,000x or a million x, and it's just not realistic to even do such an attack. Yeah. Hi, it's Rami, I just joined — sorry for being late. No worries. Any thoughts or comments on the testnet? No, but I think it would be important to make sure that every single node is publishing transactions from its own respective transaction pool, so we know not only that each client can consume transactions, but also that every single one can generate them. Yeah, we set a very low difficulty so everyone can be a miner on this testnet. Oh no, not just the mining thing — also that each client can be a source of broadcast for the transactions. Yeah, that makes sense. And I think the tool, Abdel — it's client-agnostic, right? Yeah, yeah, you can use it on every client. Cool, so maybe when we schedule things, every client can spam the network. Yeah. We've been using your tool already for spamming the network when we were working with the Besu network on this current solution. We were pushing transactions to test that problem we reported before, and it's all fine and we can broadcast. Nice, nice.
And I will also update the web frontend to add the list of the different nodes and the type of each node. If you give me URLs of Nethermind nodes, I will add them to the frontend so that users can choose which node to send their transactions to. Perfect, yeah. Cool, so I suspect we'll probably have the testnet filled up sometime over the holiday. So early January we should be able to share the information, and then it might take a week or something for people to sync to the testnet because it's big. And then sometime in January we can probably run this kind of spamming test. Cool, any other thoughts or questions on that? Next thing on the agenda — I think I just copied this over by mistake, but on EIP-2718, I think we should wait until after this testnet work is done and then add 2718 support to all the specs. It won't change anything for performance, but at least we'll get the actual testnet data before we have everybody changing their specs. Does that still make sense for people? Yeah, sure. We'd still want to run some transaction spamming after adding it, but we should just adjust the formalities. Yeah, cool. Do people here have a preference or feelings about SSZ versus RLP? Since it's almost certainly going to come up again. No preference. Sorry, I couldn't find the mute button — the only thing is that on the last All Core Devs, we talked about maybe doing SSZ at the DevP2P layer first and then bringing it to consensus. So I don't have a strong opinion, but I wouldn't want to go against the rest of All Core Devs by having SSZ in 1559. Yeah, if that's going to be a blocker. Oh, actually, sorry — as for EIP-1559, obviously I would like to keep it as separate as possible from other things we are adding.
So generally, adding 2718, and adding SSZ — adding SSZ itself may add something like two months' delay to 1559. So maybe for that reason I have a strong preference for RLP. I saw that you were talking about 2718 in general, but if 2718 is bundled with 1559 — like it has to be with 1559 — then I have a strong preference for RLP. And generally I have a preference not to bundle EIP-1559 with 2718. Wait, you want 1559 without 2718? Ideally, yes. Even though 2718 is going in Berlin? As I say, I would like to keep EIP-1559 separate from the Berlin discussion. The Berlin discussion can be delayed massively — I mean, it keeps being delayed. I want to keep EIP-1559 totally separate, if possible. Obviously, if you have 2718 already deployed, then it would be no problem. Yeah, okay. Okay, yeah. So I'm like 98% sure 2718 is going to make it into Berlin, which means we'll want 1559 to be a 2718 transaction type. And if 1559 is 2718, then we have to decide whether we're going to do what we said on core devs, which is talk about SSZ again after Berlin — and that means that discussion is going to be around 1559. Generally, I target EIP-1559 before Berlin. And in this conversation I would keep it so we don't have to think about Berlin, because Berlin might be delayed. I mean, I see what's happening there: on every core dev call we've recently been adding one or two issues that are highly contentious. SSZ versus RLP will take time. EIP-2718 will take time. People are not on board, and they don't feel like there's so much of a push for Berlin. So I would just keep it separate. On the all core devs calls, I'll be pushing for Berlin to happen as fast as possible; on the EIP-1559 calls, I would aim at pushing for EIP-1559 to happen as fast as possible. And with both of those attempts successful, we can come together very happily with everything in place, but I wouldn't like them to wait for each other. I see.
Yeah, I think that makes sense. I suspect that we're coming towards the end of Berlin, and there's a very high probability that it's ready to ship before 1559. That might be wrong, but assuming it's not, I think the path of least resistance is adding 2718 for 1559 — because we'll already have 2718 in the code bases to handle 2930 — and not doing SSZ. Because for SSZ, Peter from Geth's point last call was: we should probably do it on DevP2P first. We're going to find bugs; if we do it at the networking layer, we'll fix those bugs; that'll take six to nine months. And then maybe once that's done, we're ready to actually move it into the protocol, the consensus layer. And I think that's fine. And if for some reason there's a decision made on all core devs that we switch everything to SSZ now, then we'll have to do it for 1559. But I don't want to take the path that's opposed to all core devs — if everybody's switching to SSZ, you don't want to be the one on RLP slowing things down, and vice versa, I don't want to slow 1559 down because of it. Yeah. I mean, when I say I have no preference for SSZ versus RLP, it's because I know that we already have it, right? But I also know that implementing SSZ, understanding it, and testing it took me proper time — it was not a trivial task. And I think there's no chance it would be faster than a few weeks on the Geth side to properly test it and implement it into the code base, even if you have libraries for it. Unless — maybe I'm not thinking about the fact that Prysm has a core library for SSZ, and it might be more general. Our approach was a bit more optimized and not so reusable, so it might be that the Prysm library is very reusable. Matt, I was under the impression that you were working with the Geth team on 2718.
Could you maybe give a very brief summary of what your take is — what's the timeline there, how does that side look? I mean, I feel like Berlin is locked in. It's just a matter of getting the clients tested and deciding on a fork block. I really think this is going to happen in the next three months, which I believe is still going to be far ahead of 1559. I think the original point of the question, though, was just that we decided to go with RLP over SSZ for the Berlin hard fork, and this discussion is going to come back up after Berlin, because it is a desirable thing to have SSZ at the protocol level. And unfortunately, the more things we add that rely on RLP, the more complicated it may be to do SSZ at some point. How locked-in would a decision on the 1559 side for RLP versus SSZ be? Let's say we go with RLP for now, but then it turns out that mainnet will already want 1559 to arrive with SSZ instead — how much delay would that cost on the 1559 side? Yeah, I don't really have a good feel for what it would take to transition 1559 from RLP to SSZ. I think that generally we have complete control over these things in the protocol. But one thing that we are introducing as an external API is how we sign 1559 transactions. So if we sign those with RLP, and switching to SSZ means we also want to change how we sign, that's something that would be very difficult to change, because we would already have wallets and external providers adopting that signing mechanism. So when you say SSZ, you mean even serializing transaction objects with SSZ? Yeah, serializing transaction objects and using the Merkleization functionality to create the roots in the block. Oh, that's a very big change. It's a big change because it affects all the third-party components, all the smart contracts. This would be a massive delay if we go for it.
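For context on what "signing with RLP" means here: under EIP-2718, a typed transaction is just `TransactionType || TransactionPayload`, and for 1559 the payload is an RLP-encoded list of the transaction fields. Below is a minimal, self-contained sketch of that envelope. The tiny RLP encoder is hand-rolled purely for illustration (real clients use their own battle-tested implementations), and the field list follows the eventually-finalized EIP-1559 layout, which may differ from the draft spec under discussion on this call.

```python
# Sketch of the EIP-2718 typed-transaction envelope for an EIP-1559
# transaction: a one-byte type (0x02) followed by an RLP-encoded payload.
# The RLP encoder below is a minimal illustration, not production code.

def rlp_encode(item):
    """Tiny RLP encoder covering only the cases needed here:
    non-negative ints, bytes, and (nested) lists."""
    if isinstance(item, int):
        item = b"" if item == 0 else item.to_bytes((item.bit_length() + 7) // 8, "big")
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # single low byte encodes as itself
        return _length_prefix(len(item), 0x80) + item
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _length_prefix(len(payload), 0xC0) + payload
    raise TypeError(f"cannot RLP-encode {item!r}")

def _length_prefix(length, offset):
    if length < 56:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

def encode_1559_tx(chain_id, nonce, max_priority_fee, max_fee, gas_limit,
                   to, value, data, access_list):
    """EIP-2718 envelope: TransactionType || TransactionPayload.
    Field order follows the final EIP-1559 layout (unsigned form)."""
    payload = rlp_encode([chain_id, nonce, max_priority_fee, max_fee,
                          gas_limit, to, value, data, access_list])
    return bytes([0x02]) + payload           # 0x02 = EIP-1559 transaction type
```

The point made on the call follows directly from this shape: the signature is computed over a hash of this byte string, so wallets and tooling bake the serialization format into everything they sign, and swapping RLP for SSZ later would change those bytes.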
And, you know, now we already have a method of expressing EIP-1559 transactions in a traditional sense, with just two additional fields, and I feel like that change is far from significant for adapting the current processes. So yeah, SSZ would be the one thing that would probably delay EIP-1559 the most from the current state of things. Yeah, I would very much agree, in the sense that I think 1559 should not be taken as an opportunity to also push ahead other changes together with it. There's no reason why 1559 should also try to push ahead SSZ, right? If it's ready and SSZ is not, then of course that shouldn't be part of it. We are in the unfortunate position of having to make preliminary decisions based on what we assume mainnet will expect from 1559 once it's ready. So I think we basically have to try to keep the effort minimal that it would take to walk back on this decision either way, if it turns out we made incorrect assumptions. For example, for me right now the maximum-likelihood situation seems to be that we arrive after 2718 but before SSZ. But of course it could go either way — we could arrive before even 2718 (seems unlikely, but possible), or even after SSZ. And in all cases, if that delays 1559 by months, that's not a good place to be. So I unfortunately don't really have a good solution, but that's important to keep in mind. The reason this keeps coming up is because, while I agree with — I forget who just said it — that it's better not to bundle things, to get SSZ in first and then switch transactions over to it: historically, that has never worked with Ethereum, that I know of. The problem is that there is a subset of the core devs who do not like including changes unless they are needed for something.
And so if we add SSZ before SSZ is needed, there's a good chance that means we will never get SSZ in. And SSZ by itself gives us a lot of big wins down the road, which we'd like to have, but we can't get it in until we have something to put it in with. So no matter what, I think that if we want SSZ to go in eventually, it has to bundle with something. And to stress the point that lightclient made: the longer we wait, the more painful it will be to do that, because we'll have more and more stuff, particularly more and more things being signed by third-party tools. And I guess that's kind of a core devs discussion, though, because there's not just 1559 involved here, right? For example, the account structure is the other one. So there's this meta-problem of: where's the line for SSZ? Where do we want it to be? And for sure, wherever we draw that line is going to slow down every other feature by, I don't know, call it three months optimistically. But I don't think we can do much at the 1559 level to change that. We can say, on the Core Devs call, this is the stuff we maybe want before, or this is the stuff we absolutely don't want before, because it'll be such a big piece of technical debt to deal with that it's not worth it. But it just feels like there are so many things coming in that might touch this, that we probably want a higher-level solution than just "do we do 1559, or this EIP, with SSZ or not?" Yeah, I'm okay with not discussing it here. The gist I got is that people are very hesitant about anything that will delay 1559, and I generally share that sentiment and appreciate it. That's just my perspective from trying to deal with some of these things.
And I know this is just coming from an opinion place, not from a knowledge or fact place, but there probably will be something that needs to do what you're saying, Micah — it just probably shouldn't be 1559. To do what exactly? To be the "we need SSZ for this" case. There'll probably be something where we kind of have to decide that we need it, even though technically we could maybe not need it — the EIP to bundle it with. Yeah, but 1559 shouldn't be the one to do that. Though it is a good idea to be thinking about what that should be down the line. Well, let's keep that on all core devs. Here our goal should be to deliver EIP-1559, and I think the best thing we can do is ensure that it's ready and tested for any of those scenarios — whether it's with 2718 or without, whether it's with SSZ or without. We have one solution that we're testing without those changes, and it means we can carry it all the way to the end, where we have a ready, tested spec with all the clients saying: we are capable of handling EIP-1559, and this is the spec we are working with. So the people who build tools can already start adjusting their tools. And we can also show them the alternative paths: here is the simple path now, this is how you have to adjust your tools, and these are the alternatives — pay attention to what happens with 2718, pay attention to what happens with SSZ, because maybe you'll have to adjust those tools a bit more depending on whether those go in before. But people will be more prepared; they'll already be looking at it and implementing a first version. And I think overall everything will go faster together. Yeah, that makes sense. So, tabling the encoding discussion for now. Barnabé, Tomasz, and anyone else — or Rami — do any of you have anything you wanted to share? So — since I think we're planning the next call for the 14th —
I spoke to Michael a lot recently, and he was working on this analysis of potential attack scenarios — not really attacks on the network in general, but attacks where you slightly manipulate the base fee. This is because we are exploring the cost of manipulating the markets once you introduce gas markets into the equation. He has lots of results already calculated with various different network parameters, but he wasn't ready to share them today; he's very confident about sharing on the 14th. So we'll be able to look at this Jupyter notebook, the numbers and all the charts, and see how it actually behaves when you want to push the prices down or push the prices up. That's really cool — looking forward to seeing that. I would like to share an update about the pull request in Geth. We reviewed it, with the comments from Abdelhamid, and we are going to start working on it on Monday. Cool, that's great. And I think once those are addressed, it might make sense to get a more thorough review from the Geth team. I know the Quilt team has shared it with them, but I think once we have the code in a spot where it's up to date with the latest spec, it'll be valuable to get their thoughts. And one thing — Joseph, I believe you shared this with me — was that the Geth team would like to see it split up between the consensus changes and everything else. Was that right? Yeah, 1559 could be phased in two pieces: one with just the consensus changes, and then a second one with the mempool changes and other non-consensus changes. That was a suggestion from Martin. Yeah, just to clarify — and of course that's what you were saying as well, but just to clarify.
So it's not, of course, about an actual two-phase rollout — it really is just a logical split into two PRs. They would still have to arrive at the same time and depend on each other. I guess — what's the best approach for you? Does it make sense to do that now? Do you want to rebase, or do you want to address all the spec-level comments first? Yeah, I think whatever you think is best. So we will update to the latest spec version first, and then we can look at splitting it into two PRs. Actually, it's not 100% clear to me right now how to implement that split, but I think we can discuss it later on the chat. Cool, that makes sense. Anyone else have updates you wanted to share? I just shared in the chat a paper that my co-author presented at a workshop recently. It's very preliminary work, but it's looking at 1559 as a dynamical system — trying to get some ideas on how fast it converges, and what guarantees, let's say, we can find. And perhaps using that as a springboard to look at the more control-theoretic questions of, well, how fast should the updates happen? I know, Tim, you've sent out a call to people who might be interested, and I think this work might be interesting to them as well. And what I discussed two weeks ago — a follow-up to Michael's notebook on the transition — I have a pretty final draft, just going through the last review, and I'll be ready to share it either at the end of this week or next week. Nice, cool. Anyone else have updates? If not, I'll just share my screen real quick to go over the checklist, but I think we've covered a lot of it already. So, at a high level, in terms of implementations, the same teams are working on it. For Open Ethereum, it's worth noting that they have a job posting out to hire somebody full-time to work on 1559.
So if you're a Rust developer and you're interested in working on 1559, please apply — it's a posting through Gnosis, but to work directly on Open Ethereum. Aside from that, in terms of the open issues: denial-of-service risk. Actually, I've been thinking about the DoS risk more, and I suspect that 1559 might make things better, not worse. One of the reasons is that today, if you just spam the network, your cost for doing so is constant, and if you're a miner deciding to DoS the network, you can include your own transactions in your blocks basically for free. Whereas under 1559, what's nice is that even if you were to spam the network and aim to not increase the base fee, keeping blocks at just a hundred percent full, the rest of the demand for the network will mean the base fee increases — and that means your attack gets more expensive over time, which is a property we don't have today. And it also blocks that whole avenue of miners being able to DoS the network for free. So coupled with stuff like 2929, and clients generally becoming more resilient to large state, I think it's not as big of a risk as it might have been thought to be. So yeah, just a quick update there — it's not formalized at all, it's just my intuition of how it would play out. Yeah, so Tim, that's exactly what Michael was working on: he was analyzing the cost of attack when you want to spam the network and make the blocks full by just publishing very expensive transactions. And obviously, very quickly, all the rest of the network stops including their transactions, and you're the only one who has to pay for it. So just to share some of the results we've seen: raising the base fee from 50 to 500 required some pretty solid participation from the miners, at levels of around 40 to 48% of total mining capacity.
And that was with success ratios in the range of about 0.1 to 0.2 — so a 20% chance of success, at a cost of around half a million dollars, for a 10x increase in gas prices. So pushing it up was quite inefficient, quite expensive. But it would also be great to see how the network behaves if miners do not participate in this kind of attack and people just push the transactions. Yeah, looking forward to seeing that. I think this was, for me, potentially the biggest showstopper for 1559, and I feel like we're heading to a spot where it's not a major issue anymore, which is great. Transaction pool management — we already covered this; we're working on the solution, and I think we should be good there. The base fee update rule — like Barnabé just said, I've been reaching out to different people to see if we can improve on it. I don't think this is a blocker for 1559. Worst case, we just ship it with the current update rule, and if it takes a year for somebody to come up with something better, we update it in a future hard fork, or when we go on to eth2 — but it's not a blocker. In terms of testing, we haven't made a ton of progress, but I think it'll resume once 1559 is in more of the all core devs process rather than this side track. And we wrote — I say we, but Abdel wrote — a couple of EIPs for the JSON-RPC spec, and there's more to do. It's not rocket science, it's just work we have to do, but I don't think there's a ton of value in doing it now, given how early it is. And in terms of testnets, basically I think we're combining the last two things we haven't tested into one: a multi-client proof-of-work testnet and a large-state testnet. If we can get the two of these done, that'll be great.
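To make the earlier intuition concrete — that under the current update rule, keeping blocks full compounds the base fee by up to 12.5% per block, so sustained spam gets more expensive every block — here's a rough simulation. The update rule is a simplified form of the EIP-1559 formula (the spec adds guards such as a minimum increase of 1 wei); the attack model, where blocks are held at twice the gas target and the attacker pays every base fee, is an illustrative simplification, not the analysis Michael is preparing.

```python
# Rough sketch of why sustained spam gets more expensive under EIP-1559:
# while blocks stay above target, the base fee compounds each block, so
# the per-block cost of keeping blocks full keeps rising.  Numbers are
# illustrative only.

def next_base_fee(base_fee, gas_used, gas_target, denominator=8):
    """Simplified EIP-1559 update rule: move the base fee by up to
    1/denominator (12.5%) per block depending on how full the block was."""
    delta = base_fee * (gas_used - gas_target) // gas_target // denominator
    return base_fee + delta

def spam_cost(blocks, base_fee, gas_target):
    """Cumulative base-fee cost of keeping blocks 100% full (2x target)."""
    total = 0
    for _ in range(blocks):
        gas_used = 2 * gas_target          # attacker fills blocks to the cap
        total += gas_used * base_fee       # base fee is burned, not refunded
        base_fee = next_base_fee(base_fee, gas_used, gas_target)
    return total, base_fee
```

With consistently full blocks, the base fee grows roughly 1.125x per block — on the order of a thousandfold in an hour's worth of blocks — which is the dynamic behind the "100,000x in two hours" remark earlier in the call.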
And it feels like, in terms of R&D, there's a lot more stuff that's going to be coming, but I feel like Tim Roughgarden's analysis was the last big blocker we had. At this point, I'm pretty confident we've done more analysis of 1559 than probably any change that's gone onto the network, and, modulo some small issues, everything seems pretty positive. And finally, in terms of community outreach, we've been a bit slow on doing another round of feedback. Personally, I would do a more aggressive round of reaching out to projects once we have another testnet that's more usable, that we can point people to, with some documentation. Because in the meantime, it feels like the main thing people were asking us on these calls was: when can I try it out, how can I try it out? So I would just wait another few months until we have something a bit more stable to share. And yeah, that was the last thing on the agenda. For the next call, I tentatively put January 14th, because we have an all core devs call the week of the 8th, so it's the off week from that. Does that generally make sense for people? Yes. Cool. Anything else anyone wanted to discuss or bring up? Okay. Well, thanks for making the time, everybody. Thanks. Bye. Bye. Thank you. Bye. Cheers.