Okay, so hey everyone, welcome to the seventh EIP-4844 implementers' call. A bunch of things to cover today, as usual: testing, specs, devnets. To start, though, George is going to give us an update on the large-block testing because he has to hop off. George, do you want to recap what happened in the past week or so? Yeah, thank you, Tim. So the update from last week is that previously we were blocked on providing transactions bigger than 128 kilobytes because of the tx pool limit. We got plugged into the Flashbots builder, which itself is connected to a good amount of relays, who are in turn plugged into a good amount of hashrate on Goerli, which let us submit bundles that had one-megabyte transactions. So we got a few blocks with one-megabyte transactions, and we've got a bunch of blocks with 10 or more 128-kilobyte transactions, and we can do it reliably. So now we're at the point where, over the weekend, we got some blocks that were bigger, and some people were monitoring their networks, etc. The next step is for us to improve our bin packing for the bundle, i.e. optimally combine as many transactions as possible (like one megabyte plus one 512 kilobytes plus one 256 kilobytes plus one 100, whatever) to basically make the block as big as possible. So that's the one thing that we need to improve to make the benchmark better. And the second thing that we'd like to improve is how reliably we can get multiple bundles included in a row, because we get outbid by others, and I just haven't figured out yet why we're getting outbid. So if there's more hashrate that we can get to, if there's more stake that we can get inside the system to make bundle inclusion more reliable, that would be really nice. So that's the progress update.
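The bin packing described above can be sketched with a simple first-fit-decreasing heuristic: take the largest transactions first and keep adding whatever still fits under the block size budget. This is a minimal illustration, not the actual tooling; the function name, the pool, and the budget are all made up for the example.

```python
# First-fit-decreasing sketch of the bundle bin packing described above:
# greedily pack big call-data transactions so the total payload gets as
# close to the block size budget as possible.

def pack_bundle(tx_sizes, budget):
    """Return (chosen sizes, total bytes used), largest transactions first."""
    chosen, used = [], 0
    for size in sorted(tx_sizes, reverse=True):
        if used + size <= budget:
            chosen.append(size)
            used += size
    return chosen, used

KiB = 1024
# Hypothetical pool of large call-data transactions, and a ~1.9 MiB budget
pool = [1024 * KiB, 512 * KiB, 256 * KiB, 128 * KiB, 128 * KiB, 100 * KiB]
txs, total = pack_bundle(pool, budget=1900 * KiB)
```

First-fit-decreasing is not optimal in general, but for a handful of power-of-two-ish sizes it gets very close to the budget, which is all the benchmark needs.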
Another thing that we'd like to do is to start running this on the 4844 devnets proper, because the code path might be different and, more importantly, because there's this mempool verification, a bunch of new verification code for KZG, which we're obviously not touching right now with the calldata benchmarks. So, TL;DR, the benchmark has been done at a small scale, we're trying to do it at a bigger scale, and we'd like to try it out for 4844 as well in the near future. Happy to answer any questions or brainstorm or discuss if there's anything there. So, we have a third of the network running MEV-Boost on Goerli. Do we know another big chunk of validators who are not currently running MEV-Boost? I know client teams have a fair amount of validators, and I assume the ones who are involved in this are running it, but does anyone have an idea of who we could reach out to to add a bunch more? So, all the client teams together should be roughly 80 or 90% of Goerli. Definitely not all the client teams are running it. As far as I know, just the EF, and if I had to guess it would be Prysm running it, but I don't know about the others. Teku is running 5k validators on Goerli, but we're using Flashbots. So the bids were coming from the Flashbots relay on Goerli? Yeah. And I think that's what we're using for this, so you would already be part of it. So we were part of it. But if I understand correctly, I think Teku has more than 5k validators, so there should be more that aren't running it. Yeah, but not all are running under MEV-Boost. I think there's no need for us to go back and forth on this necessarily, but if you know more people that we could put on Goerli and MEV-Boost, that would be fantastic. So, I just messaged the team: I've been bidding with 0.1 ETH, and the bids don't need to be more than that. I looked at all the historical bids on mainnet, and anytime...
Yeah, anytime I get outbid, it's because somebody just looks at my bid and outbids me. So it's not a static system; other people are also trying to get their bundles in because they're screwing around. But yeah, if people want to send me a bunch of Goerli ETH, I'd be happy to take it. So right now we're only getting these big blocks in through MEV-Boost? Yeah, only MEV-Boost. MEV-Boost generically, or do they have to be connected to Flashbots or a particular relay? I'm submitting to anything that conforms to the eth_sendBundle API. Okay, so if we ran an analogous experiment on mainnet, we would likely be able to hit a fairly large amount, since something like greater than 60% is connected to MEV-Boost. Of course, of course, yeah. Yeah, this is designed so that it's kind of a one-click change for mainnet, and that's why it's important to me that I get the bidding algorithm and the bin-packing algorithm proper: I don't want to be overpaying for all my blocks on mainnet. Okay, yeah, and we can look offline if there are clusters of 40 validators that we're aware of that we can reach out to. The main ask from the group is to just make sure that people's metrics APIs are up, and if anyone is blocked on this or anything else, let us know, because I think by the end of this week we'll have this production-ready, or whatever you want to call it, and then we want to start hammering reliably, all the time. Got it. I would like to run this for, like, 10 minutes, for example; I don't want to run this for just five blocks. Pari and others, do we have initial metrics insights? Does it seem like we're gathering the data that we need? Yeah, that's actually one of the asks. Andrew shared a couple of dashboards earlier today.
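For reference, a bundle submission to an endpoint conforming to the eth_sendBundle API mentioned above is a plain JSON-RPC request. The sketch below only builds the request body; the raw transactions and target block are placeholders, and note that the real Flashbots relay additionally expects an X-Flashbots-Signature header, which is omitted here.

```python
import json

# Sketch of an eth_sendBundle JSON-RPC request body (illustrative values).

def make_bundle_request(signed_txs, target_block):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_sendBundle",
        "params": [{
            "txs": signed_txs,                 # raw signed txs, 0x-prefixed
            "blockNumber": hex(target_block),  # block targeted for inclusion
        }],
    })

payload = make_bundle_request(["0xf86b..."], target_block=8_000_000)
```

Because the bundle targets a single block number, getting multiple bundles in a row means re-submitting (and re-bidding) for each successive block, which is where the outbidding problem above comes in.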
If you guys could have a look and let us know if something more needs to be added, then we can work on that over the next week. But right now, based on what we've seen, there's been no cause for concern: nothing changed, nothing was unexpected. To be clear, I don't think that we should be using the data that we got so far as signal, because we haven't really run anything serious yet, right? Yeah, we just need to know that we're getting the data. Yeah, of course, just making explicit that the fact that we got, you know, three one-megabyte or two-megabyte blocks with one-megabyte transactions over the weekend doesn't really mean anything yet. Yeah. And I think we got some of these metrics dashboards, thank you, Pari. And Yannick's attestation analysis that he just put out uses some of those tools; it's all on-chain data that might be really valuable as well. I just shared the blog post. Thank you. Sweet. Anything else on this? Okay, we had a couple of spec PRs and issues from last time, and new ones that we want to cover real quick. First one: dapplion had this PR, 3141. I don't think he's on the call. And Danny, I think you literally just commented on it 30 minutes ago; anything we should discuss here about it? So, from my understanding, I think there's generally agreement to not allow this to grow unbounded in times of non-finality, one to reduce complexity and two to bound the load that's happening here. I think the question really becomes: in an unbounded non-finality period, if you're trying to reorg to something that you don't know is available beyond the 18-day (or whatever the prune depth is) window, do you say is_data_available is true, or do you say it's false? There are certainly some edge-case attack scenarios to consider here, and also some UX around what the recovery modes look like if your client was offline.
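The policy question above can be made concrete: once a block is older than the blob retention window, the sidecar may have been legitimately pruned, so the availability check has to pick a convention for that case. MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS is the actual spec constant (4096 epochs, roughly 18 days); everything else in this sketch, including the flag name, is a hypothetical shape, not spec code.

```python
# Illustrative sketch of the is_data_available policy choice discussed
# above, NOT the spec function itself.

MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS = 4096  # ~18 days of epochs

def is_data_available(current_epoch, block_epoch, have_sidecar,
                      assume_old_is_available=True):
    age = current_epoch - block_epoch
    if age > MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS:
        # Beyond the retention window: the sidecar may be pruned even for
        # a perfectly valid block, so a convention is needed either way.
        return assume_old_is_available
    # Inside the window we can actually check for the sidecar.
    return have_sidecar
```

The edge cases in the discussion are exactly about which default is safer: `True` risks reorging onto data nobody can verify, `False` risks a client rejecting a valid chain after a long offline period.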
So there are some trade-offs, I think, still to discuss. Happy to discuss a bit more here if people have questions about the state of the conversation, but it also seems like we're pretty active on the thread. Anyone have any other thoughts or comments on this? Okay, so yeah, we can just continue that, I think. And then, Terence, we had your issue from last time as well, 3125. It seems like this is just ready to merge; is that correct? Yep, ready to merge. Nice. Yeah, I think I'm good, other than: if you can just change the header comment to say what this does now, just for posterity, because it's a bit confusing if you read it. Yeah, yeah. Okay. And then the last one, I think, is another one that Brian asked for, 3113. So, oh wait, is this the exact same thing? No, no, this is not the same thing as the previous one. And I'm sorry, I don't remember what we discussed about this last time. Sean, or anyone who's been on this, can you give us a TL;DR? Oh, and Riko as well? Yeah, both are here. Yeah, so the issue is about how we want to handle cases where we need to make a blocks-and-blobs-by-root request for a missing parent block. When we do this, we don't know at what slot the parent is, which means we could be making a query for something from before the 4844 epoch, for example, and then we don't know whether to make a request for a block-and-blobs or just a block, based on the current spec. So dapplion is suggesting this might be a reason to uncouple the block and blob requests, because then you just have an optional sidecar in that scenario. And the other solution we were considering was making the response to the blocks-and-blobs-by-root request an enum, like a union type, that's either a block or a block-and-blobs. So yeah, that's the TL;DR; I'm not sure where we're at with that. For us at Lighthouse, we've just kept the coupled request and not bothered to resolve this edge case for now. But yeah.
Right, so from Chris's side, I don't think it's that hard to resolve this case, because you can just call blocks-and-blobs-by-root and check the error code. If the error code states something like resource-unavailable, you can try blocks-by-root after, just as a fallback. I mean, it's pretty ugly, but I think it's okay to work around. Yeah, this is also something that will fade into oblivion once we're finalized and firmly past this range, right? Well, it depends: I think there is an error code specific to this situation, but at some point you will not get into this situation anymore. I don't have the details of the error code, but if you can exactly catch this situation by error code and just retry in this very edge case, which will go away, it's okay to me. I mean, one thing: when you refer to a union, you're referring to an SSZ union? Yeah, right. I would want to avoid something kind of dirty in the spec for what ends up being the handling of the transition cases here, if possible. Yeah, so an error code would also be fine. I think we can resolve this in a few different ways; it's more about just coming to consensus on a solution. Can we move that to the thread, or is that something that people want to discuss more now? Yeah, I think so, okay. And dapplion can jump in there. Okay, and we can come back to it on next week's call if it's still being discussed. And there was one more CL spec PR that I had missed before, but let me point it out: 3145, which updates MAX_BLOBS_PER_BLOCK to four, which matches what's on the EL side. Any comments or thoughts on that? I guess the question is: should we target this for devnet 3, or should we follow up after devnet 3? It's just a simple constant change, right? So why not include it? Yeah, I don't have a preference. Yeah, that's fine with me.
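Going back to the by-root edge case, the error-code fallback proposed above is simple to express. This is a sketch under assumed names: the helper methods and the error-code constant are hypothetical stand-ins for a client's networking layer, not spec identifiers.

```python
# Sketch of the "check the error code, then fall back" workaround for a
# missing parent whose slot (and therefore fork) is unknown. All names
# here are illustrative, not from the spec.

RESOURCE_UNAVAILABLE = 3  # illustrative error code
SUCCESS = 0

def fetch_parent(peer, root):
    # Try the coupled request first: block plus blobs sidecar.
    status, payload = peer.block_and_blobs_by_root(root)   # hypothetical API
    if status == RESOURCE_UNAVAILABLE:
        # The parent likely pre-dates the 4844 fork, so no sidecar exists;
        # retry with a plain blocks-by-root request.
        status, payload = peer.blocks_by_root(root)        # hypothetical API
    return status, payload
```

Once the fork epoch is finalized and clients are firmly past the transition, the fallback branch stops triggering, which is why the call leans toward this ugly-but-temporary workaround over changing the response type.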
People are probably targeting master or whatever on the EIP, right, for configuration values? Yes, they are. Yeah, so we should align it. Yeah, okay. And I'll make sure to add it to the HackMD just in case it doesn't get merged today, so we can at least know that it's there. Okay, anything else on the specs themselves? One thing is that Ramana, I think, merged the PR that makes FIELD_ELEMENTS_PER_BLOB configurable, so that we can have minimal presets also on c-kzg, since that was causing problems for some testing situations. I'm not sure if it has been used by the clients or not; I guess the bindings also need to be updated, but this is something that is client-relevant and happened this week. Yeah, we were waiting for this to be merged, and we are ready for the bindings to use it. Can you link the PR? Yeah, I'll find it and link it. Anything else on the specs? Okay, the next topic: devnet 3. I'm curious where the client teams are at and how we're feeling about getting this up. I know in the last week we were talking about potentially getting a single EL-CL combo; I'm not sure we quite got there. Does anyone want to give a quick update from their client side? Yeah, I can give a quick update. So we're passing the 4844 spec tests as of last week, so thank you, Shaowei, and all the people working on the spec tests. I'm working on sync; that's close to done. One thing I'd like to finish before joining devnet 3 is some sort of local interop test, and I'm targeting Roberto's branch for that, so thank you for that as well. I haven't tried it, but I believe there is a time-based, or slot-based, fork now, so that should be compatible with what we're doing with Capella as well. One thing I do need: last time I checked, the Engine API for 4844 was still using V2, so I do need those methods to be V3 to try it, and I'm wondering if there's a status for that. Oh, Roberto just said that he added it. So yeah, sounds good. Thank you.
I will try it today. That comment was actually with respect to time-based forks, but V3 APIs were also added last night; thank you, Mofi, for sending a PR for that, and that's now merged. Okay, sounds good. So no more blockers here on my end; I will try local interop today, and I will give you guys an update. Yeah. Also, share your configuration for the CL side, please; it would be interesting to test with you, too. Okay, sounds good, I will prepare an interop doc for this. Nice. So, for Lighthouse, we're in a similar boat, where we're now just trying to test locally against the latest Geth updates; we can test against Nethermind, too. Last week, I'd say the major outstanding work was in sync, but we've made a lot of progress there, so now we have an implementation, but it's untested. After we get Lighthouse execution-layer interop working, we'll probably start trying to get a local Lighthouse network working and see what sync looks like, and then we'll hopefully try to work with Prysm or Lodestar to see if we can get sync working there. That's it for us. Nice. We started, sorry, apologies; we started working on it more seriously only this week. So I'm now working on the SSZ serialization, and more people are joining me on it this week, so let's see what the updates will be next week. Sweet. Yeah, I'd just like to remind you that the RLP side has some tricks there, too; you will need it for the transaction hash, and it's quite tricky. For Nethermind: we tried to synchronize with Geth, and at least it looks fine, and we need the CL side to make a network. We will try to run a test network in the next couple of days, and hopefully we will synchronize soon. And we're working on benchmarks for the precompile, too. Any other client teams? Hey, this is Andrew from EthereumJS. As I noted last week, we're joining late, so we're still behind.
Over the last week, I've honestly kind of given up trying to get EthereumJS to cooperate with Prysm; not sure why it's not working on the interop version, but I'm not that experienced with operating a CL client. So I've been working with Lodestar, just because I'm more familiar with that one, and we have got it up and syncing: we can sync past the starting block, using the current block-number-based hard-fork switches. So we've got that working, and I'm working through basically finding all the bugs that I wrote in my initial implementation of 4844. The point I'm still working through is getting the blobs to actually be transmitted to Lodestar so it can validate them; we haven't actually successfully transmitted a blob from EL to CL yet. But that's currently where I'm at, so slow but steady progress on that. We are also hoping to implement the related timestamp-based hard-fork management within our client over the next week or two; I'm working with Gajinder, who does work with us and also with Lodestar. So hopefully we'll have some of those other building blocks in place for when we're ready to join the devnet, hopefully by the end of the year, but we'll see how much progress we make. Nice, thank you. I continue chipping away at the Erigon client; it still has a bit of a ways to go. I didn't have a lot of time last week to spend on it, but it's coming along. Any other ones? I think we covered most of the teams that had said they're going to be part of devnet 3. Okay. I guess then, for the next week: it seems like a lot of the clients are just trying to get things up and running and fix issues on their own. Is there a client pair that we think might be more ready to start on the devnet, so that others can try to pair with them when their implementations are done?
I mean, after we do some local testing, it might work, so maybe Lighthouse. I think it sounds like probably the same for Prysm, too. Okay. So let's try to get Lighthouse or Prysm up and running with, I guess, Geth potentially as an EL. And yeah, if we can get those two, that'll be a good start. Sweet. Okay, then the next thing I wanted to cover real quick: last week, we covered Martin's benchmarks for Geth, and basically it seemed like the precompiles were maybe a bit underpriced. I know there's been some work done in the past week looking at that in more detail, and potentially the benchmarks were a little bit pessimistic. So I was curious to hear from people who've looked a bit more into the benchmarks (I don't know if Kev is on the call) about what your thinking is there, and then how we should approach doing this generally, maybe across more clients, to make sure that we get the right pricing for the precompile. Hello. Yeah, so I was looking into switching out go-kzg for a more native library, and it reduced the allocations by around 80%. So it seems like go-kzg might not be as optimal. Even after switching it out, I was still getting some fluctuations, but (I don't know if Martin's on the call) I think this was from the GC, so if you test with c-kzg, for example, you'll probably get more consistent results. And I think that was the main problem. But it's on the same order, right? In terms of timing, it'd be, call it, 50 instead of 67 or something, but it's not changing the order of magnitude, right? Yeah, it's not going to immediately do a 2x, but it might be the difference between what Martin was saying with 67k gas and 50k gas. Right, okay. I just ran it on my computer and did an ECRECOVER comparison; I think there's one more optimization to add. ECRECOVER was at 42.2 Mgas per second, and the precompile I got at around 17.9. But there's an optimization in Konroc that needs to be applied.
So I think it can get closer, but I need to just re-benchmark it. And then the fail case: there was an issue there, and actually the fail case should cost the same as the success case. Right, I think this was because go-kzg was basically doing all these allocations and the GC kept kicking in. Right now, when I'm only testing against the fail cases, they're roughly the same. Sometimes the GC kicks in and then it goes to like 15 Mgas per second, but I don't know how to solve that, because you can't control when the GC kicks in. Okay, but we're probably more in the... even 100k is probably very pessimistic, and 60k would be if we end up going with a not fully optimized go-kzg, but that 50 to 60k range is probably very realistic? I think with go-kzg it's more towards at least 60; the allocations it's doing are quite a lot. We haven't tested with c-kzg through Go; I'm using the Konroc bindings instead, which is where I'm getting closer to 50, like 50 to 60. There's some low-hanging fruit to optimize go-kzg. Yeah, I think, because all we're benchmarking is the precompile, which is pretty simple (it's not anything to do with the aggregation), if there's low-hanging fruit, it's not going to be on the Go side, it's going to be on the BLS side, because you're just deserializing points and scalars and then doing a pairing. So yeah, once this last optimization goes through, I'd like to benchmark it again and see if it goes to 20, which would be closer to the 50 that we talked about. So then, Tim, you wanted to consider how we need to play this in relation to other clients and languages as well. Are other clients and languages utilizing, or going to be utilizing, the native c-kzg? Because in that case, I think a lot of this can and should translate, but if they're not, then maybe more benchmarks should be done.
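The numbers being traded above (67k vs 50k gas, 42.2 vs 17.9 Mgas/s) all come from the same arithmetic: a precompile priced at some gas cost that takes some wall-clock time per call achieves a throughput of gas divided by time, and pricing it "fairly" means solving that equation for gas at a target throughput. The specific inputs below are illustrative, not measured values.

```python
# Back-of-the-envelope arithmetic behind the Mgas-per-second figures above.

def throughput_mgas_per_s(gas, seconds):
    """Throughput achieved by a call priced at `gas` taking `seconds`."""
    return gas / seconds / 1e6

def fair_gas_price(seconds, target_mgas_per_s):
    """Gas price that would make the call hit the target throughput."""
    return seconds * target_mgas_per_s * 1e6

# e.g. if one point-evaluation verification takes ~1.2 ms and we want it
# to keep pace with a ~42 Mgas/s reference (roughly the ECRECOVER figure
# quoted above), it should be priced at ~50k gas.
price = fair_gas_price(1.2e-3, 42.0)
```

This is also why a faster library directly translates into a lower defensible gas price: halving the per-call time halves the gas needed to hold the same Mgas/s throughput.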
Yeah, but I guess my question is what happens if there's a discrepancy between the Geth code and the other clients? In terms of the ECRECOVER comparison? Yeah, I think it's worth at least knowing what it is, right? Because either, A, if there's a large gap in one client, that client can probably try to improve their implementation; or, if Geth is significantly quicker than all the other clients for some weird reason, then it might make sense to use something more conservative in terms of pricing. So yeah, this is why knowing that they're all within the same ballpark would be useful. But if they all use basically the same library... I guess the thing that's unclear to me is what overhead other clients have from the bindings to their specific language, and how big that is relative to the overall execution time. Yeah, I feel like we know generally what the cost of a pairing can and should be, and if overheads are more than 2x that, then I think that's a sign for optimization rather than for changing the price. But nonetheless, it'd be good to know, so that the optimization is going to occur regardless. Yeah. And I think, Nethermind, last week you were kind of the other team that was ready to look into this; is that right, Alexey? Yeah, we're going to. Okay, nice. And we have Yasek on the call, I hope I'm saying your name correctly, who can probably help look into this. So, Yasek, I assume you're not in the KZG chat on Telegram? I'm not, no. Okay, but yeah, maybe I'll add you to that; if you want to just send me your Telegram handle, I'll add you. And then it probably makes sense to just get started on Nethermind and see if the numbers roughly line up with what we saw with Geth. Sure, thank you. Okay. Anything else on testing and benchmarking?
If not, the last quick thing I wanted to cover is when we want to have these calls in the next few weeks. I think it makes sense for us to have one next week, and then we're sort of moving into the holidays. Do people want next week to be our last call this year, or do we want to do one more after that? How do people feel about that? Roberto is around for both weeks. Okay. So we'll do next week. I was going to say I'll be around for both, too. Okay, so let's do that then: let's do the 13th and the 20th. Then we can take at least the 27th off and decide if we want to do the 3rd as well; if some people are around, then we can do that. Yeah, the next two are okay. Okay, awesome. So let's do the next two, take the 27th off, and I'll be back on the 3rd, so if people show up, we can have a call then, and go from there. I have one more quick point: how to handle, if people have given it consideration, the fork identifier with time-based forks, essentially an EIP-2124 extension or modification. Looking at this naively: if we no longer do forks by block number but only by timestamp, a timestamp is strictly much larger than our latest block number, and thus I think you can layer an extension on here where you use the uint64 FORK_NEXT as a timestamp instead of a block number. But I think we would need to, one, check that; that's my very cursory look at it, but maybe somebody has other ideas. And two, I think we just need to agree; that's probably something that we want to have agreed upon certainly by the end of January. It's minor, but it would be annoying if it became a blocker. Yeah, I agree, and generally we're going to need this for Shanghai regardless. Right, yeah. I can knock on the doors of a couple of people who may be co-authors of this and see if they have quick ideas. Yeah.
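The EIP-2124 extension floated above relies on one observation: any realistic fork timestamp (seconds since 1970) is strictly larger than any block number, so timestamp-scheduled forks can simply be appended after the block-scheduled ones in the FORK_HASH checksum, and FORK_NEXT can carry either kind of value without a format change. The CRC32-over-genesis-plus-fork-values structure below follows EIP-2124; treating the appended values as timestamps is the proposed extension under discussion, not a finalized spec, and the input values are illustrative.

```python
import zlib

# Sketch of the proposed EIP-2124 extension: FORK_HASH is the CRC32 of
# the genesis hash followed by each past fork value as a big-endian
# uint64, with timestamp forks appended after block-number forks.

def fork_hash(genesis_hash, fork_blocks, fork_timestamps):
    crc = zlib.crc32(genesis_hash)
    # Block-number forks first, then timestamp forks (each list assumed
    # sorted and deduplicated). Timestamps always sort after block
    # numbers by magnitude, so the ordering is unambiguous.
    for value in list(fork_blocks) + list(fork_timestamps):
        crc = zlib.crc32(value.to_bytes(8, "big"), crc)
    return crc & 0xFFFFFFFF

h = fork_hash(b"\x00" * 32,
              fork_blocks=[1_150_000],          # e.g. a block-scheduled fork
              fork_timestamps=[1_681_338_455])  # e.g. a time-scheduled fork
```

The remaining question flagged on the call is exactly the FORK_NEXT semantics: a peer advertising a future timestamp in a field that used to hold block numbers must not be misjudged as incompatible by older validation logic.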
And I know on AllCoreDevs a few weeks ago, I think the teams were saying they want to get some prototype implementation that they're kind of happy with and then write a new EIP to specify it. But it'd be good to just follow up on where that's at. Yeah, make sure we do have something in the next month or so. Anything else? Well, thanks, everyone. See you all next week. Thank you. Bye, everyone. Bye. Thanks.