Okay, so let's get this started. This is now the sixth of these 4844 calls; I just posted the agenda in the chat. As always, a bunch of spec updates. Then we were supposed to be launching devnet 3 tomorrow, so it makes sense to chat about where things are at there. And then we have two other updates, or at least one update, on some benchmarks for the precompile, and we can also talk about the large block spam test and see how things are going there. And I don't know if Xiaowei is on the call, but right before she posted... oh yes, you're here, Xiaowei. Do you want to take a minute to talk about the new test vectors on the CL? Yes, thanks for sharing it. So Terence found an issue in the test vectors that we released last week, so I cut a new release today. And I hope that if any of you have tested them you let me know, and if there are any new issues please ping me, thank you. Awesome, thanks. Okay, next up, we had two spec issues open on the CL side that we didn't have much progress on last time, and I just want to follow up on them. The first was Terence, one of your issues, about adding blob availability checks for ancestors. What is the status there? Yeah, so hey everyone, I have a corresponding PR to the issue. The PR was approved by Danny, but there was more feedback that we do not want to remove is_data_available, because it's nice that when you go to danksharding you still have that notion. So I made a minor update to that and the PR is ready again. So yes, feel free to take a look, and further comments are welcome. Okay, and that's PR 3125, right? Yeah. Okay, nice. And I just linked it in the issue, so anyone who was following just the issue and wasn't aware can go there. Great. And then George, you had some updates on the crypto side of things. So 3093 was merged and now there's another PR, 3138. Do you want to give a quick update on those two? Yeah, sure.
So okay, for a quick update: basically we settled on how we want to handle empty blobs, empty sidecars basically, and now the correct behavior is settled in the spec. So client devs can now pass empty sidecars to the crypto library and it will handle them gracefully. That's done. And the next thing on the cryptography side is another issue raised by client devs: when we use the precompile and we are given scalar field elements as bytes, whether we validate them or not. Before, we were not validating them and were just using them as they were given to us. But it seems the more correct approach is to actually validate them and error out if they're not canonical. So this new PR, 3138, basically introduces the validity condition that should have been there from the beginning. That's also isolated on the cryptography side. So that's one nice thing about the API we've designed: all these things are nicely abstracted away on the cryptography side, and it doesn't add much clutter to the rest of the client dev workflow. And another thing that's happening... I'm not sure, Kev, are you here? Let me scan the... Yeah, I'm here. Right, okay. Do you want to give an update on the precompile gas cost, or where we are right now? Because I'm also not sure where we are. Yeah, and Martin is here as well; he ran the benchmarks originally. We can maybe do that after; we already had an agenda item for that. Yeah, so we can come back to that. Sounds good. One question from Inphi in the chat: does 3138 resolve the canonical encoding issue you highlighted a while back? That's a question for Kev. Yeah, yeah, it closes the issue. Okay, so I guess that's it from the cryptography side. Sweet. Any other updates or concerns about the spec? If not, we can cover the benchmarking of the precompile, but anything else at the design level or related to the spec? We had a concern on Lighthouse's side.
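A minimal sketch of the validity condition being discussed, assuming it amounts to a canonicality check: a 32-byte scalar passed to the point evaluation precompile should decode to an integer strictly below the BLS12-381 scalar field modulus. The function name here is illustrative, not the library's actual API.

```python
# BLS12-381 scalar field modulus, as used by KZG commitments.
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def is_canonical_field_element(data: bytes) -> bool:
    """Illustrative check: 32 bytes whose big-endian value is < BLS_MODULUS."""
    if len(data) != 32:
        return False
    return int.from_bytes(data, "big") < BLS_MODULUS

# The zero scalar is canonical; the modulus itself (and anything above) is not.
assert is_canonical_field_element((0).to_bytes(32, "big"))
assert not is_canonical_field_element(BLS_MODULUS.to_bytes(32, "big"))
```

Under a rule like this, the library errors out on non-canonical inputs instead of silently reducing them modulo the field.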
So for the interop repo, there's a specific preset that has been generated where the slots per epoch is set to three. We were wondering why that is, because it's a bit annoying to handle in Lighthouse: we have to define another preset which is only valid for the purpose of the interop spec testing. So I was wondering, if it's not that big of an issue, whether we can just change it to the mainnet spec. I'm not really sure why it exists. Yeah, does anyone know why it was set to three? Are you talking about the minimal spec for 4844? No, in the EIP-4844 interop repo, which we assume is what's going to be used for the devnet eventually, there's a different preset defined which has slots per epoch set to three, which is not mainnet and it's not minimal either. So it's a bit hard to handle this specific case only for the testnet. I was just wondering why it exists. Okay... Yeah, so Mofi answered; he said it was easier to debug, and I guess that makes sense: with three slots per epoch it's just quicker. Does anyone have an issue moving it to 32? Okay, it seems like we want to move it to mainnet. Mofi, is that something you can do? Okay, awesome. So Mofi will make that change. Anything else on the spec? I see that Lion isn't here, correct me if I'm wrong, but I just want to highlight one issue that he opened where he raises a couple of edge cases that sound reasonable to discuss. Yeah, so that's 3113 that you just posted? Yes, exactly. So if anyone has some room to have a look at that and maybe start discussing in there, it would be nice. Has anyone already reviewed this? We're looking at it in Lighthouse, and these are the types of edge cases we're still trying to figure out how to handle. So we're aware of it and we'll comment on it soon, but we don't have ideas yet. Yeah, same with Prysm; we are actually studying this issue.
It's a tricky one, but yeah, we will review very soon. Does it make sense to chat about this on the CL call in two days, or should we come back to it next week on this call? I think if we have more traction on it, it'd be worth it on the CL call, but I'm not sure. So I'd say no for now and wait for it next week. Okay, sounds good. Yeah, so if people want to have a look at that and share some thoughts on it. Yeah, we'll keep an eye on it. Any other issues or spec-related concerns? Okay, I guess the next one is kind of also part of the spec. Martin, you ran an initial set of benchmarks for the precompile on the Geth branch. Do you want to take a minute to walk through what you did? Yes, sure. So what I did is basically a rerun of what's been done a few times before when we added new precompiles, Blake2f and BN add and BN mul, a long time ago. That was at the time when the implementations were Geth and cpp-ethereum and Parity. So I have scripts that take the raw data and transform it into columns; there are three formats for that, one for Geth, one for cpp-ethereum, one for Parity. Right now I only have data for Geth, which I ran on two different machines. And both of those indicate that the proposed gas cost is not far off, but it would probably be good to bump it by a factor of 1.5, or maybe two. And that's something that doesn't need to be set in stone now, because it's a very simple constant change. That's also what's pretty nice about this precompile: there is only one flat cost. The last time we priced something it was more difficult, because you had a formula where the pricing depends on the complexity and/or the length of the input, and of course that adds another dimension of difficulty. Yeah, so now we've got some preliminary results.
It would be nice to have the same kind of runs done on the other EL clients, Nethermind and Besu. I suppose that Erigon is mostly on par with Geth. Then at least we'd have a sense of whether they're on the same level gas-wise, or whether something is dangerously off. And the methodology used is to compare the new precompile, gas-per-second-wise, with the other precompiles, and the reference one we've used has been ecrecover. And yeah, that's about it; I don't have much more to say. I know Kev you had some thoughts on Discord, do you want to quickly give an update? Yeah, I guess it wasn't entirely intuitive that the failure cases were taking longer, or were more expensive, than the correct cases. I've managed to reproduce it on my computer, so I'm just investigating why that's the case. Yeah, and I kind of assume it's something that can be fixed, but that's my instinct. Yeah. And then I guess for the other EL teams: how easy is it for you all to reproduce this? I assume people might not have had a chance to look at it, but I know that both Nethermind and Besu have done benchmarks on precompiles before that we've done some comparisons on. Okay, so I can reach out, because I don't think there's anyone from Besu on the call. And then I think Alexey, maybe, from Erigon, sorry. Yeah. We're just starting to actually do many things, so almost nothing's done. Great. Okay, so there's not much to benchmark for Erigon yet. And then on the Nethermind side, Alexa, I see you're here. Do you have the bandwidth, or how easy is it for you to benchmark the 4844 precompile relative to the other precompiles in terms of pricing? The last comment I had in the chat is Martin's results for doing this, and there's a script. So we have quite basic tests for that precompile on the .NET side, and we have not actually run any benchmarks yet, from there or from any other place. Okay, got it.
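The gas-per-second methodology Martin describes can be sketched roughly as follows; the numbers are illustrative, not his actual measurements.

```python
def gas_per_second(gas_cost: int, ns_per_call: float) -> float:
    """Throughput a precompile achieves at a given gas price and runtime."""
    return gas_cost / (ns_per_call * 1e-9)

def implied_gas_cost(ns_per_call: float, target_gas_per_second: float) -> int:
    """Flat gas cost that would put a precompile at the target throughput."""
    return round(target_gas_per_second * ns_per_call * 1e-9)

# Example: if the reference (ecrecover at 3000 gas) takes 100µs per call,
# it processes 30M gas/s; a new precompile measured at 2ms per call would
# then need roughly 60000 gas to be priced on par.
reference = gas_per_second(3000, 100_000)
assert reference == 30_000_000
assert implied_gas_cost(2_000_000, reference) == 60_000
```

Since the point evaluation precompile has a single flat cost, repricing is just a change to one constant once benchmarks across clients agree.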
Do you think the precompile itself is in a spot where it makes sense for you to benchmark it, or are you still working through it, such that even if you benchmarked it now it wouldn't be a good benchmark because there's still stuff to optimize? Benchmarks are always good, and we can probably compare implementations in the future, right? Because we do have people, maybe not on client teams, who might be able to help with this stuff, so I'll reach out after the call and check with them if they could help with at least the Nethermind one in parallel. That way, if you're all still working on the implementations, you can get the benchmarks set up in parallel. Okay, so I'll follow up on that. And then Bruno has a comment about the pricing being more of an issue for ZK rollups and optimistic rollups. I don't think there's anyone here working on ZK rollups, correct? Okay, I can follow up offline as well and ping some ZK teams to get them to share any thoughts they have on the pricing. Anything else on the precompiles? Okay, then next is the devnet. So devnet 3 was supposed to be launched tomorrow; from skimming the chat, it doesn't quite seem like we're ready. So maybe it'd be good to just get an update from the different client teams about where they're at generally and, yeah, what the next steps are for them. Yeah, thank you. I was going to say, Tim, I agree that we're not nearly ready for Wednesday. Things are coming along quite well though. So let's go to the individual teams. Yeah, I can give a quick update on the Prysm side. So thanks Xiaowei again for fixing the spec tests. We have been running against the spec tests as of this morning. We've got two to three test failures, and those test failures are on our end. So yes, we just want to fix those as soon as possible.
And then after those, we should be ready to begin interop, but no guarantee that it works right out of the gate; I imagine some trial and error. But yeah, that's where we are today. Any other teams after Prysm? So for Lighthouse, we still have things to iron out with sync; I think otherwise we're there with an implementation. So we've been working a lot more towards trying to join the interop repo and making solid progress, but now we sort of need an execution client to test with. And then also, like Pawan mentioned earlier, if we can make the spec use the mainnet slots per epoch of 32, that'd be helpful. And I think part of the reason it was set low was for the tests in that repo to run faster, and I'd like to have us run the tests. So I was wondering, is it possible that we can just start the testnet from a later epoch, so we don't have to wait a ton of epochs to run the tests? Sorry, what was the question? So, if I understand, the suggestion is to use the mainnet 32. Right, so why don't we use the minimal preset, and assume the minimal preset is used, like on existing testnets or something? Well, the issue with the minimal spec right now is that it has a different FIELD_ELEMENTS_PER_BLOB, I think. That value is hard-coded in the c-kzg library, so I don't think the c-kzg library works with the minimal spec at present. But if we update that in the spec, at that point I think we could change the tests in the interop repo to use minimal. Sean, did you say update the hard-coded value in the spec? Yeah, I think the field is, in the minimal preset, FIELD_ELEMENTS_PER_BLOB. It's lower in minimal versus mainnet, and that value is hard-coded to the mainnet value in the c-kzg library. So FIELD_ELEMENTS_PER_BLOB, right? Yeah, I think so. It's like 4096. Yeah.
Yeah, on that I just want to say that I want to talk to Ramana today so that we make it compile-time configurable. He was amenable to it last time we talked, so I think we can now move on with that. Okay, so if that's the case, then whenever that's configurable we can transition the interop repo tests to use the minimal spec, and that'd be reasonable. So I don't know much about these presets, but why not just change the minimal one to use the 4096 value? We could do that, right? Or no, we didn't want to change that, right, Xiaowei? Hi. Hello, so the minimal preset and config are for the spec tests, to provide minimal test vectors at low cost for the client teams to run in CI daily or weekly. We also provide the mainnet test vectors at the same time. But on the spec side, since we are using py_ecc, the Python implementation, and our KZG implementation is actually straight from the spec, it is incredibly slow compared to the C implementation. So for the spec itself, I think a minimal config and minimal preset with a FIELD_ELEMENTS_PER_BLOB of something like four or eight is needed for the spec tests, but for the devnet you're free to use any numbers in the configurations or in the preset, if I understand correctly. Okay, so we don't want to change the minimal preset because it'd be too burdensome on certain testing, but it sounds like there's not another existing preset. Yeah, I think you can define your own devnet-only preset. If I understand correctly, we might have used that in previous short-term devnets. Sean, is it an issue just to define a devnet-specific preset with 4096? So we actually sort of have that right now, because we were trying to get the tests passing. So that's not too big an issue. Yeah, I guess, if we update the slots per epoch at least to match the minimal spec, so if we just update that to eight, we'll just leave the field elements high until it's configurable.
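To make the preset discussion concrete, here is an illustrative comparison. The minimal value of 4 is an assumption based on the "four or eight" mentioned on the call, and CKZG_HARDCODED is a stand-in for the constant baked into the c-kzg library, not an actual identifier from it.

```python
MAINNET_FIELD_ELEMENTS_PER_BLOB = 4096
MINIMAL_FIELD_ELEMENTS_PER_BLOB = 4   # assumed; the call says "four or eight"

# c-kzg currently bakes in the mainnet value at build time.
CKZG_HARDCODED = MAINNET_FIELD_ELEMENTS_PER_BLOB

def preset_usable_with_ckzg(field_elements_per_blob: int) -> bool:
    """A preset only works with c-kzg if it matches the built-in constant."""
    return field_elements_per_blob == CKZG_HARDCODED

assert preset_usable_with_ckzg(MAINNET_FIELD_ELEMENTS_PER_BLOB)
assert not preset_usable_with_ckzg(MINIMAL_FIELD_ELEMENTS_PER_BLOB)
```

Once the constant is compile-time configurable, the interop repo tests could move to the minimal preset without a custom devnet-only one.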
That seems pretty reasonable. So, the devnet has the minimal configuration, do I understand that correctly? So what's there now is already a sort of custom configuration: it's mainnet, except slots per epoch is three. But mainnet is 4096, right, so you wouldn't have a problem with c-kzg either. The question would be, if we just wanted to switch to a preset that exists, it would be minimal, and that one wouldn't work. Yeah. And why would it be minimal? Maybe you already said that, but I didn't catch it. It's to have the slots per epoch be lowered so that the tests run reasonably quickly, because we have to fork through multiple epochs. Okay, got it. Okay. And Sean, if I understand, are you able to run the interop now with the current slots per epoch of three? No, we're able to run up until the withdrawals fork, and at that point we don't really have anything to test against, so then it breaks. But we have everything up to Bellatrix working. And the issue is the three slots per epoch? No, we have that working. It would be preferable not to have it though, because one of the assumptions we make, for example, is SLOTS_PER_HISTORICAL_ROOT being divisible by SLOTS_PER_EPOCH, which doesn't hold when slots per epoch is three. So we have an action item to change that to 32; let's assume that's going to happen. What else is there to do? If it's changed to 32, then we can just use the mainnet spec, and that's great. And Mofi, you did most of the interop config stuff as I understand it; other than the tests maybe running slower, is there a concern with that? We may have lost Mofi. Mofi says in the chat it should be fine; if the tests complain we can add more machines. Okay, so tentatively let's set it to 32. I'll try to make that change today. Awesome. Thank you, Roberto. Yeah, that's it from Lighthouse; we're sort of just trudging through interop and then trying to finish up sync.
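The Lighthouse assumption mentioned above can be checked directly: SLOTS_PER_HISTORICAL_ROOT must be divisible by SLOTS_PER_EPOCH, which holds for the mainnet and minimal presets but not for the interop repo's custom slots-per-epoch of three.

```python
def assumption_holds(slots_per_historical_root: int, slots_per_epoch: int) -> bool:
    """True iff an integral number of epochs fits in the historical-root window."""
    return slots_per_historical_root % slots_per_epoch == 0

assert assumption_holds(8192, 32)      # mainnet preset
assert assumption_holds(64, 8)         # minimal preset
assert not assumption_holds(8192, 3)   # interop preset before the change to 32
```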
Awesome. Kind of the same issues for Nethermind. In Nethermind development we want to have someone with timestamp-based forks for Shanghai, for withdrawals, and for EIP-4844 to test with, so we are a bit blocked by that. Just to form the structure of the changes in our node, we made a pull request, but we still don't know what genesis will be relevant for the next devnet with timestamp-based forks, and we are continuing our testing with the available tests. We hope to join the network right after, or a few days after, some other execution client and consensus client, like Prysm and Geth, provide the community with timestamp-based forks. Those are our stages. All right, I'm not sure we have anyone working on switching to timestamp-based forks right now, at least in Geth or Prysm. Terence, are you doing timestamp-based forks in Prysm? So from the CL's perspective, it doesn't really matter; the changes will mostly come from the EL side. What we have been doing is testing local interop, and there's a local interop with withdrawals, and there's also a withdrawals testnet, and I believe that is based on the timestamp fork mechanism. And we have been using the lightclient branch; lightclient has a branch for that, so let me post the branch here. Maybe you guys can look at how that branch is done, because that is the branch we're using for testing withdrawals based on timestamps. So does it include EIP-4844 and withdrawals both? No, it doesn't include 4844, so someone has to basically build on top of that. In Nethermind's case, we did not plan to have block-number-based forks for these two cases, the sharding forks; we'll take some other teams' implementations with timestamp-based forks.
If I understand right, timestamp-based forks are the preferred fork mechanism going forward, so it makes sense to switch to that if it's not too much trouble. Yeah, we're going to have to do that anyways, because withdrawals have to be timestamp-based. And the 4844 fork, even if it went live effectively at the same time, still activates after withdrawals, and after the merge, so there's no world where we do withdrawals with timestamps and then come back to block numbers for 4844. So let's queue that up as an action item: timestamp-based forks in Geth within interop. Do we have a volunteer for that? If not, I'll try to get to it. I have a commit somewhere; it's just one commit that you need to pull in. It should be really easy. Oh, fantastic. Nice. So people are okay starting with a merged network as of now? Yeah, hey Lion, so we just merged the feature last week, so yes, as of today we are capable. Can we update the interop repo for EIP-4844 to start at Capella, or Bellatrix, at the same time? Yeah, we can do that. Awesome, thank you guys. So yeah, we want the interop repo to start post-merge, basically. Yes. Okay, okay. And I guess I can add those two changes to the devnet 3 specs. So to clarify, we're using timestamps for the fork, and then would we want devnet 3 to also start at Capella, basically? I'm not opposed. Also not opposed. Okay. I'm just going to say, I'm also not opposed. Anyone? Are there any drawbacks to doing that, other than perhaps not exercising more of the fork logic? I guess it doesn't matter really if we're going to go out post-withdrawals anyway. Right, we could start at Bellatrix as well, right: you could start at the merge and go through the Capella fork. Does anyone have a preference either way?
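A minimal sketch of the difference between the two activation mechanisms being discussed; the function names and the example fork time are illustrative, not Geth's actual fork-config API.

```python
def fork_active_by_block(fork_block: int, block_number: int) -> bool:
    """Pre-merge style: a fork activates at a configured block number."""
    return block_number >= fork_block

def fork_active_by_timestamp(fork_time: int, block_timestamp: int) -> bool:
    """Post-merge style: a fork activates at a configured block timestamp."""
    return block_timestamp >= fork_time

# With timestamp scheduling, a block at exactly the fork time activates
# the fork, and one second earlier does not.
assert fork_active_by_timestamp(1_700_000_000, 1_700_000_000)
assert not fork_active_by_timestamp(1_700_000_000, 1_699_999_999)
```

The point made on the call is that since withdrawals will ship timestamp-scheduled, scheduling 4844 by block number afterwards would mean maintaining both mechanisms for no benefit.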
I think if we're going to do it, rather than a phase 0 start, a Capella start would be better, just so we don't have to deal with the terminal total difficulty logic and whatnot. I mean, we can also do Bellatrix at epoch zero with TTD zero, and then Capella and EIP-4844 at the same epoch. That's a good question. Yeah, I don't know, maybe this requires a bit more thought. But for sure we want to do timestamps, because that's what we're doing everywhere else anyways. And then whether we start at Bellatrix or Capella is TBD, but clearly not starting from phase 0. Does that make sense? Yep. Yeah. Okay. I'll add those two notes to the devnet 3 spec after the call. Okay. Anything else on the devnet? We sort of went through some client updates and technical issues, but does any other client want to share updates, or any other issue? Yeah, this is Andrew from EthereumJS. Just quickly, I think I noted in the last call that we're going to be very, very late to devnet 3, if we make it at all at this point. I have a mostly working local fork of the interop repo now, so I can at least get our client up and running. I've been able to get it partially working with Lodestar, but I'm having a lot of trouble getting it to interact with Prysm correctly for some reason; I'm going to start working on that again today. But I do have, I think, most of the spec implemented at this point on the EL side, so we have the new engine APIs, and we have integrated the c-kzg library, and it at least works in local testing, though not in interop testing yet. That's my goal for this week: to try to get interop to actually cross the sharding fork. So hopefully at some point, maybe in the next week or two, we'll be able to actually start really running the tests in the interop repo, and then possibly join devnet 3 at some point before Christmas, or whatever the current devnet is at that point. Very nice, yes. What's your...
Yeah, we can add you to the table in the devnet 3 doc, and you can track the progress there. Yeah, I think realistically, given all the issues that have been popping up, even next week may be optimistic, but I think we should try to shoot for the end of next week as a new deadline. Okay. Quick question: so Prysm and Geth are able to run the full EIP-4844 logic today? We have both mostly in, I believe, but it's not against the latest spec, so I wouldn't say we're fully functional as of today based on devnet 3; there's still more work. I've got a Prysm repo in interop; it should be abiding mostly by the new spec. I think we're still waiting for Kev to submit the zero-blobs tweak, but that's under review, and I imagine that's going to happen probably within a few days. So my point is, during the early Altair days, we essentially started a devnet really quickly on whatever was the latest spec, and the devnet was only Teku running with four nodes, but it was incredibly productive to have that testnet as a target for testing this logic quickly while developing. If it is available, it would be extremely helpful if you guys could quickly spin up, as long as it's somewhat working, a tiny devnet that we developers can use; that would be very appreciated. And that could be done in parallel to devnet 3 if it's ready. And I guess for Prysm, the biggest concern is that it's not rebased on Capella, as per the chat, so there needs to be some work done on the Prysm branch as well. So let's see: what's the quickest EL/CL combo we think we can get rebased on Capella, using timestamp forks, and running, so other teams can try to interop with it? Yeah, so it sounds like the issue right now is a CL client that is rebased on Capella. Yeah.
So we have this, and we might be able to get it working if we have a local testnet where we won't miss blocks, because for us the big missing piece is sync. We have parts of sync implemented, but it could get messy if we're missing blocks. So if you have a testnet or devnet where there are only Lighthouse validators, perhaps at the start, then yeah, maybe a couple of Lighthouse nodes with just Lighthouse validators might work. Okay, local to each other. Yeah. Could we use Kurtosis for this one? Because as long as it is just a beacon node, a validator, and an EL, then you can set up a local testnet quite easily. Yeah, I mean, I could build a document with what we've got. Yeah, I think that'd work. I can give you the YAML file to run that then. Cool. So to make sure I understand: you cannot sync, or you do not expose blocks for others to sync? We think we'll panic if we get a blocks-by-range request right now for signed blocks and blobs. And then we won't sync, because we have gossip implemented, so we'll just import blocks as we see them, but if we miss a block and have to request a block, I don't think that'll work for us. So yeah, got it. And blocks by root? Yeah, I don't think we serve that either, no, because we're trying to figure out how to handle the edge cases you pointed out, like what you request pre- and post-fork, as well as before and after the prune boundary. So yeah, blocks by root I don't think we serve either. Okay, so I guess it seems like either Lighthouse or Prysm are probably the first two that are going to be ready on the CL side. Let's try to get one of them and Geth running on the devnet, on the interop repo, and hopefully it doesn't lead to Lighthouse missing blocks and having to sync. Roberto, you were saying before that the end of next week would be an aggressive but nice target for the devnet. Yeah.
What do people think? Basically, as I said before on the core devs call, next week is realistic; that's about ten days. Yeah, it's really up to the CL devs at this point; I think Geth is going to be ready. I think I'll be able to implement the action items that have come up, around making the epoch changes and the timestamp-based fork, but the CL stuff's a little outside my control; I don't have a good handle on that. Got it. The Lodestar stuff should be ready, logic-wise, and it's released, but we have never attempted to run the full thing. So Geth is ready today? Geth will be ready soon, I think, yeah. A few more things to be done on the execution API, and I'm working on that right now. But other than that, and the zero-blobs stuff, for example, things to be integrated, all of that's ready; it would just need to be pulled together. Got it. So on this interop repo, the CI passes now for Prysm and Geth. If that is the case, what is the difference between those tests and actually what we want to do in the devnet? We've done the execution API work for withdrawals, so those tests may no longer pass; I'm not sure. Mofi might be able to better comment on that. So I think that's where we start getting interoperability issues between the interop repo, Prysm, and where Geth is right now. Yeah, and Mofi is saying that on the CL side it's passing now with an older version of the Geth repo. We're not testing the entire set of expectations of the Capella rebase; that will break very soon, and we'll need the updated CL at that point. Okay, so let's try that then. Anyways, we'll have the call next week, a couple of days before, to review where we're at. I agree. Even if it's not all the clients, even if we get three or four out of six running, the other ones can come after, and having the whole set before the holidays would be great, so we at least know this is working on a devnet. Yeah, I think it's possible.
I'll push it along as best I can. Sounds good. Okay, the last big thing that I wanted to make sure we cover is this big block test. I know George is still here on the call. Yes. Yeah, do you want to give us an update there? Yeah, quick context: last week's progress was that we ran it with 128-kilobyte transactions, and it looked like nothing happened. So I wanted to do it with 256, 512, and 1024. I've connected the Flashbots builder, but right now there's a cryptic RPC bug that I'm waiting to get support on. So that's the blocker, but the code is written, and once I'm unblocked on the Flashbots side, if you want to actually build it, it should be good; it might even be today. So I'll let people know, I think, in the chat. Awesome. And Pari, I think you were saying that you've added support, or are going to add support, for mev-boost on all of the EF DevOps validators? I've already done that. So now something like 30-ish percent of the validators should be running mev-boost, and we are getting a couple of consecutive proposed blocks that are all coming from mev-boost. Nice, very cool. Any questions? Maybe let's coordinate after this call on giving me the RPC for the EF builder and the address, so that we don't have a builder; we're just relying on the Flashbots one. Okay. But the relay is plugged into the builder, so we have more hashrate for inclusion of multiple blocks in a row. Yeah, right. Yeah, so we're at about a third now. Yeah, cool. No problem. Then that's even easier. Thank you. Anything else on that? Okay. And then the last thing: Henry, I think, just had to leave, but he started working on a Nimbus prototype, so we should start seeing more on the Nimbus side in the next couple weeks. There's already an initial PR open. Yeah, anything else anyone wanted to cover? Okay, so this is the first time we've ended one of these not over time, so I guess that's a good sign.
Yeah, thanks everyone for joining, and talk to you all on the CL call in a few days. Thanks guys. Thanks everyone. Bye. Bye. Thanks.