This meeting is being recorded. Okay, I think it's a good spot to kick off. Good morning, everyone. This is the 4844 call. I posted the agenda in the chat here: as always, some spec updates, then devnet updates, and then we have some updates on the builder side as well today, so we can cover those.

To kick off, we have this issue from the last call, the issue around old finalized data: what do we want to consider available? There's been some movement on this in the past few days. Danny, I see you're the last person who's commented, I think. Do you want to give a quick update on where things are at?

I think Micah actually commented after me. Right, okay. I think there's general directional agreement on what to do here. First and foremost, don't add the complexity of the unbounded DA requirement. Then it's a matter of: for data past that window that you can't retrieve, do you by default consider it unavailable, which matters if you need to respond to reorgs, or do you by default consider it available? There's a security and UX trade-off here. People generally want to default to assuming it's unavailable, but then if you're past this window and don't make an additional trade-off at that point, you can have some trouble catching up. From a security standpoint, if you haven't been in sync for 20 or 30 days, you really should get a recent piece of information from the network anyway because of the weak subjectivity period, so that's not a terrible trade-off: in most cases you could just sync the network from there, find the head, and be fine. Micah jumps in and says this is exactly what it should be, because of the weak subjectivity security issue.
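The trade-off being debated can be sketched as a small policy function. This is a sketch under assumptions, not the spec: the constant name and value below are illustrative stand-ins for whatever retention window the spec ends up defining.

```python
# Sketch of the "default unavailable" policy: inside the retention window,
# availability must actually be verified against the network; outside it,
# fall back on weak subjectivity and assume the data is available.
# MIN_EPOCHS_FOR_DATA_REQUESTS is a hypothetical name and value.

MIN_EPOCHS_FOR_DATA_REQUESTS = 4096  # assumed retention window, in epochs

def assume_data_available(block_epoch: int, current_epoch: int,
                          peers_served_data: bool) -> bool:
    if current_epoch - block_epoch <= MIN_EPOCHS_FOR_DATA_REQUESTS:
        # Recent data: "default false", only available if peers serve it.
        return peers_served_data
    # Old data: past the window, assume available (checkpoint-sync case).
    return True
```

The point of the sketch is the branch: the security question on the call is entirely about which default the second branch returns.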
So I can circle back on that, make sure there's no final input between now and Thursday, and then try to make a final call on Thursday. Fortunately this is not blocking core development; it's more on the edge case side, and we've already decided to go in the less complex direction of not having the unbounded window. So Thursday seems reasonable, unless there are comments right now, or we could try to hash some stuff out. I don't know if all the relevant parties that have been on this thread are here, but if anybody has additional information or sees the trade-off differently, I'd love to hear it.

Thanks for the update. Anyone here have comments or thoughts? Okay, so let's try and get this resolved by the CL call this week. Yeah, I'll ping in a public channel that we want final comments on this before Thursday. Sweet.

The next one is also an issue from the last call, and there hadn't been another update on it since: the BlocksByRoot edge case. Sean was the last one to comment, two weeks ago. Not sure if there are any updates there or a way to move it forward given it's been kind of still. Actually, I believe Lion opened a PR for this, 3154. Oh, just 13 hours ago. Nice, I missed that. Okay, so I guess we can just review this PR. I don't know if anyone has comments on it already, but given it's pretty new I expect most folks haven't had a chance to look into it yet, so people can look at that async.

And then the last bit on the specs: Hsiao-Wei had some testing updates with regards to the trusted setup. Hsiao-Wei, are you on the call? Oh, yes. So the KZG ceremony will generate the G2 trusted setup with 65 elements.
Previously we used the full 4096 elements for the mainnet preset and 4 elements for the minimal preset, which means it wouldn't match the ceremony's setup. So now we've set it to 65 for the testing trusted setup. I think it won't change the devnet consensus, but it will be helpful for the KZG libraries to use the 65 right now. I think we just need some quick reviews and we can merge it in the spec release this week. Yeah, we discussed this with Dankrad and George and other people on the library side, and that's definitely what they want; there's not really a big trade-off in just keeping it to the size of the ceremony.

I guess if we start the devnet we should use these new values. What's the impact if we start devnet-3 with the old size? Obviously it breaks everything to change the values mid-devnet; is that a correct assumption? Wait, I thought this was just for the spec tests. Yes. Oh, got it, got it. Okay, sorry, my bad. But then for devnets we want to use the production value of 4096. Right, right.

Anything else on the specs? Every call the specs section is getting shorter, which is a very encouraging sign; it used to be like 50 minutes just on this.

Okay, next up then: devnet-3. Over the past couple weeks we've been trying to get it started with a minimal set of clients. I don't think that's happened yet, so I'm curious to hear from the different implementers where things are at, whether there are any blockers, and we can go from there. I don't know if anyone wants to start; otherwise I can call on people.

I can give a bit on the Prysm side. We're passing the spec tests. I've implemented most of the sync stuff, so I'm waiting to test it; I'll probably test with multiple Prysm nodes first to see how that works.
I also posted interop instructions on how to set up Prysm and Geth, but that's just one pair, so I'm trying to do multiple pairs right now. If that goes well, if I can sync with multiple Prysm nodes, then I think on our end we're clear for the devnet. Thanks.

Hi, from Lighthouse. We're in a similar boat to Prysm: we managed to do local interop with multiple Lighthouse nodes and multiple Geth nodes, and we were able to send blob transactions into this local testnet and it works. Right now we're ironing out a few bugs, and we're also testing sync locally. I think we should be in a good place to start a devnet pretty soon; we're just ironing things out. Thank you.

Anyone from the Lodestar side? We've been able to pass the alpha-1 hotfix spec tests. We're able to run a Lodestar-only multi-node local devnet with a fake EL attached that can make blobs. Right now we're working on a Geth interop in CI, but we're running into some issues with that: the latest rebased Geth branch is again breaking for that interop. The last commit that kind of worked for us with the Geth interop was the one ending in 0cb7. It runs interop successfully, but without any blob transactions. When we add blob transactions generated using the c-kzg library, it causes an invalid KZG commitment error in Geth. Seemingly it might be that the KZG libraries are not interoperable right now, but we were able to interop with EthereumJS, with blob transactions submitted between those. Do we have an example blob where we're seeing that difference? I can look into why go-kzg is not accepting it. Yeah, I can ask Gajinder, who's been testing this, to pass the information on to Robert. Thank you.
Where did you get the trusted setup? Are you maybe using different trusted setups? I don't know what the current convention is there, because currently they're all testing setups, right, and I don't know where they came from. They should have all been updated to use the consensus-specs one, I believe. I did hear about interoperability problems as well last week, and it seems that some of the wrappers are not updated. I tested go-kzg last week and it's interoperable with a different library, so I think it's a wrapper problem. But if you can send the blob to me as well, that'd be great; I can check it out. Who maintains the wrappers? Is that still Dan? It's multiple people: for Node.js it's Dan, for Nethermind it's Alexey. Okay, so I guess we can loop them in if we get more suspicious of that.

At Nethermind we use a trusted setup different from the one in c-kzg; if you use the standard one it will fail. But we can sync on that. Nethermind is still not syncing with, I guess, the Prysm network; we have some bugs, but I hope we'll fix everything during the next week. That's the status. Thanks.

Any other updates? Yeah, Andrew. Yeah, so for EthereumJS, as Phil mentioned, I've been working with Gajinder; he's actually been on both of our teams and has been very helpful with getting the Lodestar interop working. At least on our setup, we're able to run EthereumJS and Lodestar together, sync, and send blob transactions. Again, we do have a blocker with the c-kzg library. I mentioned this to Kev: the verify_kzg_proof method in the c-kzg library is not exposed by the wrapper, so I can't fully implement the precompile right now. That's one blocker for us. I've implemented it as far as I can, but we can't do that last proof verification of the two points that get provided in the input.
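For context, the two points mentioned come out of the point-evaluation precompile input, which per EIP-4844 is a fixed 192-byte layout: versioned hash, z, y, commitment, proof. The sketch below parses that input and checks the versioned hash; the final verify_kzg_proof pairing check, the part blocked on the wrapper, is deliberately left out.

```python
import hashlib

BLOB_COMMITMENT_VERSION_KZG = 0x01

def parse_point_evaluation_input(data: bytes):
    """Split the 192-byte point-evaluation precompile input into its
    fields and check the versioned hash against the commitment.
    The actual KZG pairing check on (z, y, commitment, proof) is the
    step that still needs the library binding."""
    assert len(data) == 192, "input must be exactly 192 bytes"
    versioned_hash, z, y = data[0:32], data[32:64], data[64:96]
    commitment, proof = data[96:144], data[144:192]
    # versioned hash = version byte || sha256(commitment)[1:]
    expected = (bytes([BLOB_COMMITMENT_VERSION_KZG])
                + hashlib.sha256(commitment).digest()[1:])
    assert versioned_hash == expected, "versioned hash mismatch"
    return z, y, commitment, proof
```

So "implemented as far as I can" corresponds to everything above; the returned tuple is what gets handed to verify_kzg_proof once the wrapper exposes it.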
So I've shifted today to start looking at doing some more interop. We have the same issue I think Phil mentioned: we get this verify_kzg_proof error when we generate blob transactions using our code and send them to Geth, and it rejects them with this KZG error. Again, that's probably a KZG interop issue, I'm assuming. And I'm hoping to start working on getting EthereumJS to sync with Prysm at some point this week. Our PR to pull in timestamp-based hard forks is still in flight; hoping that Gajinder and I will get that merged in the next couple of days, because that will hopefully allow us to start trying to get on the latest devnet spec. But otherwise we've made some good progress in the past week, just being able to generate blobs and at least sync with Lodestar, so that's promising.

Sorry, I was just going to ask: Lodestar is using the NPM package as well? As far as I know, yeah; I think you'd have to confirm, but when I looked at it, it looked like the same one when I was doing my interop testing. Yeah, I believe that's correct. Okay, I'll ping Dan to update the NPM version, because I think that's basically the issue here: it's using an outdated version. And if that's not the issue, then I'll look into c-kzg to see where the issues are.

Besu is definitely not ready for devnet-3. So far what's done is parsing of the transactions from the network payload. Hopefully this week we'll have somebody working on the transaction pool, and we'll start doing the KZG proof integration with c-kzg this week, but definitely not ready for devnet-3. Yeah, similar update for Erigon: progress, but not ready yet. And I think that's everyone. Is there another team that has an update? Yeah, hi.
Yeah, for Teku, we're working on storage, and we have things pending on networking and syncing. Our goal is to start implementing the spec tests, see if they pass, and hopefully some time in January we'll start joining devnet-3. Sounds good. Any other team that I've missed?

Yes, hi, I'm from the Grandine team. We're not ready for the testing either. However, we got some more numbers on the different elliptic curve backend libraries; I pasted the link in the chat. The interesting thing is, and it could be something wrong with our c-kzg binding, but we got significantly better verification performance with the Rust implementation. So, does anyone know, are there benchmarks for c-kzg without any bindings? I think there are such benchmarks that you can run from c-kzg without any bindings, yes. I don't know if they cover the operations you want, but there is a benchmark tool. Okay, if anyone knows, please send a link. Another interesting outcome is that, at least for proof generation, parallelism works pretty well with little effort, so I think it's possible to get really good results, with more effort, for proof generation over multiple blobs at once. That's the results, thanks. Thank you.

Any other client updates? I think the only one we don't have an update from right now is Nimbus. Henry, I see you're on the call; do you want to give a quick update? I thought there was an initial PR for Nimbus. Yeah, sure. Nimbus basically started over the last few weeks; I've been working on it with help from the Nimbus team. It's not going to be ready for a devnet for at least a few weeks, but basically the status is we've got some scaffolding in the core Nimbus repo; most of the scaffolding, I think, is in place. I'm working on the c-kzg bindings right now, as those of you on that Telegram group have seen.
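Circling back to Grandine's note that per-blob proof generation parallelizes with little effort: the pattern is simply that proofs for independent blobs share no state, so they can be farmed out to a pool. A toy sketch follows; `compute_blob_proof` is a stand-in hash, not a real KZG call.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def compute_blob_proof(blob: bytes) -> bytes:
    # Stand-in for a real KZG proof computation (e.g. a c-kzg binding
    # call); only the parallelization pattern is the point here.
    return hashlib.sha256(blob).digest()

def compute_proofs_parallel(blobs):
    # One proof per blob, no shared state between tasks. With a real
    # CPU-bound, GIL-releasing binding this scales with core count,
    # matching the "parallelism works with little effort" observation.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compute_blob_proof, blobs))
```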
And once that's done, which is basically today or tomorrow at the latest, I'll start wiring those calls into the core validation and gossip, and take it from there. So yeah, we've gotten a late start, but it's coming along. Very cool.

Okay, so it seems like there are definitely a couple of teams getting close to devnet readiness, with some single client-combo interop running locally. In terms of next steps, should we try to get a devnet launched in the next week? Do people feel like we need a bit more time than that? I'm curious: Prysm-Lodestar, or Prysm-Lighthouse, you all seem almost ready. What do you feel is a good thing to aim for?

I feel like as we get towards the holidays there are probably fewer people available, so if it was just us and Lighthouse launching a devnet by ourselves, it doesn't really make much sense. I think I'd prefer to wait until early January, but I don't feel strongly either way; it's up to others. But Terence, if it's up and running, wouldn't that make it easier for other people to join in? Right now it seems like testing is a little awkward; if the devnet was running, wouldn't that make it easier? Yeah, definitely, that's a good point. So yeah, I'd prefer if we can get it going next week with just a couple of clients; I think it would be worth it.

From Lighthouse, we agree. If we have some help from the devops team, I think we can launch a devnet maybe sometime early next week. There are some bugs that we're ironing out, but if we have something that everybody can try to connect to with their own client combinations, that's definitely helpful. Even if the devnet has bugs, it would still be helpful. Okay, so I guess, yeah, let's try and get it running.
And do people think it's possible to get this done before the next call, before Tuesday, or are we going to need more time than that? I'd say let's shoot for it; it's not a certain thing, but it feels like we're really close. Yeah, I agree. I think it would be good if we could leave for the holidays with the devnet running in the background; even if not everyone is on it, it's good for anyone who wants to test that it's there. Exactly. Okay, so let's try and get at least a minimal version of that going.

Just to make it a bit more explicit, what's our sense of the path to get that devnet up and running? It sounds like Prysm, Lighthouse, and Geth are probably the closest, with maybe Nethermind, Lodestar, and EthereumJS behind them. Do we want to start with two? Do we want to start with three or four? How are people feeling about that?

Prysm-Geth has been our combination for previous devnets, and we've only had that in the past, so that seems like the reasonable pair to get started with now. It would be nice to have Nethermind-Lighthouse also if we can get those in, but I'd be fine setting it up with just a single pair, again to help the others get testing. I do think Lighthouse is slightly ahead of us at this point, because they've tested multi-node syncing and we haven't. So feel free to replace Prysm with Lighthouse to start as well; it doesn't make that much difference either way. I just want to point out that we haven't tested multi-node syncing, which I can probably test in the next few days. Yeah, I think that's fine with us. Like I said earlier, we're ironing out the sync edge cases, but I think it will be good to go maybe sometime early next week. Okay.
So I think, yeah, let's start with Lighthouse-Geth, which seem like the two most ready ones. Then we can add Prysm to that, so Prysm-Geth, and then we can add Nethermind, and try out Nethermind-Lighthouse and Nethermind-Prysm. If we have even just Lighthouse-Geth up and running with some infrastructure by next week, that's already a good start, and then maybe in the week after that we can add Prysm and Nethermind to it. So if we can get the four clients in over the next two weeks or so, that would be really good. Yeah, I think this would be epic if you get this out before Tuesday. Huge.

Any other questions or concerns people have about getting this out? I had one question about the c-kzg library: do we have test vectors available? Sorry, I've been out for the last week and a half, so maybe it's already been answered. Yeah, there are test vectors available: there are test vectors for blob generation, and I've just added some for verify_kzg_proof after what EthereumJS told me. Thanks. For the test vectors, could we get a release that just turns them into JSON files? Yeah, right now you have to do a go run to use them. On the JS KZG wrapper, I also just pinged Dan in our company Slack, to re-emphasize that we want to cut a new release. Kev, do you think that's likely to fix the issue? Yeah, I think that's the main issue right now. Because if Dan cuts a new release, I can run it against the test vectors, which work for go-kzg, and quickly check that. We'll get that out today.
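If the vectors do ship as plain JSON, each binding could be exercised with a small loop along these lines. The field names here ("commitment", "z", "y", "proof", "valid") are an assumed shape for illustration, not the actual go-kzg export format.

```python
import json

def run_vectors(json_text: str, verify_fn) -> int:
    """Run verify_kzg_proof-style vectors against any binding's verify
    function; returns the number of failing cases. The JSON field names
    are hypothetical."""
    failures = 0
    for case in json.loads(json_text):
        got = verify_fn(bytes.fromhex(case["commitment"]),
                        bytes.fromhex(case["z"]),
                        bytes.fromhex(case["y"]),
                        bytes.fromhex(case["proof"]))
        if got != case["valid"]:
            failures += 1
    return failures
```

The appeal of JSON over `go run` is exactly this: every team plugs in its own `verify_fn` and cross-checks the same fixtures without a Go toolchain.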
All right, thanks. Sweet. Anything else on the devnet? Okay, let's do it: let's try to get a couple of clients running by next call, and then a couple more by the end of next week.

Okay, on the large block testing: Georgia says he couldn't make the call, but he gave a quick update on the Discord saying that it's ready to run, the devops team is prepared, and they're trying to do this tomorrow. Anyone else have comments or thoughts about this? Right, so hopefully we get the proper run done tomorrow, and eventually some more during the week, and then next week we can review them and figure out if and how we want to approach mainnet in January.

Next up, Gaby had an update on the builder spec, and I see you posted a HackMD in the agenda. Do you want to take a minute or two to walk us through it? Yeah, sure. Hi everyone, I'm Gaby. I've been working on the builder spec changes with Jimmy. We're participating in the Ethereum Protocol Fellowship and this is our first time doing spec changes, so any feedback is appreciated. The changes are not super big, but we need to update both the beacon API, to include the types for 4844, and also make some changes to the relay API specs. The changes are summarized well in the PR, but to give a quick overview: we needed to include the KZG commitments in the SignedBuilderBid container, which is the container returned when requesting the execution payload header. And we also needed a new endpoint: we decided to do a version two of submitBlindedBlock that also includes the blobs sidecar, with a new SignedBeaconBlockAndBlobsSidecar container.
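Roughly, the containers just described might look like this in simplified form. The names approximate the discussion above and the SSZ types are flattened to plain Python, so treat this as a shape sketch only, not the spec types.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class BlobsSidecar:
    # Sidecar carried alongside the block in the v2 flow (shape assumed).
    beacon_block_root: bytes
    beacon_block_slot: int
    blobs: List[bytes] = field(default_factory=list)
    kzg_aggregated_proof: bytes = b""

@dataclass
class SignedBeaconBlockAndBlobsSidecar:
    # The new container motivating the submitBlindedBlock v2 endpoint:
    # the block and its blobs sidecar travel together.
    beacon_block: Any = None
    blobs_sidecar: BlobsSidecar = None
```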
And yeah, the one open question we have for that last change was whether having this version two was the way to go, because we got some early feedback that there was a preference for reusing the existing API routes instead of creating a version two, but then Enrico suggested that having a version two in this case makes sense. So that's one thing where some more feedback would be great. And finally, to mention some action items: we're planning to implement these spec changes on the Lighthouse code base, and then once we have that ready, generate some test vectors to share with the Flashbots team for some early testing, because Sean from Lighthouse shared with us that there were some issues with this in earlier forks, so having that kind of testing early is a good thing to have. That's everything, thank you.

Cool, thanks for sharing. Alex had some comments in the chat; Alex, anything else to add? Yeah, I was going to say that I think we can get away with the v1. I was looking at the versioning rules, and basically they say we only want a new version if we break what's there, which we wouldn't need to do, because we can just add metadata that says this is a 4844 block versus something else. Either way, from looking at this document it generally looks good, and I'll go review the PRs. Nice work. Thank you. Unfortunately Jimmy couldn't make it because of the time, but we're working on this in parallel. Thank you. Of course.

Anything else on the builder specs? Okay, the last thing I had is the benchmarking stuff we talked about last week.
I don't know if there have been any updates there that anyone wants to share. Yeah. So I think Marius ran the benchmarks on his computer, and in the worst case we got around 60k. But this is the worst case, and I'm thinking maybe we should re-evaluate how we do the precompile benchmarking: maybe take 10,000 iterations and use the average case. To add here: the worst case doesn't mean it's the worst case in terms of the data; it's just that, on the same data, there's a distribution of runtimes. Right, and that's because of garbage collection rather than anything to do with the actual computation. So there's no way you could provoke it to do a hundred times the worst case by invoking the precompile a hundred times. I wish we had Marius or Mero or Martin on the call; I feel like they'd have a good intuition for what's a reasonable part of the distribution to target. But we can have that conversation offline. Yeah, that makes sense. I guess we're also waiting for other clients to post what their benchmarks are. Most other languages also have garbage collection, so it might be the same results. And like I was saying, Nethermind has some intermediate results that they can post later. So yeah, it'll be good to have a sanity check across at least two clients. Sweet.

Anything else on that front? Okay, anything else anyone wanted to cover? Okay, well, we can wrap up here. Like we said last week, we'll have a call next week and that'll be the last one this year. So thanks, everyone, and hopefully we can get this devnet launched.

Oh yeah, I'm not going to be here next Tuesday because I'm getting surgery, and Danny's going to be running the call, because I think Tim's not going to be here either. But I did want to just say something.
About six months ago we started this effort, and I think it felt like a long shot that this would happen in 2023. Now here we are, six months later, and we have every client close to interoping sometime in January. We're gearing toward a place where we're all going to be in person to make that happen, and then have a commitment at the core devs level to ship this as a fast follow, ideally in Q2 of next year. I've never worked on Ethereum core development before, but that feels like an incredibly remarkable achievement for six months, and I don't think it would have been possible without everyone in this room showing up every Tuesday for the last six months to make it happen, and being super positive and super engaged. I just want to say thank you to everyone. I'm not going to get to say it next week, so: thank you for all of your hard work. It's been an honor and a joy to get to collaborate on this, and I'm really excited about bringing it to production and scaling Ethereum together in 2023. Thank you.

I don't think we can end the call on a better note than that. Thanks, Jesse, for wrapping up this way. Thanks, everyone, and talk to you all on the Discord. All right, thank you. Bye.