Okay. We're close to the holidays, so I don't know how much bigger of a crew we're going to have; I think we should go ahead and get started. Okay, the agenda is issue 689 on the PM repo. We'll talk about some lingering spec items, which I think are very near just clicking the merge button; DevNet 3 updates, and I'm not sure who's going to give us the update, but someone can step up; George is going to give a quick update on the large block spam test; we'll quickly revisit the precompile benchmarking; and then we'll hit anything on the readiness checklist as we move into the new year.

Okay, so we had discussed many times how to handle unavailable data outside of the prune window. PR 3169 will be merged today, which I believe reflects the general consensus on this, both on this call and on the consensus call. If anyone has any final comments, please say so now or jump into the issue; really, the next hour or so is the time. Anything on this one?

The other one is also kind of a last call: how to handle the edge case where, in certain contexts, you might not be able to get a sidecar, whether that's past the prune window, outside the 4844 fork depth, or whatever it may be. The general agreement here was to have a particular error code for the resource not being available, so the requester can retry on the non-unified beacon block and blob sidecar endpoints. With the coupling, this was a known edge case we were going to have to work through, and PR 3154 addresses it. This is also in a state where it's going to be merged, so if there are any comments, jump in really in the next hour or two. Any comments?

Sorry, what was the PR number again?

This is 3154; I'll drop it right here. Okay, and then I did open up an issue that I shared.
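As a rough illustration of the error-code fallback discussed above, here is a hypothetical sketch. The names (`RESOURCE_UNAVAILABLE`, `handle_sidecar_request`) and the specific code value are illustrative assumptions, not taken from PR 3154:

```python
# Hypothetical sketch: a req/resp handler returns a distinct error code
# when a blob sidecar is unavailable (e.g. pruned), so the requester
# knows to fall back to the non-coupled, block-only endpoint.

# Response codes; the value used for "resource unavailable" here is
# illustrative, not the one standardized in the spec PR.
SUCCESS = 0
RESOURCE_UNAVAILABLE = 3

def handle_sidecar_request(store, slot, prune_window_start):
    """Return (code, payload) for a blobs-sidecar-by-slot request."""
    if slot < prune_window_start:
        # Past the prune window: we legitimately no longer have the data.
        return RESOURCE_UNAVAILABLE, None
    sidecar = store.get(slot)
    if sidecar is None:
        # Inside the window but still missing: same code; the caller can
        # retry against the block-only endpoint.
        return RESOURCE_UNAVAILABLE, None
    return SUCCESS, sidecar

# Usage: a requester detecting that it must fall back.
code, sidecar = handle_sidecar_request({}, slot=10, prune_window_start=100)
assert code == RESOURCE_UNAVAILABLE and sidecar is None
```

The point of a dedicated code (rather than a generic error) is that the requester can distinguish "peer is misbehaving" from "peer legitimately pruned this" and adjust its retry and scoring behavior accordingly.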
I didn't notice this earlier, but where we're doing the data availability check in the spec is a bit strange given how the spec has been designed thus far. It brings in a hidden, cached input to the state transition function ("hidden" may not be the right word; an additional input other than the block), and even data availability sampling would be kind of a weird dynamic call to the network given how things have been designed. It's more appropriate to put it in fork choice and have it be a blocker on getting the block into the tree. From an engineering standpoint, things are done in various different places and results are cached; for example, you do parts of the state transition function when you're checking gossip, like the proposal, so I'm sure you probably cache that, and then you go into your fork choice. So the actual implication on engineering I think is pretty low here; it's more about getting the spec into a slightly more standard place, and about where this would be tested in the spec. I believe among everyone that works on the spec regularly there's general agreement here; Mikhail kind of agrees here too. But I will leave this up for discussion; I'm happy to take questions or discuss right here if anyone has input. I mainly want to draw your attention to the fact that this is likely going to be shifted around a little bit. Please take a look if you're curious or want to weigh in; this is 3170 on the consensus-specs repo. Are there any other spec updates or spec discussions? Any pressing items on the specs, the EIP, or the engine API?

I'd be interested in talking a little about an idea I asked about: rather than gossiping blobs, could we possibly gossip just blocks and then directly request blobs via RPC once we see the block?

There's sort of a chicken-and-egg problem here.
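The restructuring being proposed can be sketched roughly as follows. This is a simplified illustration in the style of spec pseudocode, not the actual text of PR 3170; all function bodies and names besides the general shape are assumptions:

```python
# Sketch of the restructuring: the data availability check moves out of
# the state transition function (which stays a pure function of state
# and block) and becomes a fork-choice admission condition in on_block.

def state_transition(state, block):
    # Pure function of (state, block): no hidden network or cache inputs.
    state["slot"] = block["slot"]
    return state

def is_data_available(block, available_roots):
    # Stand-in for the real check (retrieving and verifying the blob
    # sidecar, or eventually sampling).
    return block["root"] in available_roots

def on_block(store, state, block, available_roots):
    # Fork choice blocks the block from entering the tree until its
    # data is known to be available.
    if not is_data_available(block, available_roots):
        return False  # not added to the block tree
    state_transition(state, block)
    store.setdefault("tree", []).append(block["root"])
    return True
```

This matches the engineering reality described above: clients already interleave cached partial checks between gossip validation and fork choice, so gating tree admission on availability is where the check naturally lives.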
If you're only doing requests over RPC, then initially you don't necessarily have the blobs well seeded in the network, and you end up just hammering exactly whoever sent you the block to get what you just asked for. There are strategies on gossip networks, and we have this in gossipsub implicitly, where you push to some peers and announce to others. So it might make sense to find a more explicit hybrid strategy here, where the push/announce ratios may be different. But I think what you end up with, if you have purely announce, is a much slower and stunted gossip, and if the blocks are getting far ahead of where the sidecars are, some peers don't even have what you want and you'd be searching around for it. So does that make sense, that you kind of need some amount of push to get the network seeded?

Yeah, the way I think it would have to work is you wouldn't be able to propagate blocks until you had the blobs, and then you would always ask whoever just gave you the block for their blobs. So I could definitely see that being really slow. But on the other hand...

Please, go ahead.

I was going to say, on the other hand, not gossiping blobs would reduce the bandwidth concerns, and it also generally, structurally, looks more like what full data availability sampling might look like. I thought it was an interesting idea.

So I guess what we're trying to do here is potentially increase the rounds of communication but reduce or eliminate the amplification factor, because I know, at least if peers are honest, that if you've given me the block, you do have the blobs and I can ask for them, and if three other people also send me the block, I don't ask them for the blobs again.
So then I take some round-trip communication hit on each step to eliminate the gossip amplification; is that the kind of strategy here?

I mean, it sounds like it could work. I don't really know; I just thought this line of thinking was interesting to explore, and it sounds like it's been explored to some extent. But isn't this already part of the spec, that we're not supposed to announce the blobs? It's not implemented in devnets, but it's part of the specification as far as I recall.

I know we send the blocks with the blobs together, so for actual blocks... Oh, so this might just be for transaction gossip then, rather than block gossip.

Yes, yes, there was a trade-off on that one. But transaction gossip is arguably less timely, so the announcement plus round trip is more okay there; at least it can be handled with a little less care than with blocks. I think that could work. I think it complicates our gossip rules a little bit, although we can probably shove it in there. It's going to make each hop take longer, but it's going to greatly reduce the amplification factor on gossip. The other thing to consider here is Episub, which attempts to reduce the amplification factor of all gossip, but in a more generic way. But I'm certainly willing to put this on the table as we discuss things in January; it might be very worthwhile.

Yeah, I'd be interested in testing it out, experimenting a little bit.

Maybe you or anyone could open up an issue? I know this was more of a DM chat.

No, I don't think there's one yet. Okay, I can open one after the call.

Yeah, that'd be good, thank you. Any other initial thoughts on that?

Not from me.
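The trade-off being weighed above can be put in back-of-envelope numbers. All sizes and the mesh degree below are illustrative assumptions, not measurements from the network:

```python
# Back-of-envelope comparison of pure-push gossip vs an announce/request
# scheme for blob propagation. All numbers are illustrative.

def push_cost(blob_size, mesh_degree):
    # Pure push: each node may receive a full copy of the blob from up
    # to mesh_degree peers; that duplication is the amplification factor.
    return blob_size * mesh_degree

def announce_cost(blob_size, announce_size, mesh_degree):
    # Announce-only: every mesh peer sends a small announcement, and the
    # node requests the full blob exactly once, trading the duplicate
    # payloads for an extra round trip per hop.
    return announce_size * mesh_degree + blob_size

blob = 128 * 1024   # one blob payload in bytes (illustrative)
ann = 100           # announcement message size in bytes (illustrative)
degree = 8          # gossipsub mesh degree D (default-ish, illustrative)

print(push_cost(blob, degree))           # 1048576 bytes received
print(announce_cost(blob, ann, degree))  # 131872 bytes received
```

With these numbers the announce scheme cuts per-node ingress by roughly the mesh degree, which is why the latency cost per hop is the real question, not the bandwidth.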
So let's take it to an issue and discuss. There are probably a few different strategies to reduce the bandwidth here; that will be one, Episub will be one, erasure encoding another. Let's keep that conversation going in January. Thank you. Any other spec items?

DevNet 3 updates. I've not been following this closely, and I'm not sure who on this call has the info, but if you do, please speak up.

I guess I can structure this by requesting updates from the people that have been working on the various clients. I can start with Prysm. Terence already posted his update on the implementers' call PM issue, but basically Prysm is pretty much up to date with the devnet. There are a couple of issues, like Terence highlighted that blobs-by-root isn't quite implemented, but I think that's something we can do eventually; it's not strictly necessary to get the devnet operational, since it covers one edge case. But yeah, Prysm is ready. And Roberto, how's Erigon going? I think you were working on that.

Yeah, I've made considerable progress. It's not ready yet though; maybe by the end of the week. I'm hitting some pieces of code that are quite different from Geth, which set me back a little.

And speaking of Geth, Geth is also pretty much implemented. And is anyone from Nethermind on the call? Maybe we can get an update on Nethermind.

Yeah, we aligned our implementation with the latest Geth and Prysm changes, and it looks like it's working. There are some known issues, but nothing very critical; it should still be able to run in the network.

Awesome.

Yeah, I'm still sorting through a couple of sync bugs, but I think I identified the problem as of this morning, so I'm going to try to fix and test that today. Other than that, we should be there.

Excellent. Any other client devs on the call want to share updates?
Yeah, Lodestar here. We have been successfully able to interop with Geth, as well as generate a blob transaction and include it in a block. Mostly we are ready for interop.

Excellent, thanks.

I can give the EthereumJS update as well, since the person who usually does isn't here. On EthereumJS we have also made considerable progress, and we are able to interop EthereumJS with Lodestar. I think we would also be able to join the devnet.

Great, that's good news. Any other client devs?

Yeah, Teku here. We're progressing on the storage for blob sidecars, and we are progressing on the networking for the RPC methods. The important piece still missing is the sync logic, but everything is progressing, and we expect to be able to join some testnets, hopefully by the end of January, to be ready for interop.

Well, thank you for the update. I think that covers all the client devs. Some extra news about the devnet: we actually do have a tentative one, and I'm very particular about the word tentative here, because we only have one client combination working, which is Geth and Prysm. We have a devnet deployed adhering to the DevNet v3 spec with those two nodes. I would like to add more clients into the devnet, specifically Nethermind and Lighthouse since they're the closest, and I guess Lodestar, so that we can test client interop behaviors. I'll post the details of the devnet configuration and parameters in the EIP-4844 testing channel. But yeah, hoping to get some more contributions here on the devnet.

Is the devnet already running?

Yes, it is.

Okay, so maybe we can try syncing the devnet locally; that would be a good idea.

Yep. Thank you.

Nice. Are you sending blob transactions on this devnet?

Yes, I am. And yeah, there's a HackMD guide, similar in style to the DevNet v1 one, whereby you can interact with the devnet.
We have a bunch of public endpoints exposed, so developers can start building some tooling on top of this one, similar to the previous devnet. Hope it all works out.

Fantastic, great work. I have a quick question. I'm looking at this DevNet 3 doc and at the milestones, and at M3: how do the EL plus CL interop test vectors work? I apologize, I'm not familiar with this. This is your repo, Mofi, so I guess I'm asking you.

Okay, so basically we have a suite of tests, styled like hive tests but more succinct. For every client implementation, we add it to that repo and execute those tests, and if they're passing, then they're good, at least for the most part. We've already gotten some contributions to get those tests passing for various clients, but we've been running into a couple of issues integrating the clients with the interop repo.

Gotcha. Yeah, the hard problems. Just for my edification: are Geth and Prysm passing M3?

Geth and Prysm are not passing all of it. There's one particular test that Prysm is not passing, which is historical sync. I suspect the issue is mostly on the client side; it's more of a technical issue than a consensus-critical issue. But for all intents and purposes, Prysm and Geth should be working fine for the devnet.

Okay, thank you. Any other discussion points for DevNet 3? Great, thank you everyone.

Large block spam test: we have a status update item here. Maybe George... George is not here. Does anybody have any visibility on this? I know they're going to be running another wave of tests with the additional monitoring up, but I do not know the status of that. Okay, well, we can circle back outside of this call and see how the monitoring work is going. I guess to contextualize the bandwidth reduction proposals, whether it be Episub or some different push/pull strategy:
these types of tests and simulations will hopefully help inform us as to whether we want to add the additional complexity of one of these bandwidth reduction proposals.

Okay, precompile benchmarking. Kev, are you still here? I saw that you may have dropped.

Hello. Yeah, I'm still here. Yeah, I wrote in the chat: Nethermind gave some numbers last week, around 52k, so it's closer to the original estimates. And Nimbus and the Java client, I think we're still waiting for estimates from them.

Okay, so we're increasingly honing in on that 50 to 60k number, and nothing unexpected has shown up on those benches after we resolved the negative case?

Right, right. It seems that we just need to optimize go-kzg a bit more.

Okay, because that's at like 67?

In the best case it's 67, but in the worst case it was more than 100.

Is this the garbage collection?

Yeah, it was just doing a lot of allocations.

And what's the average for go-kzg?

I can't remember what Martin said.

And this is with which library?

I don't remember which default library it was using; it might be kilic.

Yeah. Okay. With arkworks, I think the worst case was better than go-kzg's best case, so there's a big difference. And help me understand the allocations: are there lots of allocations in one call, such that garbage collection kicks in, or is it the benchmarking with repeated calls that ends up blowing up the allocations and hurting some of these calls?

It seemed like it was in one call.

Okay. But I guess with the other libraries coming in at the numbers we expect, the signal is to fix that library rather than to tune to it. We can continue this in January, thank you. Anything else on precompile benches? Excellent.

I guess the link that Tim had in here was just linking out to the readiness checklist. I don't have particular items on it.
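The question raised above, whether the allocations happen within a single call or only accumulate across repeated benchmark iterations, can be probed by measuring allocation for one call in isolation. A minimal sketch in Python using `tracemalloc` (the go-kzg equivalent would be Go's `testing.B.ReportAllocs` and pprof); the `allocation_heavy` function is a hypothetical stand-in, not the real verify call:

```python
# Measure peak allocation during a single call. If one call already
# allocates heavily, GC pressure comes from within the call itself,
# not from the benchmark harness repeating it.
import tracemalloc

def allocation_heavy(n):
    # Hypothetical stand-in for a verify call that builds many
    # short-lived temporaries (one small list per iteration).
    return sum(list(range(i, i + 8))[0] for i in range(n))

def peak_alloc_of_one_call(fn, *args):
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak  # bytes allocated at peak during the single call

single_call_peak = peak_alloc_of_one_call(allocation_heavy, 10_000)
```

Comparing this per-call peak against the benchmark's cumulative allocations is what distinguishes the two scenarios discussed above.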
It looks like testing is certainly a pretty important item, with respect to hive and the transaction pool, especially on the execution layer. Any updates on testing in general?

Okay, so I did update the notes with the progress of hive testing. There are some good PRs from Mario; Mario put up some PRs. We added support for Go workspaces in hive, so now we can deduplicate the devnet-related code of these simulators, meaning the devnet that basically runs in the simulators that spawn these clients, for 4844 and for other future EIPs. Aside from the workspaces and the code deduplication, we've also been working on some extra features, like metrics support in hive. My hope is that eventually we'll have some benchmarking in hive, where we can automate metrics; that'd be really useful for the blobs benchmarking. And I believe Mario is working on the withdrawals testing. With that in place, I think we can basically implement 4844 testing on top of it. That's the sequencing here: if we're going to test the engine API post-Shanghai, then we might depend on the withdrawals testing.

Got it. Okay, any questions for proto, or any further comments on testing? Okay. The readiness checklist: I'm going to do a pass on it; for example, the item about setting the gas price is still in there even though that PR is closed, and a few things like that.

Danny, can I circle back to testing real quick with a quick question? Do we have an ETA on updated retesteth cases, specifically covering the KZG precompile contract? I was wondering if anybody had one, or if there were plans to start working on ref tests for that. The same question basically extends to all the different types of functionality we need to add.

Yeah, I can circle back on this outside of the call. I know the intention is there; I just don't know if it's at the immediate top of anybody's list. Marius, do you have any visibility on that?
If you're speaking, we cannot hear you. I'm going to make a note on that and follow up.

No problem, thank you.

Okay. Anything else on the readiness checklist? Really, any other items we want to discuss today? Excellent. Thank you everyone, and happy holidays. We will reconvene this call, I believe, the first week of January.

Yeah, that's January 3rd, and Tim will again be leading the call.

Oh yes, thank you for reminding us. Okay, cool. Happy holidays everyone, talk to you soon, take care.

Happy holidays. Thank you.