Okay, let's get this off. We have a group of people here, and I'm sure more will join in. Welcome to the fourth of these 4844 implementers calls. We have a ton of spec stuff today as always on the agenda, and then hopefully we can get to some updates on DevNet 3 and the large block test that we wanted to do. And then Terence had a couple of PRs that he's put up that he wanted to get feedback on, if not on this call, at least async, so we can go over those quickly. And maybe, to kick us off, Ansgar, you gave some updates on a couple of your PRs in the agenda yesterday. Do you want to take a minute to quickly walk us through where those three are at?

Sure, yeah. There are always a few of these small ones. Also just to check my microphone works well this time around. Great. So basically, I think it's four, but one of the PRs didn't have a status change; that's the one where we just discussed whether or not you have the minimum price. So for the other three, something actually happened. First, the throughput reduction PR is merged, so now the target and max are a quarter megabyte and half a megabyte. The understanding, of course, is that once we collect metrics, it could be that we end up setting a slightly different throughput for bringing this to mainnet, but as a default I think this makes more sense now.

And then the second one is the precompile return values. That one is still not merged, just because I had some concerns last call around the formatting of the return values. I looked into it a little more, and it turns out there is indeed some precedent with existing precompiles that handle this differently. There's the bn128 pairing check precompile, for example, where the return value is padded to a full 32-byte word, and it doesn't explicitly fail when the pairing check fails; it returns a success flag instead.
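As a quick illustration for the notes of the precedent being described: the existing bn128 pairing check precompile returns a single 32-byte big-endian word, 1 on success and 0 on failure, rather than reverting. A minimal sketch of that return-value convention (the function name here is illustrative, not from any client codebase):

```go
package main

import "fmt"

// wordReturn mimics the convention of the bn128 pairing check precompile:
// instead of failing when the check does not hold, it returns one 32-byte
// big-endian word, 1 on success and 0 on failure.
func wordReturn(ok bool) []byte {
	out := make([]byte, 32)
	if ok {
		out[31] = 1
	}
	return out
}

func main() {
	fmt.Printf("%x\n", wordReturn(true))  // 32 bytes ending in ...01
	fmt.Printf("%x\n", wordReturn(false)) // 32 bytes of zeros
}
```

The open question on the call is whether the point evaluation precompile should follow this padded-word precedent or use a different format; nothing here prejudges that.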
So I've reached out to the Solidity team just to get their take on whether it makes a difference and whether it's worth following precedent here. It's a small change, but nothing in the tests depends on this, so I'd rather get it right and merge the thing that we can actually bring to mainnet, and not have to change it later on. So this one is still pending, waiting for clarification there.

And then the last one, the one I had forgotten last week, was the mempool situation with not automatically broadcasting blob transactions. I opened a PR that adds a dependency on Marius's EIP-5793, which is the eth/68 transaction type announcement. But I do have one question, so maybe we could briefly discuss it here. The way the EIP works is that it only adds the transaction type and the size of the transaction to the announcement message; it does not itself prescribe whether or not clients still broadcast these transactions. So this change basically just says that in your announcement message you have to include this information, but whether clients still auto-broadcast these transactions by default, or choose to no longer auto-broadcast over certain thresholds, is up to clients. For now I worded it that way and just gave it as a recommendation to stop auto-broadcasting. I was wondering if people feel strongly that we should instead make it required behavior for clients for blob transactions, so not just a recommendation but an actual requirement. I don't know if anyone has opinions on this. Otherwise, having it just as a recommendation makes sense to me, but...
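For reference, the mechanism being discussed is that eth/68 (EIP-5793) announcements carry the transaction type and size alongside each hash, so a receiving node can decide which bodies to request instead of depending on senders auto-broadcasting them. A sketch of that receiver-side choice, with all names, the type byte, and the threshold being illustrative rather than taken from any client:

```go
package main

import "fmt"

// announcement models one entry of an eth/68 NewPooledTransactionHashes
// message: per hash, the sender now also announces type and size.
type announcement struct {
	TxType byte
	Size   uint32
	Hash   [32]byte
}

// pickFetches is an illustrative receiver policy: request only bodies at
// or under maxSize, so large (e.g. blob-carrying) transactions are pulled
// deliberately, or not at all, rather than auto-broadcast to us.
func pickFetches(anns []announcement, maxSize uint32) [][32]byte {
	var out [][32]byte
	for _, a := range anns {
		if a.Size <= maxSize {
			out = append(out, a.Hash)
		}
	}
	return out
}

func main() {
	anns := []announcement{
		{TxType: 0x02, Size: 500, Hash: [32]byte{1}},
		{TxType: 0x05, Size: 200_000, Hash: [32]byte{2}}, // large blob-style tx (type byte illustrative)
	}
	fmt.Println(len(pickFetches(anns, 128*1024))) // only the small one survives the filter
}
```

Whether skipping the fetch stays a recommendation or becomes required behavior is exactly the open question on the call.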
Yeah, and I guess maybe once we do this blob spam test, if we see something break at the mempool level or gossip level, then maybe we want to make it more than a recommendation.

Yeah. Marius doesn't look like he's here, but I'm just referencing back to the execution layer workshop at DevCon. I think the key thing we were trying to mitigate was DoS risk to the execution layer, basically giving the execution layer more optionality in whether they were going to take in blobs or large transactions or not. So I can imagine Marius saying that it actually is really important for it to be a requirement rather than a recommendation. But I think we probably need someone like him or Peter to weigh in on how strongly they feel.

Well, I'm not sure that helps, because if someone's trying to DoS the network, they're not going to follow that anyway, right? They're going to broadcast blobs like mad.

Yeah, but if it's a requirement, not a recommendation, I think the idea was that you could terminate the peer connection. They can broadcast blobs as much as they want as long as they're connected to peers, but not if every peer drops them, right?

Yeah, but anyway, I guess we can revisit it. It seems to me like you could drop them anyway, right? You've noticed some spamming and dropped them. Otherwise, how do you tell that they're actually broadcasting something? Right, yeah. I suppose, though, if everyone's required to request rather than broadcast... yeah, I suppose so. Maybe that does help.

All right, sorry. Could somebody articulate the concern in concrete terms, what conditions we're worried about violating? I guess that has still been a bit abstract for us. What are we trying to find? What does "break" mean?
So I guess if nodes are just unable to process the blocks, or get knocked offline because of the increased bandwidth requirements of getting a bunch of blobs that are either invalid or... yeah.

Are the concerns written down somewhere we can point to, or can somebody own that? I suspect if they're somewhere already, they would be in that EIP Ansgar linked to, but let me check real quick.

Well, it's basically just the existing mempool issues section in the EIP. It's a really small section, but the idea is just that it's not only bandwidth; it's also compute, given how expensive blob verification is. The idea is that you want to have some mechanism to throttle. Ideally you could also do that in the existing world with peer scoring, but the execution clients just don't have a concept of peer scoring. So by the time they would validate blobs and see that, for example, there are invalid blob transactions in there, they don't keep track of which peer those came from. That would mean a complete re-architecting of the mempool to be able to handle this. Whereas if you do it with announcements, then all of a sudden you have explicit control: okay, I only pull blob transactions from this peer once every so-and-so many seconds, and otherwise you just don't respond.

Sorry, Ansgar, zooming out, I guess my point is that the stress test that we're doing is not testing blobs; it's just calldata transactions. And I think I heard that the concern is not just bandwidth, it's also compute because of the KZG verification cost in the mempool. Is that something that we'll be testing with this stress test? Does everybody in this group understand what the case is here?

So I think the stress test actually tests something different. We have two different concerns: the concerns about the size of the actual assembled blocks,
and then we have the mempool, and those are completely separate. What we are testing with that stress test is the size of blocks, because that is consensus critical, right? Everyone has to download these blocks, and so they might get kicked from the network. The mempool side of things is somewhat more forgiving, because if you're in a very constrained bandwidth situation, you don't have to actually run a mempool; you don't have to propagate all these transactions. But the compute side, for example, is much more problematic there, because for block verification the compute is really negligible, you do one check for the entire assembled block, but for the blob transactions in the mempool you have to do one check per transaction, and there might be hundreds of transactions coming in. So it's much more of a compute issue there. Again, the stress test we're doing is just for block propagation; that's separate from the mempool.

On this one, I feel like we probably need Marius to weigh in, because he was the person who had a strong opinion on it. Given he's not here, do we want to push this async and ask for his feedback on the PR?

Yeah, I think that makes sense. And the test, even just on the bandwidth side, is really valuable, right?

Yeah. I'm good, we'll get it done this week. I'm writing this today; we'll try to run it on Goerli by the end of the day, and might start doing more coordinated tests over the week.

Cool. Also, just in terms of philosophy, very briefly, I think it's important to point out that mempool concerns are a bit less of an issue, because we can always launch the EIP with a mempool that is very restrictive, maybe a little less efficient in propagating transactions but one that doesn't actually break nodes, and then make propagation performance better over the long run, whereas block propagation really has to work, right?
Because the entire network breaks down if it doesn't work. So the mempool is more forgiving. Okay.

Okay, so those were your three, Ansgar, and you said there was a fourth one that you hadn't gotten merged or given an update on. Oh, the minimum cost one, okay, so no update there. Exactly. Okay, perfect.

So next up. Danny can't make it, unfortunately, but he was reviewing the spec yesterday and highlighted this issue: I think two calls ago we decided to not verify the blobs as we are propagating them, and Danny was saying that this means any node on the network might be able to propagate an invalid blob, which is not something you would like. I know, Mofi, the two of you were chatting about this yesterday. Do you have a quick update on where things are at there?

Yeah, I think we sort of figured out that it is a concern. The options we have are to either use signature verification to deter invalid sidecars from being broadcast, or to rely on the KZG commitments. And based on the conversation, I'm leaning towards using signature verification, as Danny proposed in the PR. Open to feedback there from anyone else that has looked at it.

So we were looking at the time it takes to do the KZG verification versus signature verification, and I think the difference for 256-kilobyte blobs is like two milliseconds. I think that's probably not significant enough to warrant the signature, just because the signature requires changes to the beacon API and the validator client, and it'd be nice to avoid those changes if we don't have to. But if we end up going with bigger blob sizes, it would be okay to add it later too. So my two cents is that maybe we shouldn't prematurely optimize for this, because we could also do things like running the KZG verification of the blob in parallel with other aspects of gossip verification.
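The parallelization idea mentioned here, hiding the cost of the KZG check behind the other gossip checks, can be sketched as follows. The check functions are stand-ins; nothing in this sketch is from an actual client:

```go
package main

import (
	"errors"
	"fmt"
)

var errBadProof = errors.New("invalid proof")

// validateSidecar starts the (relatively slow) KZG verification in a
// goroutine so it overlaps with the cheaper gossip checks. If the cheap
// checks fail, we return immediately without waiting for the KZG result.
func validateSidecar(cheapChecks, kzgCheck func() error) error {
	kzgDone := make(chan error, 1) // buffered so the goroutine never blocks
	go func() { kzgDone <- kzgCheck() }()
	if err := cheapChecks(); err != nil {
		return err
	}
	return <-kzgDone
}

func main() {
	ok := func() error { return nil }
	bad := func() error { return errBadProof }
	fmt.Println(validateSidecar(ok, ok))  // <nil>
	fmt.Println(validateSidecar(ok, bad)) // invalid proof
}
```

If the wall-clock cost of the KZG check is comparable to the rest of the pipeline, this overlap is roughly free, which is part of the argument against adding a separate signature check prematurely.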
Yeah. So you're saying that because the blobs are not that big now, it's not the end of the world if you're not doing the signature check. Is that correct?

Yeah, we do need something: either the KZG verification or the signature check, and the KZG verification gets substantially slower the bigger the blobs are. If we're targeting relatively small blob sizes, it might not warrant signatures yet. If things are going well and we find we actually have a lot of headroom, I think it'd be okay to add signatures later, because signatures impact the scope of work by requiring changes to the beacon API and validator clients.

Right, right. I'm curious if any other client teams on the CL side have thoughts on that?

Yeah, I agree. It feels like a premature optimization.

Okay. Is this worth bringing up on the CL call Thursday, or should we just comment on the issue and try to resolve it async in the next couple of days?

I think it's still worth bringing up again, but yeah, I'll reply on the issue.

Awesome. If you want to share that on the issue, that'd be great, and then we can follow up on Thursday's call. Sweet.

Okay, so next one after that: I just wanted to follow up on this rebase PR. I know we've been working on this a lot and there are comments on it from yesterday and today, but does anyone have anything they want to bring up on this?

Just to reiterate the point of the rebase PR: it's to make it easy for client devs that are already working on withdrawals to add 4844 on top of that. There's a clause in the spec in the PR to make it easy to disable withdrawals just for testing. And I think Danny brought up a comment of, if we do that, what are the other ways of doing it?
And the approach we went with is to simply no-op the withdrawal-sensitive functions, such that there are no withdrawals in the beacon block and the withdrawals root is whatever the root of an empty list would be. So that's where we're going with testing. This lets us test EIP-4844 as soon as possible on ELs that haven't quite fully implemented withdrawals, without introducing withdrawals in the EL, and it also lets us avoid any bugs in the EL or the CL that are specific to withdrawals.

Okay. So I guess we can just keep following up on the PR there; it doesn't seem like there's anything urgent to decide here.

Okay. The next thing I wanted to make sure we covered was DevNet 3. Last week, six teams said they were trying to get this implemented so that we could launch a DevNet around November 30th, which is two weeks from now. So I'm curious to hear about the different implementations, how things are going, whether people think this is still possible, or if there are any issues that seem to block this.

I can give an update on some of the work I've been doing. For DevNet v3, I've got Erigon and Prysm updated to most of the new spec. The pending issues are the PR we just discussed, some of the Capella rebasing, as well as, I think, coupling the sidecars with the beacon blocks in the consensus layer. So those are pretty close; they're in great shape. I've also been spending some time pulling the remaining KZG code, mostly from the execution layer, into what I hope is a clean library. That's in the crypto/kzg package within geth. Even though it's in geth, I'm using it both for the consensus layer and the execution layer KZG-related, or EIP-4844-related, functions, so all the go clients can hopefully share that.
Likely that stuff will move into go-kzg or some similar external library, but for now we're trying to get it right there; once it's ready we can think about moving it out. There are also some discussions around the edges, like whether it should contain versioned-hash-related functions or just be KZG. But anyway, I think it's in pretty good shape for people to start using if you're writing in go, and I'll be starting to integrate it in Erigon this week.

Nice, pretty cool. Any updates from Nethermind?

Yeah, so we are integrating everything into the master branch. The new gas calculations are in there, and probably we'll have the blobs bundle endpoint being added. Everything is going quite smoothly, and I hope we will join with everything needed. The main difficulty was implementing all the new binary encodings we have in this EIP, I mean SSZ encoding, plus a different hash algorithm and a somewhat different layout for transaction encoding, which slowed us down a bit. But overall it's okay. The main problem I see now is that we still don't get the same outputs from the c-kzg library as from the go-kzg library, because as far as I know the implementations differ a bit in terms of some internal hashes, since they're based on different binary layouts. That's why it's hard to synchronize with geth and the other go implementations right now, and we just skip verification for now. I hope we'll get an update from the go library, have some tests for both libraries with the same inputs, and get the same outputs. That's the only issue which is not resolved for now.

Got it. And who is working on harmonizing these libraries? I've noticed a bunch of people working on them, but is this just something that we're tracking, or... Kev?
I started with test vectors, but I don't think we've started integrating any of them. I can definitely speak with Łukasz offline to interop the go-kzg with the c-kzg wrapper that he's using.

Okay, nice. Would that library I have in geth be a solution to this?

Yeah, so I'd basically just make sure that it is interoperable with whatever Łukasz is using.

I had a question about the c-kzg library. Are we expecting all the bindings for different languages to be hosted on the c-kzg repository? Who would be responsible for maintaining the different language bindings? Would it be the c-kzg repo, or, for Rust for example, maybe we at Sigma Prime could have the Rust bindings in our own repository, something like that. So I'm just wondering what other language bindings we're thinking of.

Yeah, so right now the go and the c-kzg stuff is pretty separate, and even the go-kzg stuff is somewhat scattered about. So I think the short answer is no, we don't have a good story for that yet. I'm not sure we need to consolidate it all in one place; we just need to make sure that they all work the same. But maybe people have other views.

Yeah, we were discussing this today, actually, because we are starting work on the Java binding, and we were asking ourselves if we want a separate repo for the Java binding, or to stick everything in the current bindings directory in the library. If the end state is having everything in the same repo, I would prefer the second. What do you think?

Yeah, makes sense. But another question I had was: in the end, who maintains the c-kzg library itself? Whoever maintains that library would also have to maintain the bindings for the different languages, and that might be hard; I'm not sure about that as well.
Well, the solution will be that it's maintained by several groups. Maybe the more C- and crypto-oriented people maintain the core, while the bindings inside the directory are mostly maintained by different groups of people. I don't know if that's something that makes sense.

Yeah, makes sense to me. I made a draft PR for the Rust bindings today to Dankrad's fork of the c-kzg library. That was our intention, at least from Sigma Prime: we would want to get the Rust bindings merged into that repository, and in terms of maintenance of the bindings themselves, we could help with reviewing PRs and things like that.

My understanding from the latest conversation in the Telegram group is that Ramana is going to maintain the core crypto and C libraries, and then, just like you're suggesting, we're going to have separate people maintain the bindings, but all in one place.

Yeah, I just wanted to clarify what I added in the comment: I think if it's just bindings for the c-kzg library, then it probably makes sense in the same package. go-kzg, however, is not a wrapper around c-kzg; it's its own thing, so that should probably remain separate.

Yeah, so Rust has a completely different implementation and is not simply a wrapper, am I right?

I think there are multiple pure-Rust implementations around too, but I was talking specifically about the bindings to c-kzg. Okay.

There's also a comment in the chat about working on common test vectors. Is that something anyone is working on or wants to work on? I guess we don't have to do this now, but I know in the past, for say BLS, we would fuzz all the libraries against each other pretty extensively. So that seems like something that's probably valuable to do as well, but it doesn't have to happen now. Okay, and I guess, yeah, I'm curious to hear.
So that was mostly for folks on the EL side with regards to participation in the DevNet. On the CL side, we have Prysm, Lighthouse, and Lodestar saying they would participate. Anyone from those teams want to give a quick update on where things are at?

So for Lighthouse, we're still planning to participate in the next DevNet. Right now I'm working on the peer-to-peer portion of things, and Pawan is working on integrating the Rust bindings for KZG that he just finished. And yeah, I think we're on pace to get there.

Nice. For Prysm, I'll speak for the Prysm team and the stuff I'm working on. The work is basically incorporating the 4844 rebase on top of Capella. We are going ahead with what I have in the rebase PR, targeting to get it merged soon, and also working on adding the logic for EIP-4844 on Capella. It's still going; just a lot of development and refactoring work.

Nice. Lodestar?

Yeah, Lodestar I think is on track to be included in this DevNet. We now have Lion from Lodestar on this call, who might want to be giving updates in the future as I try to hand this back to ChainSafe. I think it's looking good. We now have Lodestar integrated into our interop testing repo, with only the first of our test suites running against it, but it's using the updated gossip topic and the chain advances, and that all works. It doesn't actually save the blobs yet, but most of the foundation is there.

Sweet. And does anyone else think they might join the testnet, or is it still those six teams?

Well, a small update from Teku. We are working on Capella and 4844 at the same time. Capella is definitely further ahead compared to the 4844 work, but we are progressing on both in parallel and pushing things directly to master.
So things are progressing, but I don't think I'm going to change my assessment of whether we are able to join the DevNet; I don't think the next one will be the one for us.

Sounds good, happy to hear about the progress. Anyone else want to give an update?

This is Andrew from EthereumJS. I'm just going to comment that we're not going to be ready for DevNet 3. I've got a mostly done local implementation and I'm still testing it; I'm still trying to get it to work against a local version of the DevNet, just for interop. So until I feel like we're able to actually trade blocks and serialize and deserialize transactions, there's no point in me trying to go to a public DevNet. But I am plugging away at it. At some point it would be nice to understand the current status of the actual EIP. It feels pretty out of date; for example, the precompile spec doesn't match up to what y'all are talking about right now. We can talk about that later offline, it's not important, and I don't want to take too much time with my questions about that. I've talked to Kev a little bit, but I could use some clarity on a couple of things.

Yeah, the precompile is probably the bit with the most potential changes, like we were just discussing. But aside from that, it should be pretty fixed.

Okay. All right, well then I'm going to look at it again; I thought there were a couple of places where I went astray.

And Andrew, feel free to reach out to me. I'm trying to keep on top of all these things, so it will help out.

No, that's cool, I appreciate it. I was still just trying to get all the code in place to even start testing it, which took a while. So maybe December is probably the earliest; with travel plans around Thanksgiving, I'm not going to have a lot of time to work on it and get ready for DevNet 3. All good?
Yeah, thanks for the update. Anyone else want to share any development updates? Okay, if not, moving on.

The next thing I want to chat about is this large block spam test. George posted an update on the agenda about the way they're thinking about doing this, and given that we're probably going to run it on a testnet in the next week or so, I'm curious to hear feedback from folks here: is there anything that we're missing or should be looking at, or do you think George's approach is broken in some way? If we can get that feedback before we start running the tests, that's great. Otherwise we'll probably run this on testnets a few times and share results here before we move to mainnet. But maybe, George, take a minute to explain how you're thinking about approaching this, and then we'll see if there's any feedback, thoughts, comments on it.

Sure. I'm walking, so it might be a bit noisy. TL;DR, my understanding is that we want to create sustained load for some time, and we're going to do it in chunks of some size, which we want to parameterize. For the default base case we're going to do 128 kilobytes, but possibly we want to try out bigger ones too, where that would be useful. And it's generally low cost to just make the tool generic over whatever transaction shape we want. So my idea is that we'll make a general load test for calldata transactions, and it will be able to submit them either directly or maybe via a builder, in 128-kilobyte chunks. I'm not set up to do any metrics gathering, so I assume that either somebody will be watching or that they can do post-hoc analysis. Curious for any thoughts, reactions.
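The chunking idea George describes can be sketched as a small payload generator: split a total amount of load into transactions of a parameterized chunk size (128 KiB by default). This only builds the payloads; signing and submission are out of scope, and all names are illustrative:

```go
package main

import "fmt"

// makePayloads splits totalBytes of spam load into chunkSize-byte
// calldata payloads, with a final partial chunk if needed. The chunk
// size is the parameter the test wants to vary (default 128 KiB).
func makePayloads(totalBytes, chunkSize int) [][]byte {
	var out [][]byte
	for off := 0; off < totalBytes; off += chunkSize {
		n := chunkSize
		if totalBytes-off < n {
			n = totalBytes - off
		}
		out = append(out, make([]byte, n)) // zero-filled placeholder calldata
	}
	return out
}

func main() {
	// e.g. 1 MiB of sustained load in 128 KiB chunks -> 8 transactions
	chunks := makePayloads(1<<20, 128*1024)
	fmt.Println(len(chunks), len(chunks[0]))
}
```

Making the tool generic over the chunk size (and eventually the transaction shape) is what keeps the same harness reusable for the bigger-chunk runs mentioned above.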
If there's no feedback, one thing that's probably worth emphasizing is that right now we have a Prysm branch which is specially configured to track a few more metrics than usual. So we'll obviously be looking at on-chain metrics, and we can run some other clients as well, but if any other client team thinks it's trivial for them to add some extra metrics and monitoring, it might be worth looking at what Prysm did; that would help sanity-check the data across more than one client. I don't have the branch handy, but I'll try to find it and post it in the chat here, and we can have it in the notes. Worst case, we'll just have the on-chain data and this specially configured Prysm. Oh, I found the branch, so I'll just post it here.

And I guess once we get the results from this first run, we can come back, discuss them here, and see if there's anything we want to tweak before going to mainnet. Anything else on the spam test?

If not, Terence had two PRs he wanted to discuss, but he's not here to do so. The first one was adding blob sidecar retrieval by root. It got approved this morning, so I don't know if there's anything more to discuss on it.

Well, something related that isn't quite this PR is the by-range request, the counterpart of this request: whether to keep it as separate block and blob requests or combine them into a single request, if anyone is interested in discussing that. I think for Lighthouse, whether or not the requests are together or separate, we're probably going to treat them as if they're a single request, even if they're specced out separately, just because it makes things like attributing faults to peers a lot simpler. So we would generally have a slight preference for just combining the requests, but it's also not a huge deal if that's a pain for other teams.
So I was just interested in what other teams are thinking about this.

Yeah, we're having this discussion on GitHub, and I actually came up with the same reasoning. The only thing we were internally discussing a couple of days ago, in favor of having a separate call, is for, let's say, archival nodes. Imagine a node that suddenly wants to start archiving blobs and starts searching for peers that provide blobs for deeper history, so it can backfill the archival blobs for the very deep past. For that use case, having a separate method definitely makes sense, because otherwise you would have to download the blocks a second time. But apart from these archival nodes, we think that the coupled version is simpler.

Yeah, it's an interesting point about archival nodes. Just to clarify, they are still decoupled in that sense: if you're requesting sidecars that are not recent, we do have a request/response RPC to get those in a decoupled manner. Is that the concern for archival nodes, or something else?

Yeah, I was thinking that having the coupled version does not necessarily assume that we will also have the dedicated sidecar method. But if we will have both of them, the coupled and the decoupled version, that definitely solves the issue. I just don't think it was intended in the first place to have them both, but I might be wrong.

Yeah, we only use the coupled form during gossip, so that nodes can easily get the block and its corresponding sidecar all in one message. It makes it easy, as Sean mentioned, to attribute any problems in the message to a particular peer, and it also avoids weird race conditions where you get a block but not the sidecar, or vice versa, and you're waiting for one or the other.
But there is a fallback to request a specific blob given its root, or a by-range request, similar to how it works for beacon blocks.

Okay, yeah, so also having the option of this dedicated sidecar method definitely works. Yeah.

Sorry, just so I understand: this PR that Sean shared is the remove-blob-sidecars-by-range one. Are you saying that we wouldn't remove blob sidecars by range, or that there's another method, not blob sidecars by range, that would serve a similar purpose for blob retrieval?

So Sean has just proposed blob sidecars by root. We could use that instead, but I'm missing the use case where that would be sufficient.

Oh, I mean, I was suggesting that rather than having a blob-sidecar-by-range request, we have a signed-block-and-sidecar-by-range request. Essentially, I feel like we're converging on coupling these two things everywhere, so in the one place where I don't think we have them coupled yet, which is the by-range requests, should we? And for us, whether or not we do, we're going to handle them as if they're coupled. So it's not a huge deal if the spec suggests they should be coupled, but it would be nice if all the client teams are going to handle them similarly. I was just curious what other client teams were planning with these by-range requests.

Yeah, so I am interested in how the Prysm implementation works right now. Do you fetch blocks and blobs separately, and then, as a by-range request for blocks or blobs completes, pick up processing at that point? Or are you just making both requests at once, where if either request fails, both fail, and once both complete, you start processing?
Yeah, I can't really speak to Prysm, but the way we had it working for the first couple of DevNets was to do the latter: request the block, request the blob sidecar, and if either of them fails, then it just short-circuits the other. Yeah, so this is I think what we would implement if the requests are separate, but it's more or less the same as just having a single request, just a bit quirkier. And, I mean, you do potentially have the advantage of parallel download, but you have to be sane about it. But yeah, that's all, and it's generally what I was curious about. Cause I think if all clients are gonna implement something similar to that, we should think about just having a coupled request there as well. Yeah. Well, at least for Prysm, they do try to avoid parallel downloads because it makes attribution and peer scoring a bit wonky. Like, for example, in the case where you make two requests, you get the block and the sidecar, and one of them is invalid: ideally, you don't want to keep communicating with the other peer, especially if we go with one-megabyte-sized blocks; we don't want to keep making that request. So I imagine a lot of clients would make a request to get the block and ensure that that's valid before proceeding to get the sidecar in the decoupled case. But if it were coupled, then we can adjust our peer scoring parameters to take advantage of that fact and make a single request. That would just be easier. But either way, I don't think it'd be that useful, at least not without crazy workarounds, to make two requests at the same time. That's what I was trying to explain there. Yeah. So we can keep talking about this offline, I guess. But it'd definitely be nice to have some clarity around that relatively soon. And if we're getting too close to the, I don't know, DevNet, and we're not sure, we can go with separate requests. But yeah.
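The sequential block-then-sidecar flow described here can be sketched roughly like this. Everything below is illustrative: `FakePeer` and its method names are invented stand-ins for a client's networking layer, not any real API. The point is that asking one peer for the block first, and only then for the sidecar, keeps failure attribution simple and short-circuits the second request on failure.

```python
class FakePeer:
    """Minimal stand-in for a network peer; all names are illustrative."""
    def __init__(self, blocks, sidecars):
        self.blocks, self.sidecars = blocks, sidecars
        self.score = 0

    def request_block(self, slot):
        return self.blocks.get(slot)

    def request_sidecar(self, slot):
        return self.sidecars.get(slot)

    def downscore(self):
        self.score -= 1

def fetch_block_then_sidecar(peer, slot):
    # Request the block first, and only then ask the same peer for the
    # sidecar, so a bad response at either step is attributed to one peer
    # and short-circuits the other request.
    block = peer.request_block(slot)
    if block is None:
        peer.downscore()
        return None
    sidecar = peer.request_sidecar(slot)
    if sidecar is None:
        peer.downscore()
        return None
    return block, sidecar
```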
Mofi, can we at least pick a solution for the DevNet, even if it's not well thought through? Yeah, I think the current solution, which is what the current spec says right now, will work for the DevNet. We still do need the by-root, but other than that, yeah, what works right now should work. Yeah, so maintaining the range requests separate and having the coupled by-root, this is the current solution, right? So yeah, also considering what I was discussing before, with regard to the potential blob archival nodes, it definitely makes sense to maintain them separate. And in Teku, we were also thinking about, in any case, having a kind of simulated coupled version of it. So you always talk to the same peer and you ask for blocks and blobs, and if they match, okay, but if something goes wrong, you can easily, yeah, downscore the peer you're talking to, yep. I just wanna make sure I get this for the notes. Can someone reiterate what the decision we just made is? I think leaving by-range as it is for the next DevNet, right? And then I'm open to discussion in the future. We're not gonna merge this, yeah, the 3087 PR that removes blob-sidecars-by-range. I believe so; to be honest, I linked that more for the discussion on the PR, I haven't looked at the PR itself. There isn't a PR, it's just a discussion. It's just an issue right now, but yeah. Okay, so we're not gonna do anything on that, okay. Is there anything else that we are gonna do or are not gonna do for the DevNet? All right, let's check the DevNet doc, but by-root is still something that we want for the DevNet, which is 3089, add blob sidecar retrieval by root. Yeah. And do we have a clear next step on that? Oh, I think it's approved at this point, so we just need to merge it. Yeah, that's pretty much done. Let's go there, by-range.
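The "simulated coupled" by-range idea mentioned for Teku could look something like the sketch below. The two request callables and the `(slot, payload)` response shape are assumptions made up for illustration; the point is that issuing both range requests to the same peer and then checking that the responses line up slot by slot makes any mismatch attributable to that single peer.

```python
def fetch_range_coupled(request_blocks, request_sidecars, start_slot, count):
    """Ask one peer for both blocks and blob sidecars over the same range,
    then verify the responses line up slot by slot. A mismatch means this
    one peer misbehaved and can be downscored directly."""
    blocks = request_blocks(start_slot, count)      # [(slot, block), ...]
    sidecars = request_sidecars(start_slot, count)  # [(slot, sidecar), ...]
    block_slots = [slot for slot, _ in blocks]
    sidecar_slots = [slot for slot, _ in sidecars]
    if block_slots != sidecar_slots:
        # Inconsistent responses from a single peer: reject the batch.
        return None
    return [(b, s) for (_, b), (_, s) in zip(blocks, sidecars)]
```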
And the other thing was Brian's PR, or issue, that we decided to flip: do we want signature verification or KZG commitment checks to deter invalid blob sidecars in gossip. I think we kicked the decision on this to next week. I think that's probably something we want to have for the DevNet, but hopefully by next week we'll have a decision on what we wanna do and then update the doc. Yeah. Oh, go ahead. Oh, I was just gonna say, looking back at my notes, it looks like we had a recommendation to proceed with just the KZG commitment rather than the signature, but maybe we haven't formally decided that. Right, we do KZG, and we just confirmed this on the call Thursday with the other CL teams, but yeah, in general I think we agreed to. Yeah. So end of next week, we should know for sure. End of this week, sorry, we should know for sure what we're doing. And do we have a PR open for that one? The spec? No, there is no PR, that's actually a good point. We just have the issue from Danny, but it might be worth opening a PR before Thursday's call with the KZG approach and then discussing, yeah. Does anyone here want to own that? I can make a PR. Sorry, just so we're clear: the PR would be to add the KZG verification as a gossip condition, right? Correct, yeah. Okay, cool. And then on the CL call, we can discuss that PR, and if for some reason, you know, people think we should do full signatures, then we can always change it, but at least we can come to the call with something more concrete to propose, yeah. Cool. Yeah, thanks. And I think that was everything for today. Just, if we have two minutes left, there were a couple of action items from last time. So we discussed sidecars at the beginning. George and Kev had the one around discussing the KZG interfaces and how to handle errors; that has happened. We have two non-blocking issues. For the crypto, there's 3093, where we're waiting for a reply. This doesn't block clients.
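The "KZG verification as a gossip condition" idea can be sketched as below. The `kzg_to_versioned_hash` helper follows the EIP-4844 definition (a version byte followed by the tail of the SHA-256 of the commitment); the gossip check itself is only an outline, and `verify_blob_kzg` is a placeholder for the real crypto-library call, which is an assumption of this sketch.

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    # Per EIP-4844: version byte 0x01 followed by sha256(commitment)[1:].
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def sidecar_gossip_valid(blobs, commitments, verify_blob_kzg) -> bool:
    """Outline of the proposed gossip condition: instead of requiring a
    proposer signature over the sidecar, check each blob against its KZG
    commitment before forwarding. `verify_blob_kzg` stands in for the
    actual library verification function."""
    if len(blobs) != len(commitments):
        return False
    return all(verify_blob_kzg(b, c) for b, c in zip(blobs, commitments))
```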
There's 3097, which is the interface for the precompile. We agreed on a bytes-array interface, but there's a bit of discussion around whether the versioned hash check should be in the crypto library; that's also non-blocking. Okay, thanks. And then Mofi merging the PR, yeah, we're rebasing it for Capella, we're getting close there. Mofi, open, oh yeah: open and merge the PR to bring in withdrawal fields on the engine API specs for EIP-4844, did that happen? Oh yeah, the execution APIs. I don't believe that's been merged yet. Is there a PR for this? Yeah, there is, let me link it in. Nice, thank you. And then Terence had three of them, but he's not here. And I think basically the two last ones, yeah, the two last ones are, sorry, so the two issues you pointed to today: one is adding the blob sidecar retrieval by root, so I believe this is the first action item he had. And then the second one was the ancestor blob availability check, and I think this was the second one, if that's correct. And yeah, that seems... Yeah, this is more associated with a spec change requiring us to be more specific about what a validator should do in terms of attesting to the head with regard to having all the blobs already downloaded. Yeah, this has been a discussion that was started on Discord by me and then became an issue here, but then blended with all these other things around the by-range methods. But I think we were converging on the option to avoid block import completely if the blobs haven't been downloaded and verified. Got it. Is there anything, we're just a bit over time, is there anything people feel we are missing, yeah. I just linked one, which I think is 3090, for the beacon block. I think that's in Alex Stokes's court at this point. Oh yeah, is Alex on the call? Yeah, he is. Yeah, this was just deprecating the beacon block method. Correct, okay.
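The ancestor blob availability check being converged on, "avoid block import completely if blobs aren't downloaded and verified", could be sketched as follows. The `blob_store` mapping from slot to a verified flag, and the retention-boundary parameter, are illustrative structures invented for this sketch, not a real client interface.

```python
def ancestors_available(block_slot, ancestor_slots, blob_store, min_slot):
    """Before importing (and attesting on top of) a block, check that
    verified blobs are on hand for the block and every ancestor back to
    the blob retention boundary. Returns False to signal that import
    should be held off entirely."""
    for slot in [block_slot] + ancestor_slots:
        if slot < min_slot:
            continue  # outside the blob retention window; nothing to check
        if not blob_store.get(slot, False):
            return False  # blobs missing or unverified: avoid block import
    return True
```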
Yeah, I was gonna make one edit, Danny wanted a clarification, but I was gonna do that right after this call, and then that should be merged. Anything else anyone feels we should have touched on but haven't? Yeah, so I'd like to come back to this test vectors question, which we briefly discussed in the chat. At least in my opinion, it looks like it would be really great to have these test vectors, something like what we have now for the consensus layer that the Ethereum Foundation is maintaining, and at the proper level, for the library interface. Because looking at the chats, I see that there are a lot of edge cases that people are trying to cover and discover. And there are multiple implementations, multiple wrappers, and a detailed test suite at the wrapper level would be really useful, especially if it covers those newly discovered edge cases. So I don't know if there are some resources available, somebody who could take care of this. I know he's working on that, but I don't know whether he has enough time to develop the full suite and follow all the details. I can add in the edge cases that we discussed. It's mainly basically going through all of the wrappers, parsing the JSON, and adding the tests into them. I'd have to reach out to maybe Alexey and the others for stuff like C#. Okay, so you feel comfortable trying to take care of that? Yeah, just making the test vectors is fine. Okay, that's cool, thanks. By the way, for the CL side, we will provide the basic test vectors in the spec tests in the next release. And I am keen to merge the rebased PR as soon as possible; we will talk to Danny to agree on it. And once it's done, then I have a test generator PR, which is based on Mofi's PR, so we will provide the test vectors later. Oh, and by the way, in the CL test vectors, we use the minimal trusted setup configuration. So the format is described in this PR.
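The "going through all of the wrappers and parsing the JSON" step could look roughly like this. The JSON schema here, a list of cases with hex-encoded `input` and `expected` fields, is an assumption made up for illustration; it is not the actual test vector format under discussion on the call.

```python
import json

def load_kzg_test_vectors(text):
    """Illustrative loader for shared wrapper-level KZG test vectors.
    Each case is decoded from hex into bytes so a wrapper's test suite
    can feed the input to its library binding and compare the result
    against the expected output."""
    return [
        (bytes.fromhex(case["input"]), bytes.fromhex(case["expected"]))
        for case in json.loads(text)
    ]
```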
So just in case, if you have time, CL clients can take a look to see if you agree with the format, and if there are any issues, we can change it before the next release. Yep. Okay, anything else before we wrap up? Okay, well, thanks everyone. See most of you on the CL call this Thursday. Otherwise, yeah, talk to you next week on this call. Bye. Bye. Thank you.