I think, yeah, we could probably get started. People are still rolling in, but we have a good group here. So I guess the goal of this meeting is basically to make sure we work through all of the spec implementation issues as they come up and that we're all on the same page about where things are at. We're gonna have them weekly for now and, as long as they're useful, we can keep them at that cadence. We might slow them down at some point if we feel like there's not stuff to discuss every week. I guess, yeah, the first thing I want to go over: there's a bunch of open PRs in the specs, and I think it's useful to just understand where they're at and whether there are any blockers on them, so the folks working on client implementations know what to expect and what to keep an eye out for as they're working on stuff. So, first one. This one's been open for a while and I think we basically just have to merge it, but this one is yours, Ansgar, the fee market one. Oh, and then Micah left a bunch of comments yesterday. Yeah, so basically I think the status was that there weren't really any remaining open issues. This is my first time really actively working on the specs, so I was a little bit hesitant to just push for it being merged, but I think it should basically be ready. But then yesterday Micah left a couple of comments, and I think the only one that really still has to be resolved is that people disagree around one constant, which is the minimum data gas price. I think there are basically three different positions. The original intent of introducing the constant, which came out of some conversations that Vitalik and I had, was to have a normally non-binding lower bound. The idea is that this would only ever be relevant in two cases: (a) in the very initial phase after the EIP goes live, when there's not yet demand, so just a one-time floor; or (b) if there was ever some network difficulty and for some reason, for a while, there couldn't be any blobs, so that the price doesn't fall all the way down to zero or one. Right now the normal 1559 base fee can basically go all the way down to 7 wei, and whenever the network recovers, it just takes a while to ramp back up to normal levels. If you have a floor that is normally irrelevant but kicks in whenever these conditions happen, and it's a bit higher, then your ramp back up is faster. That is somewhat relevant because the ramp-up period means you have a sustained period of roughly double the target network load, because you always hit the limit, not the target. And we're talking on the order of an additional 30 minutes or so: right now the ramp-up with the constant would be something like 15 to 20 minutes coming from the floor all the way to some reasonable price level, and without the floor, if you go all the way down to one, it would be something more like 45 to 50 minutes. Then there are some people who actually want to make this a really high floor that is actually binding, a kind of opinionated approach of trying to prevent spam. I personally am very strongly opposed to that, just because I think we shouldn't be opinionated about what constitutes permissible use and what doesn't. And then there are people like Micah who very strongly want this constant to be one, because they think anything other than one is basically the protocol imposing an opinion.
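For reference, the mechanism under discussion is the exponential data gas pricing from the EIP, where this floor constant is the `factor` of the `fake_exponential` helper. Below is a rough sketch of why a higher floor shortens the ramp-up; the `fake_exponential` body follows the EIP, but the other constants are illustrative assumptions, not final spec values.

```python
# Illustrative assumptions, not final spec values.
MAX_DATA_GAS_PER_BLOCK = 2**19           # assumption: max 4 blobs of 2**17 gas each
TARGET_DATA_GAS_PER_BLOCK = 2**18        # assumption: target 2 blobs
DATA_GASPRICE_UPDATE_FRACTION = 2225652  # assumption
SECONDS_PER_SLOT = 12

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e**(numerator / denominator), per the EIP.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def ramp_up_minutes(min_data_gasprice: int, target_price: int) -> float:
    # Minutes of consecutive full blocks until the data gas price climbs from
    # the floor back to target_price. Each full block adds (max - target) to
    # excess_data_gas, so the price grows geometrically from the floor.
    excess, slots = 0, 0
    while fake_exponential(min_data_gasprice, excess,
                           DATA_GASPRICE_UPDATE_FRACTION) < target_price:
        excess += MAX_DATA_GAS_PER_BLOCK - TARGET_DATA_GAS_PER_BLOCK
        slots += 1
    return slots * SECONDS_PER_SLOT / 60

print(ramp_up_minutes(1, 10**9))      # floor of 1 wei: the long ramp
print(ramp_up_minutes(10**7, 10**9))  # higher floor: skips the early climb
```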
So anyway, this is kind of the one niche last conflict to be resolved; I think once that is resolved, the PR should be ready to merge. I guess, does anyone on the call have a strong opinion about where this should land? So my feeling is basically, I mean, generally I would agree with the first point: I think it's very unlikely, in its current form, to be a binding constant, even very soon after the whole thing is launched, just because I think there are just stupid applications that don't even have anything to do with blockchains that would use this. Like, this is cheaper than my roaming data fees. So let's be honest, it's nothing. Okay, so I think it's unlikely to really be a problem with spam in practice. The reason why it makes a little bit of sense to be opinionated in this case, and where I would contradict Ansgar, is that in a way we are actually subsidizing something here, right? We are introducing this new thing, and we don't have a way to scale it just yet. So I think initially it's very likely that essentially the rest of the Ethereum network is subsidizing this new thing that we're introducing in order to get rollups off the ground. That's why I think it's actually not crazy to just add a lower bound to it, and the kinds of lower bounds that we're discussing are basically still very, very cheap. Yeah, another way to see this, by the way, is that rollups will in practice actually have much higher costs to use this. If you look at the point evaluation precompile, it's 50,000 gas at the moment. So in practice that will be a lower bound, at least for ZK rollups, that will be much higher than this. And so you could also argue that it's kind of unfair that applications that just wanna file-share on the Ethereum blockchain, or, I don't know, store the JPEGs for NFTs on the Ethereum blockchain, will have this super cheap data, because they don't actually need to connect it to the actual blockchain in any way. I don't have a super strong opinion on this. I would prefer raising the current lower limit by about 100x, which would still make it very low in my opinion. And it would, yeah, basically remove some loud people on Twitter, which I think shouldn't be our major concern, but I think it's a relatively trivial change and I don't see any downsides. Ansgar, let's see, I see your hand up again. All right, two things. Yeah, I don't love that approach, and personally I would prefer, and I think there's broad support for, lowering the target and maximum amount of blobs, or in the case of the updated fee market, basically the target and maximum amount of data gas. And I think that makes a lot of sense: starting with even something as low as one/two or two/four as target/max blobs for the first rollout of the EIP. That already very much limits any impact of potential spam, but in an unopinionated way. So I would much prefer that. But as a more kind of how-to-move-forward note: what we could do is, I could update the PR to set this constant to one, merge the PR, and then we could have a separate PR changing that and have the discussion over there, so that we remove it as a blocker for the fee market PR. Would that make sense? Yeah, and by one you mean one wei, right? That's right, yeah.
But like, just again, as basically something where, okay, if we now want to actually set it to something, that would be an opinionated choice to be made in a different location. I think that makes sense. And the thing with one: because with 1559 we have this weird integer math, which means if it gets below seven, it can't go back up. Do we have that problem here? There's no problem with that here. Okay. No, we don't, because we have the accumulating excess data gas. Oh, right. So it can keep accumulating, and at some point it will start impacting the price. Okay. Yeah, I think I would move to do that, just because this PR has been open for over a month; if we can just take out all the contentious bits and move them into another PR. There's like 300 comments by Micah that are just descriptions, so it probably makes sense to just resolve those, but for the one where the actual constant gets decided, we can just have a different PR and discuss there. Sounds good to me. Let me make a note of that. Anything else on the fee market PR? Just a quick question on that. It sounds like nothing of substance has changed since the DevNet was released, or is there something where changes were made since? This is Roberto, by the way. I left a comment saying, you know, there's been so much back and forth that I've lost track of whether there were any substantive changes since we last implemented it. No, I think I pinged you, right, when the first DevNet with it came out. There were some last minute changes before then, but that was before DevCon. Since then there have been no substantive changes, and none are planned. Okay, great, perfect. And it sounds like we're up to date. Thank you. Cool. All right, sweet. So yeah, let's split this one out, merge the actual fee market bit, and we can deal with the minimum fee separately. Okay, next one: Vitalik had this comment about adding a modulus opcode, or modifying the precompile such that contracts can read the modulus, or that it takes the modulus as an argument. He's not here, but I'm curious if anyone has strong opinions on that, and whether it's something we should be adding to the spec now, because if so, we probably want to know sooner rather than later. This is probably a question that's best for the L2 developers; I don't know if we have many of them on the call today. Right. Yeah, as I understand it, this is more of a user experience improvement and not something that's really necessary, at least for L2s in particular. We can just keep track of what the current modulus is across upgrades of 4844 in the future; it's not necessary, but it is a nice-to-have. That's my take on it, I guess. Would it introduce an extra trust assumption, where basically now it would have to be part of L2 governance to update the modulus they use, because if not, they could falsify proofs? Yeah. Well, we already do have specifications that our users rely on, particularly to make challenges. So it would just be part of that. Whenever L1 makes changes to this, we would make the corresponding change in our specifications, and then users can... Yeah, so in a sense, yeah, it would be part of our governance process. I don't think that's a bad thing, or a blocker in any case. And is this basically used in the same way by optimistic and ZK rollups, or is there a difference in needs or requirements on this front?
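To spell out the integer math point: with a 1559-style multiplicative update, the per-block delta truncates to zero once the fee is below eight, whereas the blob fee market derives the price from an accumulator that full blocks always move. A minimal sketch of the truncation being referenced; it ignores any clamps the production 1559 formula applies, since the truncation is the point here:

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def naive_update(fee: int, gas_used: int, gas_target: int) -> int:
    # Multiplicative 1559-style update without any clamp: the per-block delta
    # is fee * (used - target) / target / 8, which truncates to zero once
    # fee < 8 -- the "below seven, it can't go back up" problem.
    delta = fee * (gas_used - gas_target) // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    return fee + delta

print(naive_update(7, gas_used=30_000_000, gas_target=15_000_000))  # still 7

def update_excess(excess: int, data_gas_used: int, target: int) -> int:
    # The blob fee market instead tracks excess data gas: full blocks always
    # grow the accumulator even while the derived price sits at the floor,
    # so there is no dead zone at the bottom.
    return max(excess + data_gas_used - target, 0)
```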
Wouldn't you need this, though, if you wanted to eventually freeze updating of your contracts? I mean, Mofi, are you always gonna be in a state where you'll be able to make these changes whenever there's an update to, say, the version hash? Ideally we should, but it'll be much easier if L1 just handled that. Yeah, I mean, that's the nice thing about being able to access the modulus from L1. If we can do that, then we could at some point have rollups that don't need any upgradeability functionality but can adapt to a new version hash, which would be really cool. Is it possible to add this in the next hard fork? So say we implement 4844 as is, can we, yeah, can we? But there would also be a very simple change to 4844, which is to simply make the point evaluation precompile return the modulus in addition to the result. Or take it as an input parameter and fail on a mismatch. Or that, yes. Well then, yeah, I guess you could do that. But then you would also need, where does the contract that calls it get that value? So if it returns it on a successful evaluation, that seems like a small change now. And I assume that's not something we could do in a separate hard fork; basically, yes, it would be good to do it now, I guess. Yeah, I don't know, is this literally a one-line change? I don't have a feeling for whether this is a value that's already being used as part of evaluating the precompile, or if there's more work to expose it as the output. It is already being used in the precompile. Yeah. So it's just a question of exposing that as a return value when you make that call, is that right? Yeah. Yeah. And just to briefly come back to the question of whether to take it in or return it: conceptually, it seems cleaner to me to have it as an input value, because the point evaluation precompile already takes in an external proof that has to be provided from outside. So it would, in a way, just extend the proof format to also include the modulus. Correct. But it would mean that the proof size goes up from 48 to 80 bytes. So is that, I mean, is that a good idea? That seems like, why? It just seems like some useless data that you have to pass in now every time. Isn't it the same if it is returned? Well, if it returns it, I mean, it's just some value in your memory, right? If you don't do anything with it, and I assume that 90% of contracts for now will not care, this is really for the hardcore "I want to make non-upgradeable contracts" crowd, which will probably only come in one or two years at the earliest. Only they would actually read this value from memory, and the others can simply ignore it, right? You don't have to do anything just because some value is in memory. So it seems like this is less invasive as a change. Of course, there's an argument to be made that we kind of want to encourage trustless architectures, so making it more explicit, right? Yeah, maybe that's true. But adding extra costs also seems like a stupid thing. I mean, what you're suggesting adds an extra cost for everyone. No, but that's not right, right? Because contracts that just trust governance could hard code the value and then just upgrade the contract; they don't need to be future-proof. It's only for the others that it adds that cost.
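The two interface options being weighed, sketched in spec-style pseudocode; the byte layouts and function names here are assumptions for illustration, not a drafted spec change:

```python
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def point_evaluation_return_modulus(input: bytes) -> bytes:
    # Option A (sketch): keep the 192-byte input -- versioned_hash(32), z(32),
    # y(32), commitment(48), proof(48) -- and return the modulus on success,
    # so callers can read it from the return data or simply ignore it.
    assert len(input) == 192
    ...  # existing verification logic, unchanged
    return BLS_MODULUS.to_bytes(32, "big")

def point_evaluation_take_modulus(input: bytes) -> None:
    # Option B (sketch): extend the externally supplied proof from 48 to 80
    # bytes by appending the expected modulus, and fail on a mismatch.
    assert len(input) == 224
    claimed_modulus = int.from_bytes(input[192:224], "big")
    assert claimed_modulus == BLS_MODULUS
    ...  # existing verification logic, unchanged
```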
But if it returns it, then they can simply use the return value and feed that to their zero-knowledge proof verification. No, but you still need someone from the outside to basically say, we expect the modulus to be this. No. Can you check whether? Well, we do currently have assertions that each point provided to the precompile fits in the modulus, so either way. Yes, that's a different thing, that's independent. No, I'm contradicting what Ansgar says: no, you do not need some external oracle telling you this modulus. Yes, you do need it for the proof, but the contract would only get a proof, I mean a proof for the precompile, in order to prove the rollup state update. And they would feed that modulus into the witness, sorry, not the witness, the public inputs for the rollup update. And so if you get it from the contract, then no, you never need to pass it in from the outside. The modulus does not need to go into the calldata. Okay, yeah, then I... So yes, what I'm suggesting is a real efficiency improvement. I mean, maybe it's tiny, I don't know what the other costs are, they are very likely to be much larger, but anyway. So I guess, say we went with that, is it worth it for somebody to draft the PR against the EIP, showing basically what it would look like in the EIP and how L2s would use it? Then we can discuss the PR async and make a decision in the next week or two on whether we want to merge it; it seems like this is clearly at least worth considering. And I think if we had a specific PR against the EIP, we could share it not just with Optimism but with the other L2 teams and get feedback on it before we merge. Yes, let's do that. Yeah. Dankrad, Ansgar, does either of you have the bandwidth to do that? Ansgar has a thumbs up. Nice. That was just for the idea, but... oh, I can't, well, I'm not so sure I'm tuned in enough to give a lot of motivation. So I can do the actual spec change, but on the motivation side, I'm not super tuned in. Dankrad, can you help with that? Yes, sure. Okay. Okay, sweet. So Dankrad and Ansgar, and then, yeah, we can just put it in front of the different L2 teams, get some feedback on it, and if it's a small change, then we can include it in the next couple of weeks. Sweet. Okay, next one: Terence had an update on the sync specs. So basically the idea, which I believe we discussed at DevCon, is that we couple blobs and blocks for gossip and, quote unquote, recent sync, but we decouple them for historical sync. Yeah, I see there are some conversations on the PR. I think at DevCon everyone was pretty much on the same page here. Any other thoughts or comments on this? Well, the one thing I was still curious about is whether we're still planning to sign the blob sidecar. I think that was a little up in the air. If we're gonna gossip them together, then for RPC we don't necessarily need to do the signature verification, and if the blocks have references to the blobs inside them, then the signature of the block would in theory be enough, maybe. Proto? So I did discuss this change with Danny and others. Indeed, we can do it either way. We don't need the signature; it's mostly a performance thing, where verifying the signature might be cheaper than verifying the commitments.
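The benchmark being asked for is just a per-call cost comparison of the two validation paths. A minimal harness sketch; the two commented-out calls are placeholders to be wired to whatever BLS and KZG bindings a client actually uses, not a real library surface:

```python
import time

def bench(label: str, fn, iters: int = 100) -> None:
    # Tiny timing harness: average wall-clock milliseconds per call.
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    per_call_ms = (time.perf_counter() - start) * 1000 / iters
    print(f"{label}: {per_call_ms:.2f} ms/call")

# Placeholders -- substitute the client's own cryptography bindings here:
# bench("one BLS signature over the sidecar", lambda: bls_verify(pubkey, msg, sig))
# bench("KZG commitments for all blobs", lambda: kzg_verify(blobs, commitments))
```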
And so if we don't want to allow spammers to give us lots of different data, maybe it's better to have a signature to check first, before then verifying the commitments. But I want to hear some numbers from people that have benchmarked this type of thing. Otherwise I'd rather just simplify the protocol and remove the signature. And how should we benchmark this, exactly? I think someone here working on the libraries might already have numbers on, like, just a cost comparison of verifying the commitments that come with a blob sidecar versus just verifying a single BLS signature. I think it's strongly in favor of the BLS signature there. I'm just not sure if it's significant enough to be a kind of DoS factor, or if you want it one way or the other. Good. And I guess, yeah, we want to keep this PR open until we have that. Okay, and yeah, I see you basically have a comment, Fred, about that in the PR. And Terence was saying he doesn't think we need the signature for the sidecars, right? Right, we can verify the sidecar matches the beacon block itself just by verifying the commitments. Yeah. The commitment verification cost is the main concern. On the execution client side, I think we got comfortable with doing the commitment verification; it was something like three to four milliseconds per commitment verification. Is there any reason why we'd expect it to be different on the consensus side? Could you repeat the question? Oh, I was just saying, on the execution layer, we got comfortable from a DoS perspective doing the KZG commitment verification. I'm just wondering whether there's anything different about this on the consensus side that would make us get to a different answer in terms of instituting the BLS signature as a workaround. Basically, for execution, we went with the approach of just announcing transaction hashes. So the client is free to pull them whenever, and if it becomes a DoS issue, they can start disconnecting peers. In consensus, we don't have this luxury, because we couple the sidecar with the beacon block. And if we think the beacon block is valid and the proposer is valid and whoever's sending them is valid, then in the worst case we would always attempt to verify the sidecar. So we don't get to choose, or at least we're not as flexible as execution in choosing whether or not to verify sidecars. I mean, it's still attributable to the peer if the sidecar that comes with the beacon block cannot be matched against the beacon block. So it's probably fine to remove the signature and just rely on the commitment check. Yeah. Okay. Can someone comment on the PR after this call so Terence knows of this discussion and can make the changes on it? I think the PR already removed it, so we just continue with the PR as is. Okay. Anything else we wanted to change or discuss on the PR specifically? Okay. Yeah. Well, I did have a comment on the PR about how exactly we want to gossip the coupled blocks and sidecars. Like, what are we doing with the old topic? Is it gonna be deprecated from now on, or is it still gonna be used? It wasn't quite clear to me how we go forward from this. I left a comment on the PR, so we can discuss that there as well. I don't think we should keep the old topic around with only the blocks, or only the beacon blocks. We should just have one topic.
The whole reason to couple them was for consistency; by creating more topics we might recreate the edge case that we wanted to get rid of. Yeah, that makes sense to me. Well, I had another question. I'm not sure if there's a comment about this on the PR, but since we're gonna couple blocks and blobs in gossip, are we thinking of more tightly coupling the blocks and blobs in the execution engine and beacon node APIs? Because obviously you need both to broadcast anything. With the engine API, there's an expectation that the engine always has the blobs that match the block that is being produced. So even if there's an inconsistency, it should be trivial to get the blobs, whereas with gossip you might have an entirely different mesh of peers per gossip topic, so you don't have this guarantee that the data is there. We could couple them, but we don't really have to, and for now I think it's nice to just stay compatible with the existing API instead of introducing a new version of the methods. Yeah. Anything else on this PR? Okay. And then Terence had another one, basically proposing 18 days to keep the blobs, based on some conversations we had at DevCon. It seemed like two weeks was the upper bound everyone felt comfortable with, and I suspect 18 days maps, yeah, it maps to a neat number of epochs. Does anyone have a strong opinion about this or disagree? At the very least, does anyone think it should be longer? I know some people were arguing for even shorter, but I haven't heard anyone argue that it should be longer. So if not, I think we can probably just merge this change, and if we want to make it even shorter, we can do that in a future change. Okay. I will take silence as a yes, so I'll follow up with Terence. And then the last, oh, sorry, yeah. I was just generally wondering, why 18 days, I guess? Is it supposed to just encapsulate the longest fraud-proof periods we might imagine, or is it supposed to be a precursor to how long you'd need to retain data in full sharding, for custody or something like that, or is it supposed to be longer than the weak subjectivity period? Right. As I understand it, it's the longest fraud proof, plus say you were not online, had to sync a new node from scratch, and wanted to participate or retrieve some data; two weeks felt like enough for that. And then the other thing was, say there was a weird consensus issue that happened on mainnet, and for some reason, to resolve this consensus issue, we wanted to have blobs still live on the peer-to-peer network. Two weeks is the period within which we generally think we can solve pretty much any issue on Ethereum, so it gives us some room there. Yeah. All right. Cool, thanks. Yeah, but it's a very soft metric; there's not a hard requirement. Yeah. Okay, but yeah, I think we can all agree on 18 days as an upper bound; we can change the spec to that and go from there. And on the storage requirement, like Prysm said, at a one meg target this would be 137 gigs. It's extremely likely that we'll have a target well below that, so less than 100 gigs, I would say. Sweet. Okay, so those are all the actual spec changes for 4844, oh, sorry. Just going back up one, on the coupling of beacon blocks and blobs, is there an action item there? I just wanna follow up with Terence, just to give him a nudge that we discussed and decided on it.
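The arithmetic behind both numbers, assuming the window is chosen as 4096 epochs; that value is an assumption here, but it happens to match both the "18 days" and the "137 gigs" figures:

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

retention_epochs = 4096                      # assumed "neat number of epochs"
slots = retention_epochs * SLOTS_PER_EPOCH   # 131,072 slots
print(slots * SECONDS_PER_SLOT / 86400)      # ~18.2 days

# Storage at a one-MiB-per-slot data target, as in the Prysm estimate:
print(slots * 2**20 / 1e9)                   # ~137.4 GB
```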
Yeah, that was what I was planning to do. Okay, yeah, I'll let him know, and then I'll send him this recording as well, so he can have the context of the conversation. I think he's probably the best person to just move those PRs forward. Yeah. Sweet, okay, so the next one. Basically this one's a bit tricky: the idea of how do we rebase 4844 on Capella. Tim, did we miss the, do we wanna talk about the cryptography? Oh, sorry, yes, actually yes, I did miss that one, you're right. Okay, yeah, this is George's PR about the cryptography API. Yeah, sorry. So I saw, yeah, there were some reviews on that one. I think, Roberto, you added it to the agenda and you had some questions on the PR. So yeah, do you wanna take a minute to walk us through where things are at? Yeah, I don't know if I had any particular opinions on it. I just wanted to make sure everyone was aware of it and that it was non-contentious. It looks fairly straightforward to me. Dankrad, I see you were part of the reviews a few days ago. Sure, sorry, what's the question? I guess, anything outstanding on this, the PR by George about the cryptography API update and the Fiat-Shamir logic? So I believe it, well, okay, so it is ready. There's one very small question on whether we need domain separators, and Dimitri will still look into that, but the change for that will be trivial, and actually there are already placeholder variables for it, so it would simply be a matter of assigning values to these placeholders. And even if libraries don't implement them as they are, the change is still trivial. So I think we should merge that PR, and then maybe, yeah, Dimitri will still tell us whether he thinks we should add those. Okay, sweet. And I guess, all right, Kev was also having a look, so any opinion, Kev? I don't think Kev is here, right? Oh, you are, sorry. Yeah, I agree with Dankrad, it's just the domain separators that are the main open question. And I guess, do we expect to have an update on that in the next few days, or is it something that's gonna need more time to determine? I don't know about the next few days; I'd have to ask Dankrad about Dimitri's availability. Yeah. But yeah, it's not a blocker to the actual PR, and it's only a blocker if we want fixed test vectors. Right, and the reason I'm asking is just, if there are client teams that are looking into implementing it, should they basically look at this PR and assume it to be part of the spec effectively? And it seems like we're saying yes. Yeah, exactly, but there's C-KZG and a few libraries around that just implement it, and client teams just need to look at the public-facing API, which won't change. Okay, okay, perfect. So, from the client teams' perspective, it's abstracted by the library, and the work to implement this would be in C-KZG, basically. Right. Oh, sweet. Anything else on this PR? Is there a person who's gonna own merging it? Yeah, but by the way, as of now, I don't think all that logic is in go-kzg. I think a lot of it is in the clients. We're moving pieces of it over little by little, but it's not quite there yet. Yeah, I forked your go-ethereum branch and I was modifying the API; I just haven't pushed the PR yet. I pushed one to Prysm's branch, but not to yours yet. Proto, sorry, Roberto, do you wanna go, and then Proto?
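For context, a domain separator here just means prefixing a fixed protocol string to everything the Fiat-Shamir challenge hashes over, so transcripts from different protocols can't be replayed against each other. A minimal sketch; the domain string and function shape are made up for illustration, with the actual values left to the placeholders in the spec PR:

```python
from hashlib import sha256

BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
FIAT_SHAMIR_DOMAIN = b"EXAMPLE_PROTOCOL_V1"  # hypothetical; the PR leaves a placeholder

def compute_challenge(blob: bytes, commitment: bytes) -> int:
    # Derive the evaluation challenge from the transcript; the fixed domain
    # prefix is the only part the open question is about.
    digest = sha256(FIAT_SHAMIR_DOMAIN + blob + commitment).digest()
    return int.from_bytes(digest, "big") % BLS_MODULUS
```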
Okay, I mean, I'll have to take a look at that PR. I haven't seen it yet, but that's great. Proto, yeah. I do think go-kzg has the necessary methods, and they're nicely grouped together here. We merged the PR that upstreams the evaluate-polynomial-in-evaluation-form function, so I think we're complete now. Maybe I'm missing another method; I'll take a look. Okay, if we're missing anything, I'll let you know. There was an actual issue that I opened up yesterday: I think go-kzg needs to basically remove the check that field elements are canonical. I think the issue is 3057, so I think that's the only thing. Yeah. Is there any particular reason why we need to remove that validation? Yeah, so if I remember correctly, Murphy said that it's not up to the cryptography to decide whether a blob is canonical; it's up to the person that's encoding the data. Specifically, if the data is out of range, if the bytes have a value that doesn't fit in a field element, shouldn't it just be invalid? Yeah, that's the point I was trying to make. The data is allowed to be out of range; it's the output of the encoding that should fit in the field element. And Kev's point was that there are cases where you could have different data mapped to the same encoding. Well, I guess it all depends on the encoding. In that scenario, I would argue that the encoding is just useless, not correct. But the point was, the cryptography shouldn't have to verify that the data was encoded correctly, because that's all up to the user. We can still have the field element checks to make sure that it fits within the modulus, but the data itself is not something we can check, because it's already encoded, if that makes sense. So I would say that if the user wants specific data to be valid with respect to the crypto functions, then they can just apply the modulus, or cut off the bits that are out of bounds. There are many ways to map data to some point in a specific integer range. I'm not sure if we should create the expectation that people can just encode data however they like outside of this range and then still compute commitments over it. Did that make sense, Kev? Just to make sure we're on the same page. Yeah, I understood it as Proto agreeing with what I was saying, but you agreed as well, so I'm a bit unsure; we can take it offline. I can post the link to the issue here as well. Okay, anything else on the PR? Okay, I think that was it for all the proper spec changes. We have this draft PR by Mofi about rebasing 4844 on Capella. And I guess the background here is that the current DevNets kind of implement 4844 over Bellatrix, and I think the rough consensus we had reached was that it makes more sense to rebase this on top of Capella instead. But then that means it might be tricky to do some of the testing while Capella is not fully implemented in all of the clients. So yeah, Mofi, do you wanna take a minute and walk us through what your PR proposes? Yeah, it really doesn't propose much anymore. The original draft basically rebased 4844 on Capella but introduced a feature flag to disable Capella-specific state transitions. Based on feedback in the PR, I think we shouldn't try to enshrine such flags in the spec, so the latest revision basically removes that.
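The check under discussion, and the caller-side alternative Proto describes, sketched with assumed big-endian encoding; the endianness and function names are illustrative, not quoted from the library:

```python
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def bytes_to_bls_field(b: bytes) -> int:
    # The canonicality check in question: reject 32-byte values >= modulus.
    value = int.from_bytes(b, "big")
    assert value < BLS_MODULUS, "non-canonical field element"
    return value

def encode_chunk(raw: bytes) -> bytes:
    # The encoder-side fix: if arbitrary data must be valid input, reduce it
    # (or mask the out-of-bounds bits) before it reaches the cryptography.
    return (int.from_bytes(raw, "big") % BLS_MODULUS).to_bytes(32, "big")
```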
It's simply a rebase on Capella, but there is a section at the bottom of the spec, for testing, that outlines the necessary functions we wanna stub out for EIP-4844-specific testing. That way, we retain the Capella containers and concepts, but no withdrawals or withdrawal interaction actually happens when testing EIP-4844. Got it. So that basically means that Capella is a sort of no-op fork that happens, and the clients run through it, but they don't have to have withdrawals implemented in order to test the 4844 changes. Is that right? Yeah. Okay. I'm still not sure if, I guess it depends on client teams' implementations and how their code bases are structured. I'm not 100% sure this would be feasible, depending on how client teams implement those functions that I've outlined to be stubbed. Right. Also, this doesn't solve the issue of, yeah, I don't know. I guess we'll find out during testing. Yeah. So I guess, yeah, there's a couple of client team contributors on the call here. I think if you all can review this PR, that would be great. And I think we should definitely bring it up on the CL call next week as well, so that it gets the attention of all the different client teams if it hasn't in the meantime. Yeah. I'm from Lighthouse. I would generally say that we really support this, because having the consensus objects in the EIP-4844 fork look like what they're actually gonna look like will, on net, reduce a lot of work for us. Otherwise, if we have a 4844-without-withdrawals fork, then a plain Capella fork without 4844, and then this new set of consensus types that might be the most accurate in the future, that sort of equates to us having to support three different forks. So if we know withdrawals are gonna be included, then this change, yeah, reduces work for us by essentially eliminating a fork. Yeah. Okay. I think for Teku we're not opinionated. We do all of our development on main behind feature flags, so we don't have fork issues; either way is fine for us. Sweet. And then, Dan, you said you've managed to implement it this way on Lodestar, and Terence is aware of this because he's commented on the PR. So maybe, what's the other client? Yeah, maybe getting Nimbus to have a look at this would be good. And then, yeah, it seems like if Nimbus doesn't have a strong objection to this, we could probably move forward. Sweet. Anything else on this? And I guess, yeah, the assumption here is that any interop testing and whatnot we do moving forward would likely be using this format of a stubbed Capella, which is different from what we've used for the DevNets historically. Okay. Okay, next up, yes. One thing I just wanted to check in on: for the KZG libraries, obviously we have C-KZG, and there are bindings being developed in Go. Is anyone aware of a client team that for whatever reason can't use any of the libraries that are being developed, or that needs specific bindings or something that does not exist for them yet? I don't think Rust bindings exist yet, but I know there's a couple of people interested in working on them. Okay. So, Rust bindings for the KZG library. Likewise, I think, oh, sorry, I was saying, I think the same situation for Nim. Right. So Nim and Rust both need bindings for the KZG library. And there's a question for Nethermind. Oh, okay. So there's progress as well on .NET, but it's not there yet.
Okay. So we have .NET in progress, Rust and Nim missing, but every other client team is fine. Yeah. On the .NET side, we just need to update according to the new PR, which includes the simplification of the API, and we will integrate with that. We do not use the old API version; we want to use the new one. Sweet. Anything else on the KZG libraries? And I guess, yeah, we can figure out offline how to best get the Rust and Nim bindings done, unless someone wants to volunteer for them here. Oh, and it seems there is a Rust library in progress. Yeah. Okay. I guess the next thing I want to cover: there's a bunch of different client teams here, and I'm curious to hear generally where teams are at with their implementations and whether they have any blockers or things that they think everyone else should be aware of. Yeah. I guess I'll go in the order I see on the video. Dan, I see you've started on the Lodestar PRs. That's right. Yeah, it's all going fine in Lodestar. There's a bit of a question about how we're going to use C-KZG; we need to generate TypeScript bindings, but I'll figure that out when I get there. I'm basically up to the networking. I have all the types and params and config in there; hopefully I can get the networking and then the blob verification done this week. Okay, just going through the list. Ben, any updates from the Teku side? Yeah, we've barely started on this yet. In the next couple of weeks we'll get off the ground, but yep, it shouldn't be a huge lift, I think. Mofi? Not much work since the DevNet, which is still based on Bellatrix structs. Actually, Terence has a branch on EIP-4844, and I think we'd want to start moving development to his branch, because it contains the latest Capella structures and is basically more in sync with the present upstream. Nice. Sean? Yeah, so since DevCon, we've mostly been focusing on essentially Mofi's PR to rebase 4844 on Capella. And then Pawan's also been working on unifying the gossip topics. I think we're pretty far along, so we're hoping to join the next testnet, with the Bellatrix structs as well if that's better for us. Cool. Jerry? No, we didn't start working on it yet. I'm assuming after today's planning we'll start. Sounds good. So just, oh, we didn't hear you. You came off mute, but we didn't hear anything. Sorry, sorry. So yeah, I was mostly working on guiding folks to work on the KZG library. So I would say we probably will not start much on the client side until we have a bit more work done on the KZG library, so I think we'll join a bit later. Got it. Alexei? Yeah, we have work in progress. It's quite important for us to have some milestone, I mean, what should be in DevNet 3, and we will align to that. The idea is to join this network and provide some functionality, but I'm not aware of what will be included. What do you guys want to see there? Yeah, I was gonna sort of finish up with the third DevNet. I feel like next week we should probably spend most of the call on that. This week there's a bunch of open issues that we need to clean up and get merged into the specs, and then on next week's call we should be able to say, hey, this is basically the scope for the next DevNet. Does that make sense? I suspect every team has at least one week of work to get at least the basic full implementation and rebase on Capella.
Oh, okay, this rebase is a bit concerning for me, just because it's new to me; we have withdrawals implemented as part of Shanghai, as far as I remember, and if we rebase, it may cause some issues for us. I need to check. Yeah, so I think teams should definitely look at rebasing all the 4844 stuff on top of Shanghai or Capella, and then, you know, all the PRs we discussed today. But then I think next week we'll be in a spot where we can more crisply define the set of things we want to hit, if that makes sense. Great, yeah. Sweet. Roberto? Yeah, I've just started looking into Erigon. I'm gonna be working on that through the next week. I haven't made a lot of progress yet, though. Nice. And I think, Marius, you're the other one. Sorry, I didn't hear you. Oh, sorry, I think Marius is the only other person working on a client implementation now. Oh, are you still here? Marius, connecting to audio. Yes. Hello. We're working on a bunch of other stuff at the moment, so we don't really have a stable withdrawal branch that we could rebase on top of. Today I started looking into the Go bindings for C-KZG; that's going okay, there's a bunch of issues with it right now. But yeah, regarding withdrawals, as I said, we don't have the normal withdrawal functionality in the code yet. We have a branch, but I'm not sure how far along that one is. Got it. Anyone else working on a client implementation that I missed? Okay, we have about a minute to go, but I wanted to make sure we also covered this. One of the big things we're working on for testing is this idea of sending large blocks full of call data on the network and seeing how the network handles them, as a way to gauge whether blobs are viable from a peer-to-peer perspective. Dan, do you want to take a minute or two and talk through where things are at there and what the next steps are? Yeah. Sorry, for the big block experiment? Yes. Yeah, so we're just trying to figure out the logistics of how we would actually fill up these blocks with two megabytes of data. I think we've found there are some issues with mempool propagation if you try to submit a really large transaction. I'm thinking maybe try MEV-Boost, but the issue is that that's extra integration work, and not all validators run MEV-Boost. So I think the latest suggestion, from Tim, was to try to submit to validators directly. I don't know exactly how we would do that, but yeah, we're just literally figuring out how to make sure our transactions are the ones that get picked, and trying to get as close to the full two megabyte block limit as possible, consistently, several blocks in a row. Right. And then, yeah, Marius, this might be a good question for you: does geth limit how big the transactions gossiped in the transaction pool can be? Yes. So I don't know if we actually... Yeah, Matt confirmed this morning, I think it's 128K. Yeah, we can patch that. Is there a way? No, so if we want to do this on mainnet, so we want to submit big blocks on mainnet, yeah. You want to submit big blocks on mainnet? With call data, yeah. Yeah, no. We can submit big blocks on our local network, but we cannot do this on mainnet. This safety feature is there for a reason, and removing it would be... Yeah, I'm not saying we should remove it from geth, but I guess, yeah, and we can take this offline, but basically, Starkware did it a few years ago.
How did they do it, would be the question I have. Well, you need to control the block producers. Right. Okay. But, okay, so that's the way: basically, send them through either Flashbots or through some staking pool or something like that. I mean, you're saying one block producer, right? Well, it depends how many you want to do that with. Sure, sure, I see. So wait, okay, so what is the geth limit? It's like 128K, and we would like... Geth does not propagate transactions larger than 128K. So you could do it by sending a bunch of 128K transactions. Yeah, why not do that? It doesn't guarantee they all get included in the same block, right? Well, it doesn't guarantee that, but if you bid a bit more for gas, then it should be fine, yeah. Yeah, that might be the simplest way. Yeah, Ansgar, and then we can probably wrap up. Right, I just wanted to say that, given that we haven't yet had any post-merge load testing in general, I think we should be careful: maybe not immediately start with multi-megabyte loads, but kind of slowly ramp up to this, just in case we notice that, you know, even at 500K the network's already struggling. Right, yes. There should be a moment to stop. So I think there's a big... Well, I mean, I don't see much of a scenario where this would cause permanent damage to the network. Well, it might, just because attackers could watch this and be like, oh, it's easy to actually bring down the Ethereum network, you know? And it does cause permanent damage, because it increases the history. Look, we're talking about a few megabytes; we're not going to add gigabytes to the history. No, this is not a valid argument. I mean, yeah, you could flag that up to someone, sure. But honestly, we need to be resistant to that attack anyway, so I don't think we have to be that careful, I'm pretty sure. Well, but I mean, even temporary network instability, you know? I think we all have pretty high reliability standards for mainnet, so I personally would be much more comfortable if it was a multi-step... If it takes one megabyte blocks to bring it down, then we're already not satisfying that standard, in my opinion. So... Yeah, I think at the very least we can just use vanilla geth for the transaction propagation. One megabyte blocks will already bring a bunch of validators down; that's just how it is. With continued one megabyte blocks, there are a lot of validators that don't have the bandwidth required for this. And they... I mean, what's a lot? Sorry? What's a lot? Like, 1% or even 5%? It does not impact network quality. Let's just be honest, it's not a problem. Fine, they miss some attestations, who cares? It's nothing, they lose a cent per attestation. Look, it's not a concern. Mainnet is not going to go down because of this, just because we've designed it so that 30% can go offline without anything happening. Sure, but if we can minimize however much we take offline... I don't see why we need to start testing this on mainnet. Oh, we don't start... Oh, yeah, we won't start on mainnet, no, for sure. Of course, we can do this test in like a month on mainnet; I don't care about this. But like, we shouldn't...
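Back-of-the-envelope for the "bunch of 128K transactions" approach, assuming post-merge mainnet defaults and all-nonzero calldata bytes (both assumptions; the 128K cap is geth's default gossip limit mentioned above):

```python
TX_CALLDATA_BYTES = 128 * 1024  # geth's default max gossiped tx size
CALLDATA_GAS_PER_NONZERO_BYTE = 16
BLOCK_GAS_LIMIT = 30_000_000    # assumed mainnet gas limit

gas_per_tx = 21_000 + TX_CALLDATA_BYTES * CALLDATA_GAS_PER_NONZERO_BYTE
txs_per_block = BLOCK_GAS_LIMIT // gas_per_tx
calldata_mib = txs_per_block * TX_CALLDATA_BYTES / 2**20
print(gas_per_tx, txs_per_block, calldata_mib)
# ~2.1M gas per tx -> ~14 txs per block -> ~1.75 MiB of calldata, a bit shy of
# the 2 MiB goal; all of them need a high enough priority fee to land together.
```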
Yeah, but testnets are not going to tell you anything interesting, because of how everyone runs their testnet nodes. Well, you would hope, that's the thing. So if this works smoothly on testnets, then you can move to mainnet. But if we break something on a testnet, it's much better to have broken it on a testnet first. So I think... I feel people are being overcautious here, but yeah. I mean, I think, yeah, we're already over time. My feeling is that gradually ramping up the size of things we do seems to be the best way to go. And we probably don't have to deal with this being bigger than the geth mempool transaction cap for now; even when we move to mainnet, we can probably do a first test with something like a bunch of 128K transactions with a relatively high priority fee and hope that most of them get in the same blocks. Pari, on the metrics we want to track, does it make sense to just move that to next week as well? Because, well, first, we're sort of out of time, but... Yeah, do you want to take two minutes and maybe talk about that? I suspect to get a full list we'll probably need the chat, I think. We can do that then, that's fine. Okay, yeah. So I guess, yeah, in the next week let's discuss this in the Telegram group. If anyone wants to be part of the Telegram group talking about this experiment, reach out to me and I'll add you. And then I think next week, for this call, if we can have the issues that we talked about mostly resolved, a cleaner spec or target for DevNet 3, and a set of metrics to target for these experiments, that would be really good. Anything else before we wrap up? Okay, yeah. Thanks, everyone. Appreciate you staying on a couple extra minutes, and talk to you all soon. Thanks, everyone. Have a great day. All right, thanks, Tim. Bye.