OK, that's it. Welcome everyone to the 4844 implementers' call. Roberto, just to confirm, you're going to be taking notes this time? That sound good? Yes, thank you for reminding me. Awesome. For folks who want to follow along, the notes are here. I'm going to be running the call because Tim's on a plane.

Before we dive into the agenda, I wanted to quickly check on a few outstanding action items that I wasn't sure whether we followed through on; let me just share my screen. The ones I made progress on I'm going to check off, but let me know otherwise. OK, let's dive in.

So we're going to be talking about the spec updates. Ansgar, you mentioned that you had to jump on a flight, so why don't we start with your spec update? I think the first one on the list is the modulus one. Right. So that one, basically, I think is just ready to merge, and maybe I would want one more set of eyes to look at it. It ended up being slightly modified, I'm not sure if we talked about this last week, to also return the polynomial degree. And I have a small personal question mark there, just about best practices for concatenating the values. Right now the modulus is basically output as-is; it's not padded. I think that makes sense. I also think it makes sense to have the length leading and then the value following; small things, just in case you want to use it. I think we're having a hard time hearing you because of the background noise. Maybe try talking a little closer to the mic, a little slower. OK, let me have someone else go ahead while I find a better location. OK, sounds good. Let's come back to Ansgar's one when he's ready.

It looks like there are three specs that we merged. The first one is the getBlobs v1 change to the execution APIs.
Any additional context folks want to share on that one? I guess, yeah, that was a Proto one. Proto, any additional color to add there? No additional color. I think the execution APIs are updated, so please update the implementations too, if you haven't already. Awesome. Any questions on that, or do folks have questions on the execution APIs?

Yeah, just one question about this precompile. The execution spec includes some lines about verification of values passed to this precompile. We do not verify x and y values there, I mean, when we verify blobs, but we do verify them in this precompile. Can we delegate that to a cryptography library instead of validating it on the execution client side? And just to make sure I understand, Alex, I think you're asking about the execution specs rather than the execution APIs, is that right? Or the engine APIs? Yeah, it's more about the point evaluation precompile specification. Yes. Okay. I think the intent is that we do want to move everything into go-kzg, but you're correct that right now some pieces of the implementation remain in the clients. That's on the to-do list. Alex, just so we can fully understand, what's the specific thing you're asking about? Yeah, let me rephrase it like this. The point evaluation precompile is described in the specification, and it includes just two lines I wanted to discuss. We are requested to verify that the x and y values, inputs of this precompile, are less than the modulus, right? Should we move that to the cryptography side and not verify it in the execution client directly? Right, I think I see what you mean. The way I see it is that the precompile code kind of guides you: x and y are scalar elements and hence need to be smaller than the modulus. But you're right that most likely in your implementation you're going to wrap them into a type that does the modulus check inside of it.
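For illustration, the kind of wrapper type mentioned here might look like the following. This is a minimal sketch with hypothetical names (`Scalar` is not a spec identifier) that performs the canonical-range check once, at construction time:

```python
# BLS12-381 scalar field modulus, as used for EIP-4844 field elements.
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

class Scalar:
    """Hypothetical wrapper: holds a value already checked to be canonical,
    so downstream code never has to repeat the modulus comparison."""
    def __init__(self, data: bytes):
        if len(data) != 32:
            raise ValueError("scalar encoding must be 32 bytes")
        value = int.from_bytes(data, "big")
        if value >= BLS_MODULUS:
            raise ValueError("value is not a canonical field element")
        self.value = value
```

With a type like this, the precompile's explicit less-than-modulus assertions become an invariant of the type rather than a step client code has to remember.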
Now you're saying whether we want to move this entire thing into the KZG library, as in, the library should receive integers or bytes or something and do the precompile itself? Yeah, you read my mind exactly. Right, okay. If you think that's better for you, because you're managing less cryptographic burden on your side, we should consider it. You're right that there are some abstraction leaks there. Oh, okay, thanks. I mean, the point evaluation, sorry, verify KZG proof, pretty much takes bytes, right? Like the API. Does it? Does it not? You mean the KZG one? Okay, well, maybe not for this call. I can make a note about this, about figuring out the interface of the precompile for client devs, and work on it offline so that we don't hog the call. Yep, thanks. Yeah, feel free to bring this up in the execution implementation tracker. I did have a chat with Proto just yesterday, in fact, about some of this stuff being moved into KZG, so we can probably align over there. Just to make sure I fully understand the outcome of that back and forth between George and Alex: do we think we want to make a change to the KZG interfaces and the interfaces that we defined in the consensus specs? Actually, it's not strictly required, because we already have a byte-array-based interface. I'm not sure we need to additionally verify this modulus constraint for x and y, because it will probably be done on the cryptography library side too. So we do this twice, and we need to unmarshal these values, I mean, transform them to integers to compare with the modulus. If we could just pass bytes to the cryptography implementation, we could save some execution time and avoid writing that code. Yeah, it would be much cleaner if we could clean this up in the specification.
It includes these checks for the modulus, and I just don't see that we need them there. Gotcha. Yeah, this is a fair request: presumably verify KZG proof specifically could just check that the values are within the modulus bounds. The only concern, and it's not really a major one, is that those assertions let us fail out early if the user provides invalid points, because we do the expensive kzg-to-versioned-hash computation later. If we were to move that to the KZG library, then if the user provided invalid points we'd end up doing the hash computation anyway, which is not a good thing to do if you're in the EVM. The alternative is to move the entire function, kind of like what George was saying, and have it implemented by go-kzg, so client implementers don't have to worry about all that crypto stuff. But I think we still need to keep the assertions prior to the verify KZG proof call so we can fail early. Sorry, Mofi, which hash are you talking about? The precompile doesn't have any hashing, does it? Or does it? I'm looking at the spec right now: the kzg-to-versioned-hash. Yeah. Okay, and you want the basic assertions to trigger before the hashing happens? Yes. Okay, I see. Okay, you're saying it may indeed be better to have the cheap checks run before the heavy calculations, so let them stay, probably, yeah. Right. I mean, Alex, if you're saying this is going to make your life easier, and since CKZG is a very tailor-made library for this specific purpose, I think that's a good indicator that we should make the interface better. Now, in terms of how the spec should look, since it's Python and so not strictly typed, I don't see how the actual assert lines can be completely removed, because that's the implicit thing that says they are scalar types.
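To make the fail-early ordering being discussed concrete, here is a rough sketch in Python. The function and parameter names are placeholders, not canonical spec identifiers, and the expensive KZG verification is stood in for by a pluggable `verify_fn`; `kzg_to_versioned_hash` follows the sha256-based construction in the EIP as I understand it:

```python
import hashlib

# BLS12-381 scalar field modulus and the KZG versioned-hash version byte.
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    # One version byte followed by the tail of sha256(commitment).
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def point_evaluation(versioned_hash, commitment, z, y, proof, verify_fn):
    # 1. Cheap structural checks fail first: lengths and canonical scalars.
    assert len(z) == 32 and len(y) == 32
    assert int.from_bytes(z, "big") < BLS_MODULUS
    assert int.from_bytes(y, "big") < BLS_MODULUS
    # 2. Then the hash binding check.
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    # 3. Only then the expensive cryptographic verification.
    assert verify_fn(commitment, z, y, proof)
```

The design point is simply that steps 1 and 2 are cheap, so an invalid input never reaches step 3; moving the assertions into the library would lose that ordering unless the library preserves it.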
I don't know how the spec will look if we change the interface, but I agree that we should look into the interface to make it nicer for development. Okay, yeah, I just found these basic checks quite useful, so maybe the interface can stay as it is; my statement was just about these two basic checks, which I hadn't found useful previously. So it's okay, looks okay to me. Okay, so I think we can resolve this. Is the takeaway that George and Kev are going to look at whether there's an interface change to make here to CKZG, to add another function? Yeah, potentially. Okay, well, let's create an action item to do that, and if you guys decide no, that seems like a reasonable takeaway as well. Sounds good.

Great. While we're on the topic of cryptography, George, the last cryptography spec API change got merged; any other context to share there? No, I don't think so. The PR got merged, the interface got simplified, apparently maybe not simplified enough based on Alex's comments, but all in all I think it's a simplification, so not much to report. If there are any questions, we also have a CKZG Telegram group that might be a bit more specific, if you have questions on how to interface with CKZG, for the client developers here. Yep. And confirm for me, let me know if this is right or wrong, but I believe CKZG now implements these interfaces for client developers who want to be leveraging that. Is that right? What's that, can you say the question again? Does CKZG implement the new interfaces from the 3038 change? That's actually a good question. I'm not sure which branch; I know Ramana had a PR at some point. There is a branch with the new interface, I just don't know if it's the main branch or not. I think it is. Dankrad, do you remember?
I believe he merged it into the 4844 branch; that's where we merged it, as far as I know. Yeah, it's in the 4844 branch. Okay. So yeah, I had another question about the interface. For the load trusted setup function, is that how we are planning to load all the trusted setup values for the devnet, through a file? Yes, exactly. That function, you pass it the trusted setup parameters, let that be devnet, let that be minimal, let that be mainnet, whatever, and it loads them into the library to do the appropriate steps in the other functions. Awesome. So just to summarize for client developers who are doing implementations: the CKZG 4844 branch now supports the new cryptography interface that was just merged, and that should make implementation much easier. I also know that we now have JavaScript bindings available for that, which might be useful for EthereumJS, who I saw is starting implementation. And then I believe there are Rust bindings and Java bindings as well, but maybe someone else can chime in to confirm. I'm working on the Rust bindings right now; I hope to have a PR by end of day tomorrow. I don't think the Java binding is up to date, because there was a Java binding on the original KZG repo, but now it needs to be adapted to the new interface. I see also that all the bindings are in the repo itself, while the original one was a separate repo. Oh yeah, the Java binding needs work at the moment. Yeah, and our library that supports multiple ECC backends is currently updated to the new interface for the blst backend, so those who use the Rust library can use it. Are there any clients that don't currently have a clear path on the cryptography library to use for their implementation? Enrico, on the Java side, is that currently blocking y'all? Do you have a path to update those bindings?
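As a rough illustration of the file-based flow being described: the text-format trusted setup, as I understand the format used in the KZG repos at the time (verify against CKZG itself), starts with the G1 and G2 point counts, followed by one hex-encoded point per line. A minimal parser sketch:

```python
def parse_trusted_setup(text: str) -> tuple:
    """Parse a text-format trusted setup into (g1_points, g2_points).
    Illustrative only; the real loader lives in the CKZG library."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    n_g1, n_g2 = int(lines[0]), int(lines[1])  # point counts on the first two lines
    points = [bytes.fromhex(l) for l in lines[2:2 + n_g1 + n_g2]]
    assert len(points) == n_g1 + n_g2, "truncated trusted setup file"
    return points[:n_g1], points[n_g1:]
```

The appeal of a file-based loader is exactly what's said above: the same code path works whether the parameters are the minimal preset, a devnet setup, or the eventual mainnet ceremony output.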
We were discussing with the Besu team, because as far as I know Teku and Besu are the only clients that require the Java bindings. We definitely have a bunch of work before this binding becomes really required, so we are planning internally the timing for starting work on it, but yeah, if there's some help from outside it will be appreciated; otherwise at some point the Teku team or the Besu team will start working on it. The assumption is that it should be an easy binding, but I'm not an expert on that, so I don't have a clear idea. Okay, sounds good. Well, I'll just offer an observation: two weeks ago there was uncertainty around the cryptography APIs and uncertainty around the crypto libraries, and now we have a good crypto library, strong APIs, and most bindings in progress, which seems like a lot of great progress.

Okay, going to keep us moving. Ansgar posted updates on his few things, so I'm just going to talk through what he shared in the channel. The first one is the modulus change; that's ready for merge. The precompile now returns two values, degree and modulus, and he'd love one more person's eyes to double-check the encoding. Maybe Dankrad, yeah, probably Dankrad is the right person to look at that. Otherwise we're ready to merge. Anyone have any questions on that change, in terms of returning the modulus in the precompile, or on the encoding? Okay. Maybe someone else can weigh in. I mean, the current encoding is literally just eight bytes for the number of elements and 32 bytes for the modulus. There's no real encoding; it's literally just those two values concatenated. So yeah, I don't know if anyone has an opinion on this; I don't know how these things are usually done. Right, that was the only question: just to double-check that we keep with how precompile returns are usually done, and whether there's, for example, a preference.
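Concretely, the return layout just described, eight big-endian bytes for the element count concatenated with the 32-byte modulus, can be sketched as follows. The constants reflect the proposal as discussed on the call, not a finalized spec:

```python
# Element count (polynomial degree) and modulus as discussed; treat as
# the proposal under review, not the normative encoding.
FIELD_ELEMENTS_PER_BLOB = 4096
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def encode_precompile_return() -> bytes:
    """Eight-byte big-endian element count, then the 32-byte modulus."""
    return FIELD_ELEMENTS_PER_BLOB.to_bytes(8, "big") + BLS_MODULUS.to_bytes(32, "big")

def decode_precompile_return(data: bytes) -> tuple:
    assert len(data) == 40
    return int.from_bytes(data[:8], "big"), int.from_bytes(data[8:], "big")
```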
To me it feels like this ordering, basically having the non-32-byte value first, is slightly preferable, because it makes loading from memory, once it's written to memory, slightly more efficient: once you load it, you don't have to mask the leading bytes, because you can just take a memory location with leading zeros anyway, or something. I'm not sure if compilers are smart enough to do this anyway; it would be a very small efficiency gain. But these kinds of things, I just basically want to make sure this is the standard way to do it. Other than that, I think that, yeah. Anyone have perspectives on that encoding? I think it's fine; you can just go ahead and call that the official encoding. Sounds great.

So for the next one I'm just going to keep giving the high-level overview, Ansgar, but let me know if you want to jump in: reducing the throughput of blobs. The values in the PR are currently a 256 kilobyte target and 512 kilobyte max, reduced down from one megabyte and two megabytes. We're going to talk about the tests we're going to run later, but that's the current proposal from Ansgar. Anyone have any questions or thoughts on that? Ansgar, one question I had is: do we want to wait until after we run the network tests to merge this fully, or do you think it makes sense to merge this and then readjust back upwards if necessary? My weak preference would be the latter, just because it seems like the saner default, but I think either works. Anyone have strong opinions? It's an easy parameter update. Great. Well, Ansgar, I say let's move forward with merging that, and we can always adjust it later if we need to. And then the last one is the PR to set a minimum data gas price. There's been a bunch of discussion on the PR, but no consensus reached for now. This is again just a constant that we can change in the future, but does anyone have any commentary or questions there? Who is
driving that investigation, or is it just tabled for now? I think Ansgar is the point person on it. Okay, I think we will leave that one open. Again, it's just a constant change, so it shouldn't block anything on the client implementation side.

Two other spec updates to talk through. One just got merged, and this was, Proto, one from you from a while ago, to, I think, just bring everything in line with the new fee market update on the consensus specs. Any additional commentary to add there? I reviewed it, but you should credit Ansgar, and I think Ansgar is boarding his flight, so he's not able to comment. Yeah, I think the fee market changes are ready, and they're merged, so you can continue with implementation there. Great. Does anyone have any questions on the new fee market specification, or is anyone blocked on it in implementation? Today we do have some fee market tests in devnet v2 as well as devnet v3, in case anyone is implementing clients. And are those in the interop repo, is that what you're saying? Yeah, maybe someone who implemented them could speak better to it, but they're spec-level tests, so they basically fire up the clients, upload a few blobs, and check things. So if your client has been integrated, you should be able to run those. Awesome.

Okay, one last spec update on the agenda to discuss; oh no, there's actually one more after this. And this is yours, Mofi: the rebase of 4844 on Capella. Can you give a quick status update on where things are and what the next steps are? Yeah, right now it's blocked on the withdrawals PR; we're waiting to get that merged in so that I can rebase 4844 on top of it. That's pretty much where we are. I think we've gotten consensus on how the rebase should look and how we want to do testing. All that's left is to update the Capella withdrawal structures, and we should have that
merged by tomorrow morning. Splendid, great. So this was probably the biggest open question in the last implementers' call, but with this resolved we are planning to, actually, let's talk about that in a second. Does anyone have any questions on the decision around rebasing 4844 on top of Capella and how that will change the implementation? Sweet, okay.

The last spec changes: Terence, I saw you opened a PR to discuss a few things; would you mind giving us a quick overview? Yeah, hello everyone. So I opened an issue this morning; there are a few more to-dos from what I have observed over the last few weeks. The first one, which is my fault for not fixing, is that right now the block and blobs are gossiped together, but we should probably add a note saying that the old beacon block gossip will no longer be supported, because I think there's some confusion: people are assuming that we will still be supporting the block gossip topic, which we will not. That should be easy, just a few lines of notes in the consensus networking spec. Yeah. Any questions on that? If not, the second one is that we currently do not have a way to request a blob by root. Right now, if a block is missing from attestation, you can request a block by root, but we don't have a way to request the blob by root. So there are two ways to go about it: the first is to implement a blobs-by-root method; the second is to implement a block-and-blobs-by-root method, just assuming they come together. I'm leaning towards the second way, just because it's easier and you can avoid two calls. And it sounds like there are a few people who agree with me on the Discord channel, so I'm wondering if people have feedback on this, or is there a way people prefer? So the range requests, we're going to keep them separate, right? Sounds like it. Yeah. Wait, so on the range requests, we were actually
just discussing this too, but we were thinking it might be better to have a range request with both the block and blobs, and then just a separate block range request, because at no point can you do anything if you just have a blob. So why would we be asking for a bunch of blobs on their own? Yeah, I tend to agree, because I don't see how a client ends up having only a block at this point, if we have the main topic that gives us the coupled version, right? You'd only have a block if the blob pruning depth is less than the block pruning depth. So that would be a reason to keep the blocks-by-range request, but we could just have a separate request for block-and-blobs by range. Yeah, my argument for keeping those separate is that historic sync essentially stays stable: you can keep historic sync and not have to think about it, and add syncing the blobs, rather than reworking it to then have to use both of these methods. And again, my argument is that in the future, assuming full sharding, you would still do full block sync, and that element would not change; you would not do the blob sync, which is additive. So that allows us to keep this core machinery totally stable. At the end of the day, I'm not the one engineering it, so I could be wrong. In other words, I think one more argument to have them separate is just so you can sync both in parallel. But then you also have this implementation complexity: what happens if one gets here without the other? So I think it's a lot simpler to have them coupled, from the implementation perspective. Yeah, I agree, because if you make it separate, then you're handling two different things that might be going wrong, and you also start thinking about an optimistically validated block while you're retrieving the blobs: when do you give up, when do you go and retry? So there's a lot of complexity there if you keep it separate. I guess the way I think about it is to just have the blobs as a dependency: once I have
a range of blocks, get the range of blobs, rather than trying to interleave them. Then you get your historic block syncing without having to touch it at all, and you just have this kind of secondary follow process that ultimately would be pruned out of the codebase. So it allows for that to remain totally separate. Whereas, because there are going to be different pruning depths on blobs and blocks and what you would be retrieving historically, you'd now have to interweave: you'd have to totally change your previous sync process to consider, you know, am I doing this method or that method, and then once we go to full sharding you'd have to change it again. Again, that's my argument; I will leave it there. So, in the event of full sharding, can we just reuse the coupled request/response RPCs and then just leave the sidecars zeroed out? Sure, but then you have a crutch where you're still switching between the two, because you're still going to have to account for different pruning depths, or you're going to change a constant to assume they're pruned up to the same depth, because now they're zeroed. I'm sure there's plenty of ways to try to work around it. We can emulate the coupled request/response methods by just calling one after the other; it's not so much a consistency issue as it is with gossip, since we're talking to the same peer. But we still have to handle the failure cases, is the thing, where one peer just doesn't give us one of the two things. Yeah, and generally I feel like with the moving window for pruning, that's not too much of an issue, so long as we have some margin of error past where we're pruning, where all the clients still have blobs. And yeah, generally, having the blob and block separate looks simpler, but because we have to deal with these edge cases that might not ever happen, I think it makes it more complex. And then just using one request at some range
and a different request at another range is, in implementation, I think actually simpler. And what happens when my clock is slightly off and I make a blobs-by-range request that is outside your pruning depth? Do I return zeros? You know, there end up being edge cases there as well. Right, I think that could be resolved generally by, in implementation, not specifically coding to the minimum epoch depth for what you serve, and just not requesting past it; if you happen to accidentally request past it, clients should have some epochs of margin of error. Yeah. And then to the point of whether it's more performant to download blocks and blobs in parallel as opposed to having one big request: Diva made the point that the bottleneck is actually processing, not download, and if you are requesting them separately you can't process them until you have both, so it doesn't really make a difference. I don't think we'll be able to come to a conclusion on this call, so should we just follow up on the issue itself? Yeah, that suggestion makes sense to me. It's 3087, by the way. I was also discussing, so my second point was basically: we need a blobs-by-root method, or we need a block-and-blobs-by-root method, right? So, any objection to having them coupled for by-root? The thing with the by-root request/response methods is that some other clients, like Nimbus, have made the case in the past that sync should not rely on these by-root methods as much, and maybe we should even only support the by-root methods for a recent part of the history. They were saying it's much easier to index the finalized data linearly and then store it in optimized ways on disk and so on; we have whole new file formats for this and everything. And then creating this by-root method kind of ruins that, by introducing an additional database index to find everything by root.
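For what it's worth, the coupled shape being debated can be sketched like this. The names are hypothetical, not the spec's, and plain dicts stand in for a client database:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class BlockAndBlobs:
    """One response element of a hypothetical coupled by-range method."""
    block: Any
    blobs: Optional[Any]  # None once the slot is past the blob pruning depth

def serve_coupled_range(blocks: dict, blobs: dict, start_slot: int,
                        count: int, blob_min_slot: int) -> List[BlockAndBlobs]:
    """Serve blocks together with their sidecars, so a requester never ends
    up holding blobs it cannot attach to a block."""
    out = []
    for slot in range(start_slot, start_slot + count):
        block = blocks.get(slot)
        if block is None:
            continue  # skipped slot: no block, nothing to serve
        sidecar = blobs.get(slot) if slot >= blob_min_slot else None
        out.append(BlockAndBlobs(block, sidecar))
    return out
```

Because the blob field is simply `None` beyond the pruning window (or, later, under full sharding), the same machinery keeps working when the sidecars go away, which is the reuse point raised in the discussion.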
Yeah, I feel like this is more for missing blocks from attestations, more from the current epoch or the last few slots, which you do need, right? Let's say, if we introduce it, we can limit the usage of this to the more recent part of the chain, so that we don't require additional database changes in clients. Yeah, that sounds good. I mean, I also feel this is more of a general issue than a 4844-specific one, right? Because if we repeat the same issue, then we make it much harder to clean up. Yeah, yeah. Okay, so that's all I have; those are the three questions, or ongoing discussions, I want to follow up on.

Just to confirm where we're leaving these: on the first one, it sounds like we should just open that spec change. Yeah. And so are you going to do that? Yeah, happy to. Great. On the second one, it sounds like there's more discussion that we might need to have; do you have a decision on the second one, or should we open up a new PR for that one as well? I can open a PR, but we don't have to merge it right away; just let it marinate, discuss, and see how it goes. Yeah, okay. And then the third one, it sounds like there's active discussion already, right? Right, right. Are any of these blocking? You know, I'm not as well versed in these domains: are these core, blocking spec changes, or would they be more nice-to-haves? How important is it to drive resolution on these over the next couple weeks, from a finalizing-the-spec perspective? Well, I think the second one is definitely important for devnet v3, just because today, if you're missing a block and cannot get it, then you're kind of stuck. So the second one is definitely devnet-v3-blocking, but it's not so hard to spec and also to implement. The third one, I think, is fine; we can spend a little bit more time on it, just because, for the devnet v3
purpose it's not super important to backtrack on; I mean, it's a nice-to-have, but it's not a blocker. Got it. Okay, well, for the second one, which seems like the more critical one, I guess: is it something we can decide today, and if not, what's the path to getting it decided in the next couple of days, so we can continue making progress towards devnet v3? Yeah, just feedback from everyone. I will open the PR and post it on Discord, and I'm just hoping for feedback from everyone. Okay, sounds good. Proto, it sounds like you have opinions there, so you weighing in and helping us nail that down would be really helpful. At a high level, I think there's a nice compromise here where we do support the by-root method in some form or another, but we limit the by-root usage to a time span not as long as the full by-range support, so that we don't have to create a database index to support this method. Okay. Are there any other active spec changes or spec discussions that folks want to raise before we move on to discuss devnet v3? Okay, let's move on to devnet v3.

Quickly, I wanted to just look at this spec overview that we have. Can folks see this screen now, with devnet v3 on it? I think yes. Yep, awesome. So I believe there are no changes on the execution layer side; we are going to include the modulus change that should merge, but there's nothing else new that we need to do there. On the CL spec, we have now merged the cryptography API, we've merged the fee market changes, and we have a resolution on the rebase on Capella. Let me just quickly make these changes inline. These two are merged, so we can take those out, because they're in the main spec. And then 3052, we plan to keep that in the devnet, is that correct? Yep, yep. Okay, great. And it sounds like we can have that this
week, correct? Yep. And then it sounds like there's also one other one, Terence, the upcoming PR that you were talking about, allowing blob retrieval by root. Yep. And does that have to block the devnet, or do we want to put it in the critical path for the devnet? I think so, just because there's no way for us to proceed if today we are missing a block and we don't have the blob; you kind of deadlock there. But it shouldn't be that hard to, yeah, to make it happen. Okay. Any other EL or CL changes that folks are expecting to be part of the devnet? Last time we had this upcoming PR to block broadcasting blob transactions by default; did we do that? Is that just the gossip, oh no, sorry, it was basically a change to, and I think there was debate between Ansgar and Marius about whether we actually wanted this to be encoded in the spec or not. My understanding is that it was related to eth/68 and not necessarily part of the EIP. Marius isn't here. I remember there were two approaches to blocking broadcast: one is to do it by the transaction type, and the other is to do it by the size of the transactions. I don't know which direction. Yeah, and I think the debate was basically: do we need to actually include this in the spec or not? And it looks like we have an action item for Ansgar to update the spec, specifying that blobs must only be announced, but maybe that didn't get done and I was premature in resolving that action item. I don't think this is strictly blocking the devnet, but we should follow up with, oh yeah, Ansgar is on the call still. Great. Ansgar, can you open the PR to do that? Yeah. Great, thank you. Announcing, and then peers can pull? Yeah, exactly. Okay, because they're too large to be pushed under the default limit? Yeah, yeah, that was basically the compromise we came to at the Devcon workshops, to make it easier for the clients
to manage DoS vectors. Okay, so we will get that in. And then this engine getBlobsBundle, that one is merged, so we can just have it there. This "include withdrawals field as part of the engine API", I believe that was blocked on 3052 on the CL spec to get in there. Mofi, is that right? Yeah, yeah, definitely. Okay, so, do we have a PR open for that, or can you take an action item to open that PR once the CL spec change lands? Yeah, sure. Also, relatedly, I think we should update the EIP, because the EIP does specify what the new header should look like, since we're adding the excess data gas. It's not a big deal, because I think most client devs should understand that we should include withdrawals, but in the interest of completeness I think we should also update the EIP to just say that it has a dependency on withdrawals. Yeah, okay. So are you going to take that as well? Sure. Okay, I will make the note here: upcoming PR to highlight the dependency on withdrawals. Any other commentary? It looks like, mostly, the modulus change is going to merge, then we need to get the rebase on Capella, and then resolve blob retrieval by root, but other than that the scope is basically locked in. Any questions, thoughts, objections? Cool.

With that, I was wondering if it might make sense to do a quick client roll call on what clients we're expecting to have participating in the devnet. We had Geth and Prysm existing. Do folks want to quickly check in and say whether they're interested in participating in the devnet or not? Yeah, for Lighthouse, we definitely want to. We're pretty far along in implementation, but currently working on integrating KZG and fleshing out sync, so once we get that specced out we can knock it out. Yeah, Nethermind is going to join too, maybe not with everything working. Our CKZG library, yeah, needs some tests and
attention, and other stuff is not yet merged. Does that mean that you'd want a smaller allocation of validators, as a percentage, given that work? Yeah. Sorry, I missed who that is. Which client? Nethermind. Yes, yes. And then, Roberto, how are you feeling about Erigon at this point? Oh yeah, I think it's very doable. We will likely do Erigon at this point. Any others? Besu, Teku? I know Nimbus hasn't had the bandwidth to get started, and I think that's the full set. Yeah, Teku very unlikely. I think very unlikely for Besu as well. All good. OK, well, a six-client devnet would be pretty awesome, so keep pushing in that direction. Do folks have a sense of a high-level target for when we'd want to be ready for devnet three? I'd like to shoot for before the Thanksgiving holiday. I think that might be realistic. We need some degree of lag after the spec is completely stable. It'd be nice to have it before Thanksgiving, but I feel like we will need at least a week, more realistically two weeks, of lag after the spec is stable. So, stabilize the spec by next Monday or Tuesday. With that timeline, would before Thanksgiving still be feasible? I don't recall what Thanksgiving's date is exactly, but if it's a week after that, it seems challenging. I'd give us one week. Yeah, there's a coordination component that might make it difficult, but I think we could get the development done by then. Is the All Core Devs call this week on Thursday, or next week? This week. This week. And then is the next one on Thanksgiving, on the 24th? Oh, sorry, I think it is, yes. OK. It seems like we should be preliminary... yeah, I think luckily we have a very global audience. It seems like we should probably be pushing for the week after Thanksgiving to have the devnet fully stood up. Based on what we know today, I think that makes it more likely more teams could participate,
which would be good. Yeah. I mean, come Wednesday of that week, I think a number of people are going to drop off for a few days, and so that means trying to launch on the 21st, the Monday, which, given spec changes done maybe the 14th, is a really tight turnaround. Yeah. OK, well, why don't we preliminarily plan to launch on the 30th of November, which will be the day after our implementers' call that week. That gives us this week to confirm all the spec changes, the next two weeks to get implementation done, and then we button up any loose ends in the first half of that week and launch devnet three. Does that sound reasonable to everyone? So let me just add this timing: targeting launch on 11/30, which will make it by the end of November. And I know we have All Core Devs on Thursday of this week. It seems like, from a spec perspective, the remaining changes are really the withdrawals change and then this one conversation around blob retrieval by root, because the modulus one is going to merge. Mofi, Terence... oh, I guess Terence just left. The more we can push to get those two spec changes at least having a line of sight to being done, so that by the time we go into All Core Devs we can say, hey, we're basically finalized from a spec perspective, the better. OK. We are at time. The two other items on the agenda were the large block spam test and the readiness checklist. The readiness checklist we can update async based on all the progress here. In terms of the large block spam test, is Dan Lee here from Paradigm? It doesn't look like it. I can share an update from them. Paradigm is going to be running the first iteration of that spam test on testnets, either this week or next week, and then we'll move to mainnet. That's going to start giving us data on current network
characteristics, which we'll be able to use to finalize the blob size and, in general, to build our confidence around the network. If folks have questions or thoughts, or want to be involved in those tests, there's a Telegram group we have, and we can add you to it; just let us know. OK, I think that's all. Anyone have any final comments, questions, thoughts? I guess one last ask from me: this is my first time running this kind of meeting in Ethereum development, so if you have feedback or thoughts about how it could have run better, please feel free to reach out. I'm Jesse Pollack on Discord, Telegram, and Twitter, and I would love any thoughts on how I could better show up for Ethereum. OK, thanks everyone. Have a great day. Bye, everyone.
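[Editor's note] The announce-and-pull compromise for blob transactions discussed earlier in the call can be sketched roughly as follows. This is an illustrative sketch, not any client's actual implementation: the `0x03` type byte is the EIP-4844 blob transaction type, but the 128 KiB size cutoff and the function name are assumptions standing in for whichever of the two approaches (by type, or by size) clients settle on.

```go
package main

import "fmt"

const (
	blobTxType       = 0x03   // EIP-4844 blob transaction type byte
	maxBroadcastSize = 131072 // assumed cutoff (128 KiB); the real default limit is client-specific
)

// shouldAnnounceOnly decides whether a transaction should only be
// announced by hash (so peers can pull it on demand, as in eth/68
// announcements) rather than pushed in full to peers. The two
// criteria mirror the two approaches debated on the call: blocking
// broadcast by transaction type, or by encoded transaction size.
func shouldAnnounceOnly(txType byte, txSize int) bool {
	return txType == blobTxType || txSize > maxBroadcastSize
}

func main() {
	fmt.Println(shouldAnnounceOnly(0x03, 1_000))   // blob tx: announce only
	fmt.Println(shouldAnnounceOnly(0x02, 500))     // small dynamic-fee tx: broadcast in full
	fmt.Println(shouldAnnounceOnly(0x02, 200_000)) // oversized tx: announce only
}
```

Either way, the effect is the same: full blob payloads never ride the push path, peers learn about them from hash announcements and fetch them explicitly, which avoids flooding the network with multi-hundred-kilobyte messages by default.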