Over and... okay, the stream should be transferred over. If you're in the YouTube chat, please let us know if you can hear us. This is issue 536 on the PM repo, Consensus Layer Call 88. We'll focus on the merge, talk a little bit about the reorg that happened on my birthday, and then open up to any other discussion people want to have.

Okay, cool. So, the merge. We have a number of agenda items. First of all, can somebody give us an update on what's going on with Ropsten deposit tracking, and whether this is isolated to specifically what we're seeing on Ropsten, or if instead it's some more fundamental issue that we might see on mainnet or during the merge? Who has the update on this?

From Lighthouse: yeah, we had some problems on our end due to the really long block times, up to a minute or two, I think. So we were voting on some old blocks. It's not a threat for mainnet unless block times get to be more than a minute or so. We've got a PR up that solves it; it moves to a more dynamic approach to building the block cache, similar to what Teku does. It's looking good on Ropsten now, so that'll be in the next release.

Got it, okay. And block times were really long on Ropsten because of the jump in difficulty from the prior week's high hash rate, and then the hash rate dropping off again? I'm not sure exactly why; I would assume that's the case. I haven't looked into the cause, I've just been fixing the effects.

Why do longer block times make things worse? Naively I would assume shorter block times would obviously make things worse, because it might be harder to catch up or to follow them. Why do longer block times not actually make it easier to track the head? What is the thing that breaks?

It's not tracking the head, it's tracking a depth. We have to agree on this depth, and the depth is based off of time: timestamps. So that's at least the grounds of the issue, and maybe there's some sort of assumption incorrectly being made. Paul?

Got it. Was there some sort of assumption incorrectly being made? Yeah, that's right. We were trying to avoid unnecessarily downloading blocks, so we made some assumptions about block times so that we wouldn't have to download all of them.

Got you. So there's a certain range: it's got to be within this range, we'll download those blocks and no more, but then that assumption fell over? Got it. Yeah, I think we had a tolerance factor, I can't remember what it was, but we were well outside of it.

Okay, that makes sense. And was this a Lighthouse thing, or a beacon chain spec thing? At least for us it was a Lighthouse thing: the way that we were doing the caching. I think there might have been some other clients having issues as well, but I can't speak to that.

Yeah, same with Prysm. We've adopted a similar approach to everyone else, and the fix has been pushed to our prod cluster.

Right, so this is not a spec issue per se; it's an assumption about what you might see in the wild that was then broken. More an engineering assumption. Yeah, that's right.
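For context on the mechanism being discussed: in the consensus spec, eth1 data voting targets eth1 blocks at a timestamp-derived depth rather than a block-number depth. A rough sketch of that candidacy check, paraphrased from the phase 0 validator spec (the tolerance factor mentioned above is a client-side implementation detail and is not part of this):

```python
# Sketch of the eth1 data voting candidacy window, paraphrased from the
# phase 0 validator spec (constants are the mainnet values).
SECONDS_PER_ETH1_BLOCK = 14      # assumed average eth1 block time
ETH1_FOLLOW_DISTANCE = 2048      # target lookback, in (assumed) blocks

def is_candidate_block(block_timestamp: int, period_start: int) -> bool:
    # A block is a candidate if its timestamp places it between roughly one
    # and two follow distances behind the start of the voting period. Because
    # the depth is derived from timestamps, nodes can still agree even when
    # real block times diverge from SECONDS_PER_ETH1_BLOCK, but a client that
    # only downloads blocks it expects to land in this window can be caught
    # out when block times blow out well past the assumed average.
    lookback = SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE
    return (
        block_timestamp + lookback <= period_start
        and block_timestamp + 2 * lookback >= period_start
    )
```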
Worth mentioning as well: eth1 data voting (we call it eth1 voting) has, for better or worse, always been a little bit wonky. It's got room to move, room to be wrong, so it's been a part of the code that hasn't received the same attention that, say, an incorrect state root would receive. It's always worked well enough for mainnet, and it's been the case that when block times get really weird and wonky, it's something we can release a patch for. So it's one of those flexible parts of the code, but clearly we could have done better.

Mamy, are you speaking? I cannot hear you; your box keeps lighting up on my screen. Oh, sorry, I forgot to mute.

Okay. I think there's certainly a desire to simplify this mechanism, and in doing so potentially make it faster, now that we're much more tightly coupled to the execution layer. This isn't a merge discussion, but it's something that myself, Michael, and others want to spend some cycles thinking about, because when something goes wrong on a testnet or even mainnet, it's a likely culprit, a sign that there's some unnecessary complexity here.

Just to confirm: the disappearance of the high hash rate is probably still affecting block times there. Okay. Anything else on Ropsten deposit tracking and its impact on the Ropsten beacon chain or mainnet? Are we good on this? Okay, great.

The next thing is that we need to launch a Sepolia beacon chain. There's a standing issue on the PM repo; I updated it this morning based on the conversation we had maybe a month ago about keeping the validator set small and generally permissioned. I think we just need to agree on the final parameters and get this thing launched. And I think there's general agreement amongst the people I've spoken with that launching it sooner rather than later is for the best, so that we're generally prepared for Sepolia. Pari is out; when he comes back he can help us finalize some configs. Is there any reason not to do this in the next two weeks?

Not exactly that, but one thing we also mentioned on All Core Devs is that we should launch the Sepolia beacon chain and run it through Altair ASAP, but not necessarily Bellatrix, so that it's in the same state as mainnet and we work through the process. Yeah, okay, I think we'd probably all agree on that.

So when Pari gets back we'll propose a date and a config, approximately in the next couple of weeks, and get it up. I think the validator set size is going to be on the order of a couple thousand, and it'll be permissioned.
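As a concrete illustration of the kind of parameters being referred to (hypothetical placeholder values, not the final Sepolia config), the idea above of running through Altair quickly while leaving Bellatrix unscheduled would look roughly like this in consensus-layer config terms:

```python
# Hypothetical sketch of the discussed Sepolia launch parameters, using the
# standard consensus-layer config keys. All numbers are placeholders.
FAR_FUTURE_EPOCH = 2**64 - 1  # the spec's sentinel for "not yet scheduled"

sepolia_config_sketch = {
    "MIN_GENESIS_ACTIVE_VALIDATOR_COUNT": 2000,  # small, permissioned set ("a couple thousand")
    "ALTAIR_FORK_EPOCH": 50,                     # run through Altair shortly after launch
    "BELLATRIX_FORK_EPOCH": FAR_FUTURE_EPOCH,    # leave Bellatrix unscheduled, mirroring mainnet's state
}
```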
Okay, any other things on the beacon chain? Terence, I think, is in charge of naming beacon chains.

Can I just clarify something, not on naming: have we decided on Goerli prior to the merge, before Sepolia, or Sepolia first? Because I've heard both recently.

Yeah, so the rough feeling I had was that we would do Goerli first, and the reason was that we'd get more data out of it, because it's a network with more activity and there are more people running validators on Prater. The only argument I could see for doing Sepolia first is if we do Sepolia when we're not quite ready, when we don't have code that's ready for mainnet but we do want to get another testnet run in somehow. What goes on Goerli should probably be extremely close to what goes on mainnet, because it's what most users will use and test on. So that's the only reason I could see to do Sepolia first: if we want another run on a testnet with code that's maybe not quite ready for mainnet yet.

Yeah, I find that argument compelling, actually, just to be able to keep things moving, and to save Goerli for last; no matter what, the last testnet is going to have the code closest to what's run on mainnet. So I buy that argument. And we do do Goerli and mainnet shadow forks, which help us understand some of the things that come out of that. Obviously, something you mentioned is actually having a much more open validator set and having validators and stakers test this stuff at scale; that's definitely one of the big things that comes out of Goerli.

Right, and we've had that on Ropsten as well, so Sepolia would be the only one where we don't have community validators. I also like the idea of pushing Goerli off, which gives us a little bit more time to make sure the code is close to production.

I like the idea, but I wouldn't want to sign up for a guarantee that the release for Goerli is the exact same release that we later recommend for mainnet. Oh yeah, I don't think you'd want to guarantee that, but you'd want high confidence that it's as close as possible. But we definitely would not frame it to users as: download this for both Goerli and mainnet. Agreed all around; I saw Micah had a thumbs up as well.

Was this touched on in All Core Devs last week? Not the order, no. Okay, we can do a round of communications over the next few days and see if we do want to swap. I guess nothing has been made quite official here, but nonetheless I think we should get this Sepolia beacon chain out soon, just so that it's out and ready for our use no matter the order. We'll circulate a suggested date and configuration, probably in about a week when Pari gets back, and we can finalize it then.

Okay. Something Pari pinged me about this morning was the Ropsten TTD and the discussions around the choice of it. I think we need to choose that by Monday. So we need to choose that now? No; I have a very strong preference for a number.

The thing I think probably makes sense is: pick a number (we've had someone on our team at the EF, Mario, look into it), communicate that number with the folks who run validators, like the client teams and the testing teams, make sure all of those are upgraded, and then publicly communicate the number, so that in the worst case it doesn't affect the network if somebody decides to mine up to that TTD. So what I would suggest is: we have a number suggestion, and some hash rate assumptions around it. Right after this call we can send it to all the client teams to make sure there are no major objections. Then, once Bellatrix hits, which is tonight, about 10 hours or so from now, we run a TTD override on the validators that are controlled by client teams and the EF. And then tomorrow, basically exactly 24 hours from now, we publish the number, so everyone has a chance to upgrade.
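For intuition on the "hash rate assumptions" part: total difficulty accumulates at roughly the network hash rate, so the expected time to reach a chosen TTD can be estimated on the back of an envelope as below. All numbers here are made-up placeholders, not the real Ropsten values, and the actual choice came out of Mario's analysis, not this formula.

```python
# Back-of-the-envelope estimate of when a chain reaches a chosen terminal
# total difficulty (TTD). Total difficulty grows at roughly the network hash
# rate, since difficulty retargets so that difficulty / block_time tracks it.
# All inputs are illustrative placeholders.

def seconds_until_ttd(current_total_difficulty: int, ttd: int, hashrate: float) -> float:
    """Expected seconds until total difficulty reaches `ttd` at a steady hash rate."""
    remaining = ttd - current_total_difficulty
    return remaining / hashrate

current_td = 43_000_000_000_000_000   # placeholder current total difficulty
chosen_ttd = 50_000_000_000_000_000   # placeholder TTD
hashrate = 10e9                       # placeholder: 10 GH/s

days = seconds_until_ttd(current_td, chosen_ttd, hashrate) / 86_400
print(f"~{days:.1f} days at the assumed hash rate")  # ~8.1 days here

# If miners pile on hash rate once the number is public, this shrinks
# proportionally; hence picking a TTD far enough out, then adding controlled
# hash rate to hit it on the desired schedule.
```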
Obviously, as soon as we publish the number, there might be some incentive for people to mine toward it. We've purposely chosen something that Ropsten should not hit by next week, but that we can then add hash rate to in order to accelerate it ourselves. The goal would still be for it to be hit sometime later next week, but given the current hash rate on Ropsten, it's targeted for much farther out than that.

Okay, so the quick version is that we plan on circulating a number today and releasing that number publicly tomorrow morning. Yes; and obviously this assumes that Bellatrix goes live without a hitch tonight and there are no issues there.

Okay. And this will all be a TTD override via the CLI; this will not be cut into releases. So, at least from the Ethereum blog, we're pointing people toward the latest releases for Ropsten and then telling them they need to do a TTD override on those releases. Got it.

Okay, any other questions or comments on how we're going to handle the Ropsten TTD and fork over the next week? Do we expect a release with this new TTD value after the announcement? The only reason I would see for that is: if I run a node on Ropsten, say, two months from now, does that node still need the right TTD as part of its config after the merge has happened? It kind of depends on how it's synced. If you're syncing from genesis, or from a state from before the merge, you definitely need it; if you're syncing from after, then it's not going to be relevant.

Right. So what I would suggest is this: right now there are client releases with a TTD in them for Ropsten, and whenever the next client releases happen, they should update that value to whatever the actual TTD ends up being. But I wouldn't rush releases just to have that value ASAP; we should just communicate that people need to do an override. Yeah, I agree. I think it's a good idea in general to get people used to the idea of overriding the TTD. It's good practice, a good skill to have.

Oh, and one last note on that: I shared this in the All Core Devs chat, but I'll repost the HackMD here. I'm using a HackMD that was put together last week by Marius, which covers how to change the TTD on every single client; I posted it in the chat here. If there's something missing or wrong for your client, please send me a message and I'll make sure the blog post doesn't end up with the wrong information. All the values in the document have a fake TTD, so that'll be changed everywhere, but generally the command, the flag name, all of that: it would be good if you all double-checked that it's accurate. Okay, thank you.

Anything else on the Ropsten TTD, the Ropsten game plan? Excellent.

Any other discussion points related to the merge? This is Ropsten-adjacent, and I know Pari's not here, but I think the question has come up: once Ropsten has merged, can we deprecate Kiln? Just because I would love to save some money if we can. I don't think we have a comfortable conclusion right now.

Something to note: is it a significant cost to keep Kiln up until the mainnet merge? Because we do have a bunch of applications that have deployed on it to test stuff, and they might not already have deployments on Ropsten. If it's not a significant cost, I think keeping it until the mainnet merge and deprecating it really close after that would be a bit better. Oh, I see; I wasn't aware there were applications on it. If that's the case, then yeah, I'd be happy to keep it up.
Is Kintsugi still up? I don't even know; that one, I think, we can deprecate literally today. But for Kiln itself, there are at least five or ten big applications that I'm aware of. I'm not sure if they're still actively using it, but they've deployed on it, and that makes it easier for others to then come and deploy and try stuff. So I would keep it until the mainnet merge. And if you're listening: Kiln will be deprecated at the mainnet merge. Might as well start signaling that now.

Other discussion points around the merge today? Someone mentioned on Twitter today that they haven't been able to add validators to Ropsten since genesis. Has anyone successfully done that, or can someone quickly check that it's possible? I mean, if eth1 data voting was borked, that would prevent, or at least slow down, deposits being processed. Well, it depends: was there still a majority consensus even when there was an issue? Did anyone follow that closely? I don't know if there was consensus, sorry.

Okay, the Ropsten beacon chain is showing zero new validators and zero pending validators, so it looks like the beacon chain can't agree on the deposit contract state at all. The fixes in Lighthouse and Prysm: are those released onto those nodes yet? My last understanding was that we were voting correctly, but I've been the reviewer rather than the lead on this, so my knowledge is patchy. Yeah, ours is as well.

Okay, let's circle back on that right after the call. We need to make sure that if those fixes are out, nodes are agreeing on a value at this point and inducting new deposits, and if that's not the case, we should fix it. Because if people can't come to consensus on this value, then deposits cannot be added to the beacon chain, and that's almost certainly what's going on here, which is not great. Can somebody investigate how many deposits were made to the Ropsten beacon deposit contract? Just curious. Someone can look into that while we move on.

Would the point-one fix cover it? It might, but there might be more needed; we'll see. Our resident expert, the eth1 data expert, is on vacation. Hey, that's almost a full-time job. Of all the things we've designed, I would never have expected this to be a major source of error, but it has been, while other seemingly more complex things work flawlessly all the time. It is the grossest, wonkiest mechanism we have, though; everything else is, you could argue, more elegant.

Okay, so there have been 376 transactions to the deposit contract. We can clear that queue pretty quickly once we begin voting correctly, but there's also the follow distance to contend with. I think if we can patch this up in the next day, these validators will be inducted before the merge. Let's try to do that.
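Rough arithmetic on "clearing the queue quickly" versus "the follow distance to contend with", using the mainnet spec constants (a sketch for intuition; activation churn and the eth1 voting period add further delay on top of this):

```python
# Sketch: the deposit backlog itself clears fast, while the eth1 follow
# distance dominates the delay. Constants are the mainnet spec values; 376 is
# the deposit count mentioned on the call.
MAX_DEPOSITS = 16               # max deposits processed per beacon block
SECONDS_PER_SLOT = 12
ETH1_FOLLOW_DISTANCE = 2048     # eth1 blocks
SECONDS_PER_ETH1_BLOCK = 14     # assumed eth1 block time

pending = 376
blocks_to_clear = -(-pending // MAX_DEPOSITS)  # ceil(376 / 16) = 24 blocks
print(f"~{blocks_to_clear} beacon blocks (~{blocks_to_clear * SECONDS_PER_SLOT}s) to drain the queue")

follow_delay_hours = ETH1_FOLLOW_DISTANCE * SECONDS_PER_ETH1_BLOCK / 3600
print(f"~{follow_delay_hours:.1f}h before a deposit is even old enough to be voted in")
```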
Okay, other merge-related discussions? There was a shadow fork yesterday. Yeah; I think there were a couple of EL clients which still had the issue where they struggled to produce blocks with transactions in them, so they would just return an empty block. Off the top of my head I think it was Besu and Erigon, but I'm not 100% sure. It was, but I'm not sure if it was every combo or just specific combos.

Pari sent some details. I think for Erigon it was every combo; for Besu it was every combo except Lighthouse; and for Nethermind there was one combo, with Nimbus, that didn't work, although it's not clear for Nethermind whether that's just because it didn't happen yet or whether there's actually an issue. I assume Nimbus has an issue with timing, so that's the reason, right, rather than a Nethermind issue. To be honest, we prepared something similar to Geth, where we wait for some amount of time and then prepare the payload, but that only masks the issue, it hides it; it still needs to be fixed on the CL side. Okay, so there's still the issue where there's almost zero lead time between the payload-preparation call and the get-payload call on a number of CLs; I think on Nimbus, from my observation, though of course it would be good to check.

Right, okay. Any other comments on mainnet shadow fork six? I think there was a problem with, I'm not sure, definitely not Nethermind, probably Erigon, and participation was lower, so I'm not sure what happened. Maybe Pari or Marius will share details, and neither of them is here. All right; if there's any follow-up discussion on the shadow fork, we'll take it to Discord.

All right, any other merge-related items for today?

I did want to give a little bit of airtime to the reorg that happened on May 25th. I think the core of this was an update to the fork choice that was deemed safe to roll out continuously, but then an additional bug in the spec was found that compounded the issue. Caspar, or anyone that's been close to this: do you want to give us, and anyone listening, some familiarity with what happened a week ago? I think a lot of people here are familiar with it already.

Sure. So essentially the initial setup was that two blocks arrived at virtually the same time. Both blocks accumulated roughly the same weight, and the validators were roughly split in half between running the proposer boost fork choice and not running it, with slightly more than half not running it. The problem was that six block proposers in a row were running proposer boost, but because of a known bug where proposers don't re-run the fork choice before proposing, they essentially falsely attributed the boost to the preceding proposal instead of just looking at the attestations themselves. So because we had six proposers in a row running proposer boost and not re-running the fork choice before proposing, they kept extending the slightly less heavy chain, and eventually one proposer was not running proposer boost, therefore saw the heavier chain as leading, and that concluded the seven-block reorg. As Danny already mentioned, it's an unfortunate situation where validators were split between running proposer boost and not running it, compounded by this bug of not re-running the fork choice. If proposers had been re-running the fork choice, this wouldn't have happened.

Right. And our assessment when we rolled it out was that although this might lead to a split view on the order of a slot, it would quickly be resolved. But the compounding with this additional bug on the timing of when to run the fork choice meant that assumption was totally broken.
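For anyone listening who's less familiar with the mechanism: a simplified sketch of how proposer boost skews fork choice weights, and where the stale-head bug bites. This is an illustration with placeholder numbers, not the spec's actual get_weight / get_latest_attesting_balance logic.

```python
# Simplified illustration of proposer-boost weighting in LMD-GHOST fork
# choice. Numbers are placeholders; the real boost percentage is the spec's
# PROPOSER_SCORE_BOOST constant.

def branch_weight(attestation_weight: int, is_boosted: bool,
                  committee_weight: int, boost_percent: int) -> int:
    # A timely block from the current slot's proposer gets a temporary bonus,
    # expressed as a percentage of one slot's total committee weight.
    boost = (committee_weight * boost_percent) // 100 if is_boosted else 0
    return attestation_weight + boost

# Two near-simultaneous blocks with roughly equal attestation weight:
weight_a = branch_weight(1_000, is_boosted=False, committee_weight=500, boost_percent=40)
weight_b = branch_weight(990, is_boosted=True, committee_weight=500, boost_percent=40)
assert weight_b > weight_a  # a boosted view prefers B; an unboosted view prefers A

# The compounding bug: a proposer that reuses a head computed earlier in the
# slot, without re-running fork choice at proposal time, can keep crediting
# the boost to the wrong branch, so a run of such proposers extends the
# lighter chain until a proposer with a fresh (or unboosted) view reorgs it.
```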
So there's a bit of a discussion here on how we roll out fork choice changes: one, ensure that we do an analysis on the safety of how to roll them out, and whether that should be at a coordinated point, not necessarily a hard fork, but essentially telling everyone to update their nodes for this event and to enable the change at one point. And then the other question is how to account for potential unknown bugs when we're doing such an analysis; the primary reason given in the analysis was that this would not lead to long-term splits, absent this other bug.

It's very nice to not have to manage two code paths that are conditional on some epoch, so I would potentially argue for being able to do continuous rollouts in the future. But I guess the one thing to note here is that you don't have to manage these code paths in perpetuity. Like a hard fork, when there's a logic change, after you pass the epoch you can, on the next release, eliminate the old logic entirely. So even if we do go to the point where we do these at a coordinated point, you have two logics in one place, but you don't have to maintain them in perpetuity. Which is nice.

Okay. Any other comments, questions, or thoughts on this? Thank you, everyone, for digging deep and figuring out precisely what happened there. I guess my recommendation would be, when we run into something like this again, to make sure we discuss it here and write down at least our analysis of why we're making the decision to roll out on a continuous basis or at a coordinated point.

Okay, I see a question about SECONDS_PER_ETH1_BLOCK. This is an estimation used to get to an approximate depth, and it doesn't actually matter for agreement: if there were, say, three blocks in that range, you could still agree on the block even though the depth wasn't exactly what you estimated. So the problem isn't actually related to the spec; a different value would just change the number of blocks you're digging through, or the depth you get to, but it doesn't change the ability of nodes to agree at that depth. It did used to be a precise block depth, if I remember correctly, but it was simplified to the seconds-based estimation for simplicity reasons; I would have to pull up some old issues to see exactly why. Yeah, it was ultimately because we have to make assumptions about the timestamps anyway, so we decided to try to only make assumptions about timestamps. I'm not sure if that was right or wrong, but it made sense at the time.

Okay, so if SECONDS_PER_ETH1_BLOCK were a different value, it wouldn't solve the problem? Right. Think about it like this: if SECONDS_PER_ETH1_BLOCK is 15 and you want to get to roughly a thousand blocks deep, but on the network the actual time between blocks was 30 seconds, then you would only get to 500 blocks deep. But you can still come to agreement with each other even if that average is off. Again, the problem emerged in that there were assumptions made about which blocks needed to be downloaded and investigated, with some margin of error, and we were outside of that margin of error on the block times.
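The 15-versus-30-second example above, worked through (numbers taken from the discussion, purely illustrative):

```python
# Worked version of the example above: the timestamp-based lookback lands at a
# shallower block depth when real block times exceed the assumed average, but
# every node lands at the same timestamp-defined point, so agreement holds.
assumed_block_time = 15      # a SECONDS_PER_ETH1_BLOCK-style assumption
target_depth_blocks = 1000   # intended lookback, in blocks
lookback_seconds = assumed_block_time * target_depth_blocks  # 15,000s window

actual_block_time = 30       # what the network actually did
actual_depth = lookback_seconds // actual_block_time
print(actual_depth)  # 500: half the intended depth, but still a shared target
```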
Okay, cool. Thanks for the reorg chat. We linked Barnabas's discussion and visualization of the seven-block reorg; it's very good if you want to take a deeper look.

Okay, are there any client updates people would like to share today? Any research or spec discussions?

I would highlight one thing, which is that we've started posting a bunch of PRs on light clients. Two things have happened: there were discussions in Amsterdam around light clients in general, and a lot of that feedback has been incorporated into those PRs now, so I think they're getting to a point of being really nice. The approach we're taking right now is to split the work into slightly smaller pieces, get reviews on those smaller pieces, and get the ball rolling.

As for what we're using it for: one cool use case that has come up is that if you're not running a validator, you're just running some random infrastructure and you want to read your state at some point, you don't really need to run the full consensus protocol. To that end, we've developed a little standalone application that uses the light client protocol to feed an execution layer with, what's it called, forkchoice updates and the blocks. It's really tiny and uses very little bandwidth, which is nice. Of course, that depends on these PRs getting some attention as well, and then eventually on people running with the light client protocol enabled. So I wanted to plug those PRs; they've been really interesting for these use cases in particular, and I'm hoping to get some eyes on them. I will put my eyes on them, and I ask a few others to do so as well.

Okay, any other discussion points for today, research, spec, or otherwise?

Okay. I would encourage teams to send a person to All Core Devs for the next couple of calls, because I think we're going to continue talking about timing, testnet launches, and other things that we generally need agreement on from both sides. So if you can, please join us there, and I will encourage them to join us here.

Okay, thank you everyone. Take care, have a good one. Thanks, everybody. Thanks, everyone.