Okay, yeah, welcome everyone. Danny is not here today, so I'll be facilitating the call. Let me pull up the agenda. I know the first thing was the devnet, and it seems like it went pretty well. There might be some minor bugs with block production and the sync aggregates, but otherwise it looks like we successfully forked the devnet and are building a chain. Was there anything in particular anyone wanted to add to that right now?

We probably want to have a chat offline about whether we want to fully declare that a success or not. It did pretty well, but I'm not sure it was an A-plus, so maybe we'll have a chat offline about whether we do another one.

I mean, I think we definitely want to do another one, but it's still exciting to see progress even from the last one.

Yeah, totally, well done everyone.

We did deploy some kind of last-minute fixes, and I guess those didn't go that well; I've seen a lot of issues. So that's still on our side.

Okay. We will circle back to Altair after the client updates, so I'm sure we'll have another touch point for that. Then, would anyone like to kick us off? Let's see, I think usually we have a randomized list, so let's start with Lighthouse.

Hello, everyone. We've been working a lot on our 1.5.0 release. We published a blog post last week which details the features it will contain. Altair progress is coming along well; 1.5.0 will have Altair testnet support out of the box. We're also working on testing some new upgrades to our networking stack. There's been one last rare bug we've been hunting, but we luckily managed to get a backtrace on it in the last couple of hours, so we'll be climbing through the function calls and tracing that one down.
So that's a good sign. You can expect to see a 1.5.0 release candidate in the next week or two. After we get 1.5.0 out, we'll start working on the next release, and that should contain weak subjectivity sync, a remote signer, and some nice CPU savings; we're looking at perhaps on the order of a 40% reduction in CPU usage on Prater. So that's it for us.

I'm excited for the weak subjectivity sync. I was syncing the chain from scratch the other day and it took a while.

Yeah, I know, it hurts. I think us developers are going to be the ones benefiting from it the most, because we can just spin up nodes quickly.

All right, great, looking forward to it. Let's see, let's go to Nimbus next.

Hi, can you hear me? Yep. Perfect. So in the past two weeks we had a couple of people at EthCC. Besides that, on the dev front, we did a lot of updates to get a properly working validator client and REST API, and of course Altair work, so that the current testnet goes well. And I would like to mention that we are starting to work on weak subjectivity sync as well.

Great, very excited. Next let's do Prysm.

Hey guys, Terence here. So we released version 1.4.2 last week. It has the awesome doppelganger feature, so I encourage people to try it out. It also, most importantly, contains updates for the London fork, so don't forget to update before next week. Other than that, most of our resources are on Altair optimizations; we have been doing lots of internal refactoring to bring all the Altair endpoints into one place. Aside from Altair, we're mostly just doing bug fixes for the eth2 API and the slasher. And yeah, that's it, thank you.

Great, and Terence, the eth2 API updates, are those in the release yet?

Yeah, those are in the release, and we encourage people to try them out. A lot of people have been trying them out and opening issues, so that's awesome.

Great.
Yeah, I'm excited to see it. Okay, how about Lodestar?

Hey, everyone. I'm very excited to share that we finally have two validators running on mainnet, and they are doing just fine; we're getting almost 95-96 percent of the total possible rewards, so we're super excited to join the club. On the side, we have our light client prototype functional, and we hope to deploy it to a proper domain soon. Cayman demoed it last night in Toronto and it went super well. Along that line, we'll continue to do research on other styles, because this one is REST-based, so we're excited to try other strategies. Besides that, we added support for the eth1 fallback functionality, and we are working hard on lowering memory consumption; thank you for the help in that regard. That's it, thank you.

Yeah, very exciting. So you have validators on mainnet now; we have another client to add, and that's great for client diversity, so it's very good to see the progress on that. And next we'll do Teku.

Yeah, hi. So we put out our 21.7.0 release this week. It's got mostly just a few bug fixes, a couple of things we've mentioned before that are now actually in the release. The main one is a file handle leak, down in libp2p, a bit of a corner case there that slowly leaked file handles; so that's cleaned up. Coming up, still in the development branch, is a whole bunch of changes to discovery. We've done a lot of work there to be more standards compliant, have more nodes in the node table, and kind of do it all nicely; that's looking pretty good. And we're starting to investigate improvements around how we store historical state so that we can query it faster, plus a whole bunch of little cleanups. Otherwise, probably the only thing is that we now support the contribution-and-proof event on the events endpoint, so you can track those as they come in as well. I think that's us.

Let's see, and then, yes, sorry, did you have anything you wanted to add?
Grandine, right? Yes, so this is the Grandine team. We worked on various small fixes and optimizations, and probably the biggest one was an improvement to our attestation packing algorithm. Another major thing is that we proceeded with the experiment that I talked about last time: we will try to run multiple forks at the same time. It's taking a bit longer than we expected, but hopefully we'll have some results after a couple of weeks of running the client. Otherwise, I think this will need another two weeks. Basically, I would say it's a major thing, and if somebody wants to experiment with something like this, it's probably much easier to take an existing client that does hard forking in the regular way and just run that client in two modes; so we have two separate clients instead of one client running the two hard forks. That's probably all for us.

Great, thank you. And let's see, was there anyone else? I think I got everyone; we keep adding more clients. Okay, then in that case we can move to talking about Altair. So there was a devnet this morning, and it seems like things went pretty well, but there are still some places for improvement.
In the meantime, on the spec side, we've had one release, kind of two releases, in the last couple of weeks. The changes improve testing, improve the aggregator count for sync committees, and also tighten the gossip validations. The most recent release, beta 2, was, I think, supposed to be in the devnet this morning, but maybe didn't quite make it in. Either way, these should help harden the sync committee duties on the network and lead to a better chain. And, like I said, there's lots of testing with both those spec releases, so clients, please take a look at that if you haven't already.

So, yeah, next week we can move to planning. Like Paul kind of said earlier, perhaps we move to a more asynchronous thing here, but does anyone have any thoughts there? I think we wanted to see how devnet two went, and it sounds like we want a devnet three, so that might push back Altair itself, but that's something we all need to decide.

I'm open to an Altair devnet three next week, if Pari's up for it.

Yeah, that sounds good to me as well. But does this mean we decide today when we want to do the Altair fork, and then decide to abort it if next week goes badly, or do we just take a call in two weeks?

We could probably talk more asynchronously. I don't really expect devnet three to go poorly, but I guess we'll have to see how it goes.

Yeah, I mean, we're consistently seeing the transition work well.
The piece we're not seeing is perfect inclusion rates on sync committee signatures. I think I'd be tempted to push it out to Pyrmont at this point and get that feedback, given that we want to kill Pyrmont after this anyway. So if we really decide we've got to make massive changes, it's not the end of the world, but it gives us a somewhat more realistic use case to see how inclusion goes on sync committees and so on.

Yeah, I support that as well; that's probably pretty good. I wouldn't mind. We're getting rid of Pyrmont anyway, so we might as well do that to it.

Something to consider, if we do it with Pyrmont, is that we're getting users to run it now, so we have to make sure it's all included in releases and things like that. Just something for everyone to consider.

And I hope that I will be able to use the next few days to knock out some of these issues.

That would be great. We should definitely keep an eye on the sync committee aggregation, like Adrian mentioned. And yeah, generally I think the progress is going well, so we'll just keep pushing forward.

Yeah, in terms of the fork for Pyrmont, I think it needs to be probably at least a couple of weeks out anyway, because we need to get the Pyrmont config updated in each client and into a release, so everyone has to get a release out, and then we've got to convince users to upgrade, or we'll just go into non-finality and kind of cause chaos just because people didn't upgrade, rather than because there's a problem with the fork. So we want a fair bit of lead time, I'd say.

Was there a tentative date for forking Pyrmont already?
No, I think we're going to try and decide that on this call.

What jumps to mind for me is a month, or three or four weeks.

Yeah, I think the minimum would be two; three is probably better.

Yeah, I think two weeks works; three or four weeks works for us, since we still have to merge all the hard fork changes into our master branch.

That was a vote for four weeks, was it, Prysm?

Terence here. I would say between three to four weeks, yeah, so three is fine too.

Okay, it sounds like there's rough consensus for three. Obviously, if something comes up we can reevaluate, but the sooner we move to Pyrmont, the sooner we get to mainnet.

Yeah, totally. If it's useful, I don't mind spinning up another devnet next week. It's fairly low input on my end; maybe I'd just need to stay up for it. But we could spin one up if that's going to be useful to anyone, if they have any problems. I'm happy to do that.

Right, and that may be helpful, but it does seem like the forking part is going well, so then maybe we can just keep devnet two running for these various debugging issues.

Yeah, sure. I guess if anyone would very strongly prefer a devnet three, then let's chat, but I guess we'll just see if there's demand.

Okay, so it sounds like maybe we don't need devnet three. We'll look more at devnet two from this morning, get a better idea, and discuss that asynchronously. And it sounds like there's rough consensus around a Pyrmont fork in three weeks, once we've gotten some more debugging updates and releases into clients, and then pushed that to users so everyone's aware; then we can do that. Let's see, from here
I'll move on to other types of updates. Was there anything with Altair that anyone wanted to raise before we move on? Okay. In that case, let's move on to research updates, or updates on the spec generally. Does anyone have anything to add here? Okay, sounds like no, in which case we'll move on to our next topic, the merge updates. Does anyone have anything to present here? Okay, not to call you out, but maybe you have something to update there?

I can give a short update on that. First, EIP-3675 has been merged recently; thanks a lot to the EIP editors for giving it a green light. It's in draft status anyway, so it will definitely be updated further, and more clarification will be added on demand. There is already a fallout discussion in the PR thread regarding some points, but anyway, it should already be considered by client developers as the thing that will be implemented, and it would be great if they started to take a look, in order to facilitate further development of the spec.

Also, on the beacon chain side, the merge spec has been rebased onto Altair; many thanks to Proto, who did this job. I've also opened a pull request which rebases the spec onto London. It actually adds the base fee per gas field to the execution payload and adds a couple of verification rules for the gas limit and the base fee. It's open now, and it's pretty straightforward, so I guess it can be merged relatively soon. Also, there is an open PR for the p2p interface for the merge spec as well. So that's all on the merge side from me.

Cool, thank you. Yeah, it's very exciting to see all the EIPs and the continued progress on the beacon chain spec. It sounds like that'll be the thing we focus on after Altair, so it's good that we're getting into it already. Okay, we can move to general spec discussion. Anything else anyone wants to bring up?
Is there anything anyone would like to discuss just regarding forking Altair?

It might be handy, and maybe EthStaker could be involved in this or something, but it might be handy to have a resource that's just a table of all the clients and which version you need to be on for the Altair fork, for Pyrmont that is, and whether or not that version has been released yet; kind of a canonical source of info. I don't know if there's anyone from EthStaker on the call, or anyone who's interested in doing something like that. I'm just thinking it might be good to start getting the word out ASAP that you're going to have to upgrade your client, because we're going to fork Pyrmont with Altair, and this is where you should look for updates.

We are planning a blog post about that for Prysm, but I do think it may be beneficial if something comes from EthStaker or the EF, for a more general post.

Yeah, the more channels the better.

Yeah, we'll blog post it as well; we'll make lots of noise about it on our end too.

Yeah, definitely a great idea, something we should definitely do. And yeah, the more channels the better.

I have a quick question to the client teams about the weak subjectivity sync. What is the rough target date for it? Is it around Altair, or after?

I think one client technically already has it. For us, probably before Altair, I would say, maybe in the coming month or so. Not sure about anyone else.

For Prysm it'll be after Altair, probably a month after Altair-ish.
Yeah.

Lodestar has already had it implemented since a month ago.

We've also made some progress on this, and we plan to finish it within the next month or so. The client is generally ready, in that it already tolerates loading the state from an anchor state, but I would like to ask the other teams: what were the main things that you found in the weak subjectivity implementation? As far as I know, you need the back-sync of the old blocks from before the state that you load; is there anything else that needs to be done during weak subjectivity sync?

One of the surprising things is that you need to check the signatures when you're doing the back-sync of blocks, because they're not included in the hash. Otherwise, I think it was fairly straightforward, as much as anything of this is, but nothing surprising; it's just being able to start from the state and go forth from there.

When you do the syncing in reverse, do you have some mechanism which actually verifies that you end up at the state you just loaded?

Yeah, so we work backwards, but we work backwards in batches. So we request, I don't know what the actual number is, but say a hundred blocks at a time, from the state we started with, a hundred blocks back, and we check that it matches, and then we kind of keep walking backwards.
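As an aside for readers, the batch-wise backward verification described here can be sketched roughly as follows. This is a hypothetical illustration, not any client's actual code: `Block`, `fetch_batch`, and the batch size are made up for the example, and real clients additionally verify each block's BLS signature, as discussed later in the call.

```python
# Sketch of checkpoint (weak subjectivity) back-sync: starting from a trusted
# checkpoint's block root, request blocks in batches and verify the
# parent-root chain back toward genesis. All names here are illustrative.
from dataclasses import dataclass
from typing import List

BATCH_SIZE = 100  # e.g. "a hundred blocks at a time"


@dataclass
class Block:
    slot: int
    root: bytes         # hash tree root of the (unsigned) block
    parent_root: bytes  # root of the parent block


def verify_batch(batch: List[Block], expected_root: bytes) -> bytes:
    """Verify a batch, ordered newest-to-oldest, against the root we trust.

    Returns the parent root of the oldest block, which becomes the expected
    root for the next (older) batch.
    """
    for block in batch:
        if block.root != expected_root:
            raise ValueError(f"block at slot {block.slot} does not chain to checkpoint")
        expected_root = block.parent_root
    return expected_root


def backfill(checkpoint_block_root: bytes, fetch_batch) -> None:
    """Walk backwards in batches until reaching the genesis block,
    whose parent root is all zeroes."""
    expected = checkpoint_block_root
    while expected != b"\x00" * 32:
        batch = fetch_batch(expected, BATCH_SIZE)  # newest-to-oldest
        expected = verify_batch(batch, expected)
```

The key property, as the speaker notes, is that a batch from a wrong branch fails the parent-root check immediately, so you never download far down a bad chain.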
It's a little optimistic, in that we get a few batches from different peers at the same time and then check that they line up, and that kind of stuff. Ultimately, it lets you not download too many blocks from the wrong branch before you discover that it doesn't actually line up with the state you've got.

And then do you check that you are at the genesis that you expect?

No, we treat the state we started from as definitive, because you can put anything you like in the blocks, ultimately. The only reference point you've actually got is the hash in the state you started with.

Okay.

Right, you have to check signatures, but as long as you own one validator, basically, you can get something that lines up all the way back to the genesis block, because you just sign a block that has the parent hash of genesis, and it'll look completely valid; all the hashes line up.

Okay, but theoretically, I believe it would be possible to make a different chain. Let's assume you've got a wrong snapshot from an attacker; you would probably be able to end up with a different genesis, not the main one or the community one. This is theoretically possible, right?

It is possible that the attacker would be sloppy and make it obvious that they've led you astray, but more likely they would just line it up to the genesis, so the check doesn't give you any extra security.

Ah, okay. Interesting. I think the solution would maybe be to have some checkpoints between the expected genesis and the snapshot. As long as these checkpoints are burned into the client, and we assume that the clients are not adversaries, then probably that would work.

I don't think so; it's kind of the same thing. If the checkpoint you have is within the weak subjectivity period, then yes, you could verify fully from there, though you'd ultimately want the state from there,
so you can actually verify the block transitions. But the checkpoint state you're starting from is the checkpoint; it is the one known state that you're being told is on the chain. You can't easily trust stuff before that, because you might have had validators that have exited and withdrawn all their funds and then signed a completely valid-looking chain. So there are a number of heuristics you can do to start detecting that, and so on, but ultimately the key thing is that you want to start from a state that's known to be valid, within the weak subjectivity period. You can do that either with a state or with a root hash, and where you get those from is kind of the big question in all of this. But it is the checkpoint you're starting from that's giving you security, and checking back from there is kind of less useful, because all your transitions are from the state you started from anyway.

And if you try to sync from the genesis, I mean the other way around, not backwards but forwards, is there any benefit in terms of security there?

No; then you can be led astray by validators that have exited, because of weak subjectivity. I mean, not enough validators have exited yet, but theoretically it could have happened by now, I think, and certainly at some point in the future it becomes a possible attack: you can have a chain made up of signatures where you have nothing that's slashable, which makes two completely valid chains. One of them is the canonical chain, because it happened in real time, and one of them came along afterwards. Unless someone tells you which is the canonical chain, or you start applying heuristics like "I can find more nodes on this chain"...

Yeah.
I remember this discussion from some time before.

Yeah, we had this, okay.

Okay, so basically you just sync backwards, and that's all in terms of checking, plus the signatures?

Yeah, and the rest of it is just details, like making sure that your client is able to handle REST API requests from before you have any states; there's a whole bunch of REST APIs that you can't answer, because you don't have a state. That was particularly natural for us, because we have a mode that will prune any finalized states anyway, to save disk space.

Is there any case where that would affect the process of the sync?

No. The operation, in terms of tracking the head of the chain, performing validator duties, and participating in the network, doesn't need any states prior to the latest finalized one. You do need to backfill those blocks at the moment. Hopefully, once all clients support checkpoint sync, then we can not necessarily download all the blocks, but there are then questions around who is storing them, and whether they're going to be lost forever, kind of thing, so there are some other problems to solve there.

Okay, so to summarize, there's not much magic: you basically just load the state and sync the blocks backwards, and that's pretty much all.

Yep, correct.

Okay, thanks.

Is there anywhere a detailed description of the algorithm for this sync, like starting from getting the checkpoints, going through the state loading, and so forth?

Not that I know of. Oh, okay, I was going to say, yeah, nothing written down, but basically you just start from the state and then work your way backwards. But it is very important,
I would say, to backfill. I think it's even kind of on us to say that we have the norm that you should backfill, because, like Adrian was kind of hinting, you might have this huge problem where suddenly no one has the blocks, and I don't think we want to get there.

I mean, in terms of how it all works, the whole spec is designed so that you can take a state and a block and always apply it, so it's kind of nice that there's no real reason to start from genesis in terms of what you're having to find; you just don't need it. But yeah, absolutely, please do backfill blocks. Don't make it optional; don't even provide a flag to not do it; just do it. It'll be good for the network at the moment.

Yeah, that's our approach as well. But, I mean, long term it would be nice if people could run validators, or nodes generally, without having all the history always present.

Yeah, absolutely. There's a chicken-and-egg problem that's been going on for a while here, in that we didn't support checkpoint sync in a lot of clients because there was nowhere to get checkpoint states from, and it was a pain to start with. Now, if you're providing it, it's kind of centralized, which isn't ideal; we'd like more sources than just one, but at least it's a starting point. So as more clients start supporting it, it becomes more viable for everyone to do it, and hopefully there are more places to get it from. Hopefully then we start to address this problem of where we store old blocks, so that not every client has to have them. We kind of just keep working this problem until we are able to store just the very latest stuff in each running node and have reliable ways of getting the older stuff.

One thing: we do need to verify the blocks and block signatures when you're backfilling them. The block hash that's included, that is, the parent root of each block, is actually the hash tree root of the beacon block, not the signed beacon block.
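To make that point concrete: because the parent root commits only to the unsigned block, two signed blocks that differ only in their signatures chain identically by roots. The sketch below is a simplified illustration, not real client code: sha256 stands in for the SSZ `hash_tree_root`, the types are trimmed down, and real clients verify a BLS signature against the proposer's public key instead of the stub shown here.

```python
# Illustration: the parent root covers the *unsigned* BeaconBlock, so a block
# served with a forged signature still chains correctly by roots. Signature
# verification therefore has to happen separately during back-sync.
import hashlib
from dataclasses import dataclass


@dataclass
class BeaconBlock:
    slot: int
    proposer_index: int
    parent_root: bytes
    body: bytes


@dataclass
class SignedBeaconBlock:
    message: BeaconBlock
    signature: bytes  # NOT an input to the block root


def block_root(block: BeaconBlock) -> bytes:
    # Stand-in for SSZ hash_tree_root(block); note: no signature input.
    data = b"".join([
        block.slot.to_bytes(8, "little"),
        block.proposer_index.to_bytes(8, "little"),
        block.parent_root,
        block.body,
    ])
    return hashlib.sha256(data).digest()


block = BeaconBlock(slot=5, proposer_index=7, parent_root=b"\x00" * 32, body=b"x")
good = SignedBeaconBlock(block, signature=b"real-sig")
forged = SignedBeaconBlock(block, signature=b"forged-sig")

# Same root either way: chaining by parent_root cannot catch a bad signature.
assert block_root(good.message) == block_root(forged.message)
```

This is exactly why a node that skips signature checks during backfill could end up storing, and later serving, blocks with wrong signatures.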
So the signature isn't included in it. Most likely you're going to get the right signature and it's not going to be an issue, but it's possible that someone gives you the right block with the wrong signature over RPC. As long as you validate it somewhere along the line, that's fine, but you've got to validate it somewhere, to make sure that the signature is actually the one that matches that block. Otherwise you'll store the wrong signature and then essentially serve those wrong signatures to other nodes when they request them from you.

Yeah, so basically the block root doesn't cover the signature.

Yes. There was a proposal to add an extra field to the beacon state to include the block signatures as well, so you could just go back with the state root verifying signatures, but it didn't make it into Altair. Maybe we'll include it one day; I think it's probably still open as a PR on the specs repo, if anyone's interested.

You would also need to recreate some states to verify those signatures, right?

No, you can do it without creating the states. In fact, there's no way to create older states when you start from before the checkpoint you start with, so you can't run the process backwards, basically. It's just verified with the validator's public key, which you can get from the current state, because we never lose them.

Oh, I get it. So you can't verify that the proposer was due to propose that block?

But the proposer index is in the hash, so if you're following the hashes back, then you've already verified that.

Right. Yeah, I got it. Thanks a lot for the clarification.

Regarding the history, I'm just wondering: will there be a big demand for the history of the beacon chain from before the merge point, after the merge? Maybe somebody was discussing that, because it looks like after the merge there won't be... I don't know.
Maybe stakers will like to see their past performance, but otherwise, as the history of the beacon chain up to the merge point doesn't have an execution history, don't you think that maybe after the merge clients will not sync the state from before the merge, and maybe this will become common behavior? Was there such a discussion?

I don't know if I've heard something like that. I would kind of suggest we keep the full history, as much as possible; I think we should have the norm that we keep the full history. Some of these things we've been discussing, like these longer-term projects around serving historical state, are super important; I'd say they're parallel streams of work. But until then, you just store everything you can, and maybe down the line there are different sync modes or pruning modes that drop state from before the merge, but we're not there yet.

I think, in general, if no client includes an option to prune old blocks, then most people will just use what's available. If clients start including options to prune old blocks, then I think you will see a lot of people using that option.

Yeah, absolutely, and that's what we've seen on the eth1 side: Geth stores all blocks by default, so most people have them available.

Yeah. This question came to me because I was thinking that for users it may be a bit hard, you know, to use, let's say, the API for past blocks. For instance, we have past blocks of the proof-of-work chain, and after a while there will be a block that has the same block number on both chains.
So there will be a block number, say number one thousand, that exists on both the proof-of-work chain and the beacon chain, and from a user perspective there will be some confusion about which block is number one thousand. We could go ahead like that...

I was going to say, we could just decide, from a social-consensus standpoint, that all beacon block numbers are plus 100 million or something, just to make sure we don't have collisions on the numbers, for UX purposes.

It is an interesting question, because mainnet is five million blocks or so ahead of where the beacon chain will be. So when the merge happens, the head block number will drop backwards by a few million if you count consensus blocks, while we'll still have execution blocks counting up from five, ten million, or whatever they're at, forever, so they'll always be conflicting. Yes, I think it is something where we just need to be clear, in the REST APIs, or in the JSON-RPC rather, about how you're referring to things. I don't know how much work has been done on this to date; it hadn't occurred to me until now.
So I might just be a bit slow.

Yeah, I was just going to add that it's almost a bit of an API issue, but the more I think about it, the more I do see the conflict. So one thing we could say is: hey, this is a slot number on the beacon chain, and then we keep the block numbers as they are on the execution chain. But yeah, that is a good point. You know, we're also going to have shard blocks as well; I guess people just have to get used to being specific about what type of block they're talking about. And rollup blocks; we'll just keep adding more kinds of blocks.

Maybe all block numbers get a prefix, basically, which indicates where they're from, and we just keep a list somewhere: you know, prefix zero is the execution client, prefix one is the consensus client, prefix two is a shard, and these prefixes are, say, a billion times the prefix plus whatever the actual block number is. That way, when you're seeing a block number, it's always a big number, but it always starts with kind of a hint as to what kind of block number it is.

I think prefixes, or some magic with the numbers, may affect the current execution layer; I believe there are some contracts that are using block numbers, maybe indirectly. But one other way that just came to my head, and this is not nice, but maybe it will contribute to the discussion: maybe during the merge we just roll the beacon chain block number, the slot number, forward so that it aligns with the last proof-of-work block. I believe that maybe in the future we will actually just forget the beacon chain history from before the merge, because I personally don't see much value in this history for users. Then we would have linear block numbers. So, just to repeat: during the merge, we just roll the slot number forward to the next number after the last proof-of-work block. This gives a very nice linear numbering. Just an idea.
Maybe it's not the best, but yeah, let's discuss it, maybe after this call.

All right, but skipped slots will get this thing to diverge anyway: with slots that have no blocks in them, execution block numbers will eventually fall behind the slot number, since slot numbers are much more like timestamps than block heights.

Yeah, unless we change the logic in the clients. Of course this would add extra complexity, but otherwise I think it would be possible, with complexity consequences, to handle this.

Which problem are you trying to solve? The height is already embedded in the beacon block, and if there's an index in the clients that maps slots to block heights, everything just works fine. So I don't think there's a code problem; I think this is just from a user standpoint. Users are going to end up incredibly confused when you have, you know, block number five versus seven, like, what is that? For users it would be nice if there's an easy way, when they're communicating with each other and when websites present data to them, to identify: oh, this is a consensus block, or oh, this is an execution block. Right, so a prefix to the number might work, kind of like Michael was suggesting.

Yeah, I mean, post-merge, either you're talking slots, which is how we number the consensus blocks, or you're talking the height of execution blocks. I mean, the place where this hits, where it probably hurts, is
Yeah, block explorers and things like that, a bit. But the biggest issue is going to be around the JSON-RPC, which is where the execution clients are really exposing this stuff, and they have the backwards-compatibility concerns. From a beacon node perspective, we're always going to talk in slots, or probably have some APIs that let you query by execution height, but that potentially maps to multiple beacon blocks. All the challenges in backwards compatibility and making this understandable to users are really going to be on the execution clients. So I wonder if this is something to bring up tomorrow on core devs more than here, because they've got the context and the actual ability to do something about it.

Just so we're prepared for that call: is it terribly unreasonable, or reasonable, at some point before the merge or at the merge, to just move the block number forward by a very, very large number? Is it going to be really hard, or is it easy?

Personally, I would say that would be — wait, when you say block number, meaning? Consensus slot number. Yeah, it's the slot number.

So the advantage right now is that you can take a time and calculate the slot, and it just works.
You know, you need to know the genesis time, but it's a simple division. If you change it, you've got a simple division prior to this number, then a bunch of slots that don't exist at all, which is weird, and then a simple division from a different genesis. So at some point we may wind up doing that if we ever change the slot time, but if we can put it off as long as possible, it'd be really nice.

Yeah, I would also say that skipping forward the slots is going to be really complicated. The eth2 spec relies a lot on the idea of the current epoch and the previous epoch, an epoch being 32 slots, and to have gaps in that is going to be super edge-casey. I can just imagine heaps of clients all over the place finding bugs in that. Every time you subtract one from the slot, now you need to go and make sure whether you need to subtract one, or five million, or something. So I'd say that's a bit complicated, in my opinion.

There are all sorts — the whole validator registry has references to slots, multiple per validator for their lifecycle data, and so if a validator tries to exit, they suddenly get into some very weird states.

So I'm hearing: very hard. Maybe we can have two numbers, actually: one is like the classical execution layer height or something, and another one is the consensus slot. This would basically not break anything on either end, as they don't interfere with each other. What do you think?

So you're saying, just when you expose the slot number to the rest of the world, you, like, you know, add five million to it, but for internal communication you're using the real slot number?

No, I would say maybe as the public number we keep the execution layer number, which is the current proof-of-work one. But internally we have these slot numbers, which we may show somewhere in the explorers or something like that; the actual block numbers would be just as they are now in proof of work.
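The "simple division" being defended here is the standard time-to-slot conversion: a minimal sketch, using the real mainnet constants (12-second slots, beacon chain genesis at Unix time 1606824023):

```python
# The simple division discussed above: given a Unix timestamp, the current
# slot is a single integer division from genesis time.
SECONDS_PER_SLOT = 12
GENESIS_TIME = 1_606_824_023  # mainnet beacon chain genesis (Unix time)

def slot_at(timestamp: int) -> int:
    """Slot in progress at a given Unix timestamp."""
    assert timestamp >= GENESIS_TIME
    return (timestamp - GENESIS_TIME) // SECONDS_PER_SLOT

def slot_start(slot: int) -> int:
    """Inverse: the timestamp at which a given slot begins."""
    return GENESIS_TIME + slot * SECONDS_PER_SLOT

# Offsetting slots by a large constant would break this property: there
# would be a range of slot numbers that never existed, and conversions
# would need two cases anchored at two different "genesis" points.
```

This is why the objection above lands: today any client or tool can recover the slot from a wall-clock time with one division, and a one-time jump forward would fork that logic into before/after cases everywhere.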
So there will not be any gaps — I mean, for the execution layer there will not be any gaps, it will just proceed. But at some point during the merge there will be two numbers: one number will be the execution layer number, and another one is the slot number of the consensus layer. So there will be pairs every time.

The execution payload will still have the incrementing block number? Yeah, the execution environment will still execute, so that separation still exists: you've still got slot and execution block number, and that just flows through. It's just about avoiding confusion over which one you're specifying, which I think really just comes down to which API you're talking to. If you're talking to the execution client, then you're talking in execution block numbers; if you're talking to the beacon node, you're talking in slots.

Yeah, but — yeah, go ahead. As the APIs evolve, we may get to points where that becomes more confusing, but then we can design the APIs to take either one and specify which type it is, that kind of thing.

Yeah, for users, I believe they probably should see one number as the main number, and it would be great if it were just a continuation of the execution layer numbers.

Well, I think this is an interesting topic overall, yeah. I don't think we'll get to one number, because slot and height are always going to be two different numbers; they just don't increment at the same speed. Today in the consensus chain we don't really track height at all, we just use slot, but it becomes more important with the merge. I suspect we might have covered this as much as we possibly can, given we're the beacon side of this. Like I said, there's going to be confusion and problems.
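The "index in the clients that maps slots to block heights" mentioned earlier could look something like the following. This is a hypothetical sketch, not any client's actual implementation — the class and method names are invented, and it tracks only a single canonical chain (on forks, one execution height could correspond to multiple candidate beacon blocks, as noted above):

```python
# Hypothetical index a beacon node could keep to answer "which slot holds
# execution block N?". Skipped slots carry no execution payload, which is
# why execution height gradually falls behind the slot number.
from typing import Optional

class ExecutionHeightIndex:
    def __init__(self) -> None:
        self._by_height: dict[int, int] = {}  # execution height -> slot

    def on_block(self, slot: int, execution_height: Optional[int]) -> None:
        """Record a slot; pass execution_height=None for a skipped slot."""
        if execution_height is not None:
            self._by_height[execution_height] = slot

    def slot_for_height(self, execution_height: int) -> Optional[int]:
        """Look up the slot containing a given execution block, if known."""
        return self._by_height.get(execution_height)

idx = ExecutionHeightIndex()
idx.on_block(slot=100, execution_height=5000)
idx.on_block(slot=101, execution_height=None)   # skipped slot, no payload
idx.on_block(slot=102, execution_height=5001)
print(idx.slot_for_height(5001))  # 102: height now lags the slot
```

The usage at the bottom shows the divergence the call keeps returning to: after one skipped slot, execution height 5001 lives at slot 102, and the gap only grows over time.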
It's going to come up with the JSON-RPC APIs. So I wonder if we need to spend more time on this here, or if we just shelve it and see what the execution guys want.

Yeah, we can move on. I mean, it kind of sounds like just having a pair of these numbers, and being clear about the types of what you're referring to, is the simplest path forward, and we can leave it open for experimentation with other options. So on that note, does anyone else have anything else?

Yeah, I wanted to mention — I have been talking about a crawler and a dashboard a few times before, and I wanted to share it with you today. Mostly we show information about the geographical distribution of the nodes and also the client distribution, which is not what we might expect — I mean, it could be better in terms of software distribution. There are a couple of other things that you can see in the dashboard. I also want to mention that the PR for the standardization of the metrics has been merged — thanks, Pari, for that. So I would like to ask the different client implementers to please make the few changes that are necessary, so that we have standard metrics across our clients in the next couple of weeks, if possible. And yeah, I think that would be all. Thank you.

Great, thank you. It's very exciting to see this dashboard. Thanks.

Okay, anything else from anyone? Otherwise, we can go ahead and call it early today.

There's one thing I realized when I look at the mainnet validator count: it's slowly creeping up to be the same as the Prater number. So we don't have to make a decision, but I do think that we should increase the validator count on the Prater side, whether that's post-Altair or before Altair. We don't have to make a decision today, but I just want to notify people.

I agree with that sentiment. Yeah.
Thanks for bringing it up, Terence. It's definitely important to keep an eye on. Just that we probably don't want to fork Pyrmont and Prater at the same time, but we can probably get to Prater, you know, after.

Any final things? Otherwise, we will wrap it up. Sounds like no. So thanks for joining, everyone. Again, very, very excited to see all the progress on Altair, and we will keep pushing on that. And yeah, I think that'll be it. Thanks, everyone. Thank you. Thank you. Thanks, everyone. Thanks.