It's a few minutes past, so let's go ahead and get started. Thank you, Alex, for recording locally; sorry that I'm tethered to my phone. Here's the agenda. Nothing crazy: client updates with a focus on Altair. We'll discuss the point brought up by Adrian today on some of the subtleties of gossiping sync committee signatures. We'll talk about Altair planning; thank you, Pari, for joining to help us discuss and coordinate that. And then general discussion. The merge call was right before, so I think we got all of that out of our system, but I think Proto might have a quick update on sharding on the research side, and we'll leave the rest at that. Let's go ahead and get started. How about Teku kick us off?

Sure. We've updated to the alpha.7 release with the new gossip message-ID changes and the new rewards for sync committees. That's all up and running, and we've kicked off the Yerong Pili devnet with that; the details are in the eth2 testnets repo. A few fixes to the node health API to make it play a bit better with Kubernetes, in particular for us: we now say we're syncing on that API right after startup, until we've found peers. Previously, since we had no peers, we had nothing to sync, so the node said "hey, we're in sync," because it didn't know anybody yet. So we're now actually exposing that startup mode through the API. That's the main thing. We've also got a lot of fixes for discovery and a few tweaks to our gossip stuff. Mostly that's Teku-specific, but there are a lot of learnings out of the previous devnets for our stack. I think that's us.

Thank you, and thanks for putting up the new devnet. Prysm?

Hey guys, Terence here. We aligned to alpha.7, passing spec tests. We're working on the new validator RPC endpoints, mainly the sync committee stuff, and we're almost done with that. We're also working on the networking spec, and that's almost done as well, so we're almost there.
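The node health behavior described above can be sketched roughly as follows. This is a minimal illustration of the idea, not Teku's actual implementation; all names are invented for the example. The point is that a freshly started node with zero peers should report a syncing/startup state rather than claiming to be in sync.

```python
# Sketch (illustrative names only): a health endpoint that reports "SYNCING"
# until at least one peer is known, so orchestrators like Kubernetes do not
# treat a freshly started, peerless node as healthy and fully synced.

from dataclasses import dataclass


@dataclass
class NodeStatus:
    peer_count: int = 0
    synced_to_head: bool = False


def health_status(status: NodeStatus) -> str:
    # With no peers we cannot know whether we are in sync, so report the
    # startup/syncing state rather than "READY".
    if status.peer_count == 0:
        return "SYNCING"  # startup mode: no peers yet
    return "READY" if status.synced_to_head else "SYNCING"
```

A liveness probe would then only mark the node ready once it both has peers and is synced to head.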
I think we're on track to start a local interop-ish testnet to test the fork transition by the end of this week or early next week, and if that goes well, we'll jump into a multi-client testnet with you guys as well. On the maintenance front, we're planning to release our eth2 API support in the next release, so that's very exciting. Other than that, just bug fixes. That's it. Thank you.

Great, good progress. Nimbus?

Hi, Mamy here. On the Altair front, we've also updated to alpha.7. Besides that, we're continuing to work on optimizing Nimbus, and we made a change that significantly accelerates everything that requires public keys from the database, by not compressing the public keys. We also improved our state cache. Besides that, we're working on the validator client: we have a PR sitting there, and we've managed to control Lighthouse with the Nimbus validator client, so we're in good shape. In parallel, we're also adding more and more API endpoints to be in line with the eth2 API requirements.

Great, thank you. Lodestar?

Hey, so on the Altair front, we're still on alpha.6. We're in the process of upgrading to alpha.7, but we've been fixing various performance and stability issues that came up either during our own internal ephemeral testnets or while trying to deal with Teku's devnet. As of right now, it looks pretty good locally; we need to upgrade to alpha.7 and see what it looks like. Other than that, some other optimizations: we're going to delete our pending attestation cache, because we found out it's not very DoS resistant, and we're looking for feedback and ideas on how to actually implement something like that. We stopped updating our fork choice head on every call, and instead we're caching that result and updating it only after processing each block. And as far as publishing goes, for Lodestar we've pulled our API client out into a separate package.
It's pretty nice just to be able to query the endpoints; we put that into a package and published it, and we also pulled our light client prototype out into a package. And we've started releasing nightly builds and integrated all that into our Docker setup, so yeah.

Nice. On the light client, is the prototype primarily on the server side or on the client itself? It's a client that talks to a beacon node over the REST API, with a few additional pieces for sending over updates and proofs. Thank you.

And Lighthouse? Hello everyone, Paul here. With regard to Altair, we've got alpha.7 merged into our Altair branch; it's working well and passing tests, and we're following Yerong Pili. We seem to be getting sync committee messages. We had a couple of issues with the new configs, but we sorted those out, and yeah, it seems to be working. We're looking into it to try to find some bugs and weirdness, which I'm sure we'll find. Separately, we're patching an issue with the VC: we had some problems with signing around the fork boundary, so we're going to fix that. We're still finishing some communication stuff, and then we'll be working on merging our Altair branch into our primary branch at some point.

Outside of Altair, last week we released version 1.4.0 and it went quite well, which is nice. We dropped our memory usage from about six gig to 1.5 gig, dropped calls to eth1 nodes by about 80%, and the users seem very happy, which makes us happy. Moving forward, we'll be pushing on with Altair and hopefully cutting a 1.5.0 release, which we'll be working on into this month and next, and which should have some CPU savings and a bunch of other features.

Excellent, thank you. On this recent devnet, Adrian, did it do the transition or did it start from Altair? It did the transition. That was some of the confusion I caused, because for us it's at epoch 20.
I forgot to enable it as we passed through epoch 10, when it was first going to happen. Configs are great until you forget to set them. Okay, so you used the far-future default where you intended epoch 10 and then switched it to 20? Yeah, so that one's gone through the transition, which the previous ones did as well, just at epoch 10.

Okay, great. Let's actually do planning, and then we can talk about the sync committee signature broadcast timing and cache. Okay, so we're making progress. Adrian, thank you. I think standing these devnets up and doing some additional interop is invaluable. I think most of you have met Parithosh, "Pari," who is at the EF and does primarily eth2 DevOps related tasks, so you've probably run into him with testnets and other things. Pari is going to be joining our calls here and getting his hands dirty with various testnet things: helping set dates, helping pick and host configs, that kind of stuff, and maybe working on your problems if they arise, like hitting epoch 10, forgetting, and then changing it to epoch 20. Pari, do you want to do a quick intro? Yeah, hi. Like Danny said, my name is Pari. Cool.

So I think the steps obviously look like: short-lived devnets, some longer-lived devnets, then picking dates on our two testnets, and then picking a date, aka epoch, on mainnet. In terms of iteration on the short-lived devnets, Adrian and/or Pari can maybe pick and host another one or two over the next week or two, if that seems valuable. And then, what's the current temperature on the earliest fork of one of our large test networks? Is that a two-week target, a four-week target, something like that, on Pyrmont or Prater? Yeah, I would think that two weeks is probably too early. Just looking broadly, I'd definitely be more in a four-week kind of bracket, given those two options. Okay, and I think that's where I would fall on that as well. Terence, do I see you off mute?
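The fork-scheduling pitfall just described, forgetting to set the fork epoch before the chain passes it, can be sketched as below. The constant follows the consensus-spec convention (`FAR_FUTURE_EPOCH` as the "not scheduled" sentinel); the helper function itself is illustrative, not any client's API.

```python
# Sketch of the fork-schedule pitfall: if the fork epoch is left at the
# far-future sentinel while the chain passes the intended activation epoch,
# the transition is silently missed and must be rescheduled to a later epoch.

FAR_FUTURE_EPOCH = 2**64 - 1  # consensus-spec sentinel for "not scheduled"


def fork_active(altair_fork_epoch: int, current_epoch: int) -> bool:
    # The fork only activates if it is actually scheduled AND the chain has
    # reached the scheduled epoch.
    if altair_fork_epoch == FAR_FUTURE_EPOCH:
        return False  # never scheduled: the chain sails straight past epoch 10
    return current_epoch >= altair_fork_epoch
```

Leaving the default in place past epoch 10 means the fork never fires; resetting it ahead of the chain, to epoch 20 in this case, lets the transition happen on the next pass.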
Yeah, I share the same sentiment as Paul. I'd say between two to four weeks, likely leaning toward four, because I'd prefer more local interop testing, or small-scale testing, before that. Okay. Two weeks puts us at the first of July; four weeks puts us at the 15th of July. So let's state the intention: continued interop on short-lived devnets, and maybe keep one running once a bunch of people have joined it and do some testing, just sending random transactions and things like that. On the first, which is two weeks out, the intention would be to set a fork date for one of our large testnets, maybe on the two-week time horizon after that, so heading towards the 15th of July. And maybe, if we're comfortable at that point, setting some targets for mainnet. Well, we might get to the first and be comfortable setting a testnet fork, but then want to regroup on the 15th, which would be another call, before we actually fork one of those. So: get to the first and set a testnet fork target; get to the 15th and, if things have gone well, set fork targets for the other testnet and maybe mainnet at that point. I think we're definitely in the August, early-to-mid-August horizon for the earliest mainnet fork. Does that all sound about right, in terms of these being reasonable targets? This isn't too aggressive, but it'll keep us moving.

I think that's about right. I think the key thing we need to get onto now, in terms of forking testnets, is getting users used to the fact that it's coming, and coming fairly soon, and that they'll have relatively short notice to upgrade. Right, because otherwise it's going to go very badly when the chain splits. So once we pick a mainnet target, I think we'll probably allow a minimum of four weeks from announcing it to the mainnet fork, given how non-emergency forks have been done on the eth1 side of things.
So we'll write a blog post, targeting release on Monday, making it very clear that this is coming, and also talk with some of the EthStaker folks to make sure that they've begun discussing it within their community as well.

I have a question for everyone here. Is every team comfortable with us starting to report bugs and crashes that are related to the consensus transition? I'll ask the question differently, perhaps: is any team not ready yet to receive crash reports, which would be confidentially disclosed? Right, Danny? Yeah, absolutely. We'll usually pick one or two people from each team, work with them directly, and disclose those findings directly. We also usually inform Danny and/or Proto of those. Right, because there's a chance that something that crashes on Altair might be able to, at least in a similar vein, crash mainnet potentially. Yeah, exactly. I'll take it that you're all happy for us to disclose some of those bugs, so you might get a message from us over the next few days.

I had another question: are teams thinking about doing a quick audit, just for the diff that you have for Altair? This is something that I've thought about but haven't formally made a decision on. I'm thinking about it; I'm thinking about thinking about it. Not that this is the correct approach, but audits are certainly not done on a per-fork basis for most eth1 mainnet clients. That's not saying, again, that that's right, but they rely heavily on testing and testnets, and assume that their architecture is generally correct and that those processes are the things that will best find consensus issues, which is probably true from a consensus perspective, but extra eyes are never a bad thing. If you do intend to do that, you need to knock on somebody's door immediately; as you're all probably well aware, auditors are extremely backlogged. That's fair. It was just a thought.
Yeah, realistically. On that front, actually: has the spec been audited? You've been looking at it, right? Yes, I have. I love finding bugs and things. The spec has not been audited recently; it was audited to some extent quite a long time ago, at the phase 0 spec freeze. I will note that about 10, maybe 20, times the number of issues and bugs were reported outside of that audit process than came from the audit itself.

Okay. Adrian, would you mind introducing the issue that you brought up this morning? Yeah. In the earlier tests we found that we were losing gossip messages; basically the inclusion rate for sync committee signatures was really low. With the message-ID changes, that jumped up to about a 70% inclusion rate, but we were still missing a bunch, and it turned out that was because the network was functioning really well: we were producing a block every slot and then immediately producing a sync signature. And both nodes in the network were managing to process the sync signature before they had actually imported the block, so they wound up ignoring most of the signatures. It's the same race condition we've seen with attestations: if you publish them as soon as you first receive a block, and other nodes don't have any caching, "save for later" type behavior, you get this race between the block and the sync committee signature (or the attestation) actually being ready, causing you to drop some. I ran an experiment this evening delaying the production of sync committee signatures to the four-second mark in the slot. We're not getting 100% participation yet, but we're missing entire subnets at a time, so I think what's happening is that we're randomly not getting an aggregator.
I need to add one more log message to prove that, but I'm pretty sure we're now getting a perfect inclusion rate of signatures, and we just sometimes don't aggregate. Essentially, that delay solves the problem, and we now just need to make a choice. I think there are three proposals on the table. One is: just don't publish signatures early when you get a block; always wait for the four-second mark. The second is that we introduce a cache, so you hold on to signatures whose block you don't have yet until the end of the slot. And the third was the suggestion that there be a random delay after you get the block, between a quarter of a second and a second. There's a bunch of arguing over semantics, but I think we just pick one of those, really.

My inclination would be to go with the second and introduce a short-lived cache, so that you keep the dispersal of messages, not all being blasted at exactly the same time, and reuse the mechanism that protects attestations from having the same issue. The cache, I think, should be much more short-lived: on the order of a slot rather than on the order of an epoch. And ideally we get to reuse similar logic from that structure. Yeah, I mean, that's the way I've been leaning as well, partly because I already have the attestation cache, so I could plug it in relatively easily. It doesn't do that yet; that's obviously more work, so that's a consideration. And it just feels like a deterministic solution: you're never going to reject a signature because you don't have the block yet. But zooming in: is it possible to build a DoS-resistant cache? Well, I think so, because at most you're going to hold entries for 12 seconds, and at most you're going to hold 512 signatures, because you ignore anything that's not from this slot and you ignore anything you've already seen from that validator. But you can get...
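The second proposal, the short-lived cache, can be sketched as below. This is a hedged illustration only, not any client's implementation; all names are invented. The bounds discussed above appear directly in the code: entries are scoped to a single slot (so they live at most 12 seconds), at most one message per validator index is kept, and the total is capped at the sync committee size of 512.

```python
# Sketch: a short-lived cache for sync committee messages whose referenced
# block has not yet been imported. Bounded per the discussion above.

SYNC_COMMITTEE_SIZE = 512


class PendingSyncMessageCache:
    def __init__(self):
        self.slot = None
        self.by_validator = {}  # validator_index -> message dict

    def add(self, slot, validator_index, message) -> bool:
        if slot != self.slot:
            # New slot: everything from the previous slot is stale, drop it.
            self.slot = slot
            self.by_validator = {}
        if validator_index in self.by_validator:
            return False  # already holding a message from this validator
        if len(self.by_validator) >= SYNC_COMMITTEE_SIZE:
            return False  # hard bound: at most one committee's worth of entries
        self.by_validator[validator_index] = message
        return True

    def drain_for_block(self, block_root):
        # On block import, pull out the messages that were waiting on this root
        # so they can be re-validated and processed.
        ready = {i: m for i, m in self.by_validator.items()
                 if m["beacon_block_root"] == block_root}
        for i in ready:
            del self.by_validator[i]
        return list(ready.values())
```

On block import, `drain_for_block` returns the held messages for re-validation; everything else expires when the next slot's first message arrives.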
Can't you get... you can't always verify the signatures of the things you put into that cache, is that true? That is probably true. Can't you verify the signatures? You may not have the data for... yeah, if you don't have the block, you don't have anything. Yeah, that's right. Yeah, so you get the block root you're on, in a fork, and therefore the shuffling. Right, but the shuffling, I mean, the shuffling is on the order of a day. That's true; you could just try to verify against your head and you'd be right 99.9% of the time. If you're rejected on a sync committee shuffling, you're on a wildly different fork and you can probably drop it.

I'm not saying the cache is definitely a bad idea; I think the delay is also a good idea, because, as I argued somewhere else, if we know that we're going to have to cache these things, why make every node on the network cache them rather than just having the sender hold on for a little bit? That's one argument. And then, if we just have some delay, you can get by without a cache. And if the cache isn't a perfect cache, like in this case where we can't always verify the signatures going into it, then the cache isn't a perfect deterministic solution either. And we already have this problem with attestations; I think Lodestar mentioned it at the start of the call: how do you make the attestation cache DoS resistant? You can't. Same with this one; you can't. So the attestation cache is a bad idea, and I'd probably say delay the publication.

So I'd argue that this cache is a lot more DoS resistant than the attestation cache, because of the way the... I mean, the current sync committee has been known for an entire day before it becomes the active committee, so it's almost certainly finalized. But then the randomness...
Why does the randomness look much different than this? Because if I wait 0.25 seconds, or some random stretch around that, what does that actually buy me? I can still end up sending a signature to somebody who hasn't seen the block yet. I mean, given that it's only 512 messages, it probably doesn't earn you a whole lot. But just looking from a broader network perspective: you've got a message ready to send, and you know that the receivers all need, say, half a second before they can process it. Do you hold it on the sender side, or does every receiver hold it until they can use it? Yeah, I'm not sure it's as clean as that, but that's where I was going with it. Also, network latency isn't just the processing of the block into the actual state, unless you're telling me that's almost always the dominant factor. It's a present factor.

One more question, which is basically this: losing an attestation by and large doesn't matter; somebody will have seen it, and it's likely to get included anyway. How bad is it to lose a sync committee signature? Certainly less forgiving. What was that? It's certainly a lot less forgiving: you've only got one slot to get it in, and there are fewer aggregators that might be around. Would it be worth introducing some sort of forgiveness mechanism in general? I mean, this came up during the attestation discussions, where we discussed whether the attestation inclusion delay should be increased to allow for more leniency on block-versus-attestation timing. It's an option; it's a trade-off in terms of the optimal latency to follow the chain as a light client. If you added a minimum inclusion delay, if it was to follow two slots instead of one, then you'd have 24 seconds for your node to be able to follow the head rather than 12.
One of the bits of data we're missing in this is that we don't have a decent-sized network on which to see how this actually behaves in the real world. I've got two nodes that are right next to each other in AWS; it's not really surprising that they're getting the block at the same time, and it's all happening very fast. Whereas even on Pyrmont and Prater we're seeing a much bigger distribution of when blocks arrive, and Teku publishes attestations straight away; Lighthouse currently drops them if it doesn't know the block, and it doesn't seem to be a problem: attestation inclusion rates are pretty good. So on a bigger network, how big a problem this is, whether you actually always need to cache the signature, and a lot of these questions about how likely a drop is, are still unknowns, effectively.

I'd say, if possible, I'd like to solve this in the p2p spec, which can be done with minimal damage to moving towards a spec freeze, especially on the state-transition side. Networks, or nodes on the same network, that have slightly different agreement on how they handle this case would still be able to hang out and chat with each other. So, if possible, I'd like to solve it with one of the suggestions that just touches the p2p layer, rather than adding some sort of induced delay on the state-transition side. If we want to gather more data, that's fine; I don't think it's critical that we converge on the solution immediately, right now, but we should probably pick something in the next five days.

So, is the reason that we don't just send them all at the four-second mark that we don't want to pump the network? If that's the case, there are only 512 messages, so it just seems appealing to me to say: send them all at four seconds, and implement a cache if you want as well.
There are multiple reasons. One: you do want attestations well propagated, so that aggregation can happen at the eight-second mark. So if you have the information you need to send your message, the idea is that it's an optimization to send your message when it's available, when its correctness is available, so that you can potentially get aggregated at a higher success rate. But it also does help stagger, so you don't get a message blast. Yeah, okay, that's a good point.

So, going back to the cache again, given that the shuffling is known a day ahead: I guess the problem we'd have is that if we get forks deeper than a day, if the network goes unstable, then... I guess the worst thing that happens is the cache fills up and we start dropping sync committee messages, which is not too bad, right? It's not too bad, and you've entered a somewhat extreme scenario where clients might have issues anyway. Even if you're not finalizing for an entire day, assuming there were actually two forks that deep, and that those partitions of the network were actually communicating and not resolving the fork, that's a more extreme issue in and of itself.

One thing to note is that the signature carries the actual validator index, so you don't actually need the shuffling to be able to verify the signature. You do need it for the other validations, like checking whether they're in the sync committee. So, right, at worst you're bounded by the number of validators. Oh yeah, and the selection proof as well, isn't it? Yeah, you're right. So that does make it much easier to verify. It's a much bigger bound, but I think if you just checked, assuming you're within a day, that your head sync committee is the same, and dropped at publication if they're not in the one your head is on...
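The validation point made above, that the validator index in the message lets you check the signature without the referenced block, while committee membership still needs a shuffling, can be sketched as follows. The condition names loosely paraphrase the p2p spec's gossip validations; the function shape and the injected `bls_verify` stub are assumptions for illustration, not spec or client code.

```python
# Sketch of gossip validation ordering for a sync committee message.
# "IGNORE" = drop without penalizing the peer; "REJECT" = penalize.

def validate_sync_committee_message(msg, current_slot, seen,
                                    head_sync_committee, pubkeys, bls_verify):
    if msg["slot"] != current_slot:
        return "IGNORE"  # only messages for the current slot are relayed
    if (msg["validator_index"], msg["slot"]) in seen:
        return "IGNORE"  # at most one message per validator per slot
    # Membership is checked against the shuffling at our head; a message on a
    # wildly different fork would fail here.
    if msg["validator_index"] not in head_sync_committee:
        return "REJECT"
    # The signature check needs only the validator's pubkey, which the index
    # gives us directly, so it does not require the referenced block.
    if not bls_verify(pubkeys[msg["validator_index"]],
                      msg["signing_root"], msg["signature"]):
        return "REJECT"
    seen.add((msg["validator_index"], msg["slot"]))
    return "ACCEPT"
```

The open question in the discussion is precisely what to do when the head shuffling check is applied to a message referencing a block we have not imported yet.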
I mean, is there not effectively a condition that says drop the message if they're not in the committee? I think there might be. Yeah, there is, but you can't validate that without the shuffling. But I think, when you don't have the block, you could just use your head shuffling: cache it if they're in that committee, and then check properly once you get the block. I think you'd be fine. Yeah, I mean, as you say, it's an edge case. I'm going to go look at how that condition is written, but I think the condition should be written with respect to what you think the head is, not with respect to the message that was sent, in my opinion. It's true, because it has to match your head, doesn't it? Once you've got the block... you don't know the parent. But yeah, that would imply that it has to match the shuffling at your head.

Okay, this issue was just uncovered a few hours ago, or at least since I was awake. Let's continue chatting about it offline and try to aim for a p2p-level solution in the next few days. Otherwise, if we need to get better data on larger networks, we can take it there. Adrian, you mentioned something as an aside that was interesting: that you weren't always getting aggregators on some of the subnets. Yeah, I believe the target number of aggregators was actually reduced, and that might be an error, or not an error, but it might not be a good thing. Do you have any other data on that? I can look at that target number and the probabilities, and we can chat offline if you don't have any other information. I think the key thing is to try it with a bigger set of validators; I've only got 2400, so I'm seeing duplicates in the sync committee, which we weren't expecting, and that makes it less likely to get an aggregator. Okay.
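A back-of-the-envelope check of the "randomly missing aggregators" observation: with a 512-member sync committee split across 4 subnets (128 seats each) and a target of 16 aggregators per subnet, each seat self-selects as aggregator with probability roughly 16/128. Duplicate seats, likely with only ~2400 validators, reduce the number of distinct members per subnet, which raises the chance that a subnet has no aggregator at all. The exact selection mechanism differs slightly in the spec (a hash-modulo on the selection proof); this sketch only illustrates the probabilities.

```python
# Rough probability model: chance a subnet ends up with no aggregator,
# as a function of how many DISTINCT validators hold its 128 seats.

SUBNET_SIZE = 512 // 4          # 128 seats per subnet
TARGET_AGGREGATORS = 16
p_seat = TARGET_AGGREGATORS / SUBNET_SIZE   # ~1/8 selection chance per member


def p_no_aggregator(distinct_members: int) -> float:
    # Probability that none of the distinct members self-selects as aggregator.
    return (1 - p_seat) ** distinct_members
```

With all 128 seats held by distinct validators, a subnet essentially always has an aggregator; with heavy duplication (few distinct members), the no-aggregator probability becomes noticeable, which matches the behavior seen on the small devnet.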
But yeah, it's definitely worth reviewing those numbers. Yeah. Okay, we'll flag that. Okay. Any other Altair discussion points? Cool. Briefly, on Altair communications: I know everyone's slightly staggered on where they're at on this, but hopefully by mid to late next week more and more of us will be collaborating on some of these small documents.

Let's shift into research updates. Anybody have anything to share? I can share something about sharding, just following this merge call; I'm not sure if we have other research updates between Altair and sharding, but I think sharding is probably the main thing to share. Right. Okay.

Where we currently are: there's an updated sharding spec with a new state format. This makes it easier to track confirmation data and keeps it all in one place, making Merkle proofs to the commitments easier. I'm generally happy with that piece of refactoring. There is more, though. We're thinking of changing the way shard proposers are tasked with their proposals. Currently we split validators up into shard committees, and out of these committees we select the proposers, keeping the proposers per shard separate for a time window of about a day. This helps the network layer, but within the spec there's not really much to it. The problem, though, is that it adds caching complexity, and the incentives needed on the network layer to stay on a topic for that long are relatively big.

Meanwhile, we have this discussion about MEV and how we should organize the mainnet chain, and we have this concept of separation of block builders and block proposers. We could do something similar on the sharding layer, where it's the block builder that pays a fee for the proposal, to get the data there, and the proposer selects a data transaction; they can select one without seeing the complete data.
So we may even end up with a model where the proposers can just select and grab data; they don't have to learn about all the layer twos, they don't have to specialize as much. And then it's the builder who, after paying the fee, is incentivized to make the data available by publishing it on the shard topic. So we're looking into this change in incentives, which could work; it all fits with regard to network timing, and it does clean up various things. But of course, by changing this incentive, we need to look carefully at the change and see if it actually works. We've already noticed one possible issue, and a possible fix, and we'll create a PR to the specs repository to further discuss this sharding change.

Yeah, another thing I really like about moving in this direction is that the shard data transactions don't actually need to carry the payload, so there's not an excess of data being gossiped around pre-selection for inclusion, just the commitment to that data and proof that you can pay the fee. I think that would greatly reduce the bandwidth requirements, even in the event there's a competitive fee market, or competitive landscape, for getting data into the shards.

Then there's this discussion about firewalling MEV away. Making it open, like Flashbots, is really good: we should try to encourage every validator to participate in a way where all the incentives are even, where there's no validator with a lot more MEV, or that has to do a lot of extra steps to join a special class. So if there's this market of builders, those can specialize, and it becomes firewalled away from the protocol. And then it's basically the data transaction that first offers the data, and later the builder publishes the data, so we shift this availability responsibility onto the builder. And this is good for privacy, since we don't have to...
So, if we have this very critical part, publishing the data, which is a larger piece of data on the shards, we move that away from the validator, from the consensus identity. It can still be the same person, but if a builder can publish this, and has the incentive to do so, then we can just separate it out. And then also we remove this specialization need for validators: whenever there's a new layer two, or a new kind of dapp that wants to use shard data, they can just participate in this builder market instead, and the shard proposers can just keep doing their task and don't have to worry about these niche changes. So I'm going to be working on a PR that highlights some of these changes soon.

I'm thinking about the time after the merge: this separate block builder sounds like an alternative to the execution services, right? They just prepare the payload for the proposer, who then needs to sign off on it. Right? Basically, yes. There is no combining of multiple pieces of data on the proposer side. We tried to fit that in, but I don't think we should go in that direction; it's just much simpler, more minimal, to do it this way. And the builder can still combine different dapps; they can still combine data and fit it together in one blob.

And then, thinking of a standard outside of the protocol, but something that dapps would use, think of it like ERC-20 in terms of which layer it lives on: we basically just need users of shard data to recognize which parts of the shard blob are being used for which protocol, for which application. So we need some kind of small header to say where to find the data of specific dapps; basically a combination of offsets and some kind of ID. We can figure this out at a later time; we're not quite there yet. Yeah. Okay.
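The "small header" idea floated above, offsets plus some kind of ID so dapps can locate their bytes inside a shard blob, could look something like the strawman below. This is purely illustrative, not a spec proposal; the field names and layout are invented.

```python
# Strawman layout for an out-of-protocol shard blob header: a list of
# (app_id, offset, length) entries that lets consumers of shard data find
# their application's bytes inside the blob body.

from dataclasses import dataclass


@dataclass
class BlobEntry:
    app_id: bytes   # identifies the layer two / dapp, ERC-20-style convention
    offset: int     # byte offset into the blob body
    length: int     # number of bytes belonging to this application


def find_app_data(entries, body: bytes, app_id: bytes):
    # Return the first matching application's slice of the blob, or None.
    for e in entries:
        if e.app_id == app_id:
            return body[e.offset:e.offset + e.length]
    return None
```

A light client for one rollup would then fetch only the header and its own slice rather than interpreting the whole blob.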
So, yeah, for the execution service: no, it would not be an alternative, because that just allows for building blocks, and attesters will have to execute the payload anyway. This is for shard data, so there's no execution. Yeah, right, right. Is it similar to the proposer/builder separation that was recently published? Right, it is similar to the mainnet separation, in the sense that we do have builders. The point is that the base protocol, layer one, doesn't execute the data, but the builders will still want to execute their rollup data and sequence transactions, whatever they want to do, and that's a separate process from the shard proposal role, which just needs to select data. We shift the availability incentive towards the builder, and meanwhile this cleans up some of the networking and some of the concerns around specialization of block building and whatnot. And yeah, I think we'll just start with a PR and then we can discuss more.

Other research updates? Yes, just a quick comment. The work that I showed a couple of months ago about the resource consumption of the different clients will be published at a blockchain research conference in Paris in September, and there will also be a poster about the network crawler that we've been working on. Speaking of the network crawler: we've developed a new version that can run 24/7, gathering data continuously and charting it in a dashboard, and we're trying to release this dashboard in the coming weeks. Another thing is that, with partners, we finally have a first minimum set of metrics that already exist across all the clients, and we're in the process of deciding the best nomenclature, the best names, for these metrics, so that we can have a first standard for metrics across clients. That's it on my side. Got it, thanks. Okay, any other research updates? Great.
Anything else related to the spec, or any discussion points in general, that we'd like to discuss before we close today? Okay. We'll aim to get this p2p sync committee issue resolved in the coming days. We're also continuing to increase test coverage for Altair, so expect a new, very iterative release next week with additional tests. Thank you; really appreciate all the hard work, and excited to see Altair moving. Talk to you soon. Thanks, everyone. Bye.