This meeting is being recorded. Okay, we are now recording. Hi to the couple more people who just joined. Okay, so this is our fourth 4844 breakout. We have a lot on the agenda today — hopefully we can get through it all. At a high level, I want to discuss where we're at with the implementations, Geth and Prysm being the two main prototypes being worked on, and I see some folks here who signaled they wanted to potentially work on other implementations as well. So: where the current ones are, what the potential blockers are, and what we should do next to go beyond the current DevNet we have. I know there were also a lot of conversations happening about the libraries to use for KZG clients, so if we can get a quick update on that, that'd be great. And then Danny, you put out a doc about sync yesterday that was quite good, so you can probably discuss that. And the last thing I really want us to get to — everything else is kind of a bonus — is that there's a bunch of people from the community who want to contribute to this, so we can take a few minutes to walk through what some useful tasks are and where people can be helpful. I think that'll be good. If we have time to do updates on the ceremony, that'll be great, but we may not get there. I guess to kick it off: Roberto or Mofi, do either of you want to give a quick update on where we're at with the current Prysm and Geth implementations? I'll let Mofi take that since I've been out a few days. Of course. Yeah, sure. So, where are we at? There have been a couple of spec changes, both in consensus and execution. For one, the fee market updates: we have a fully fleshed-out specification for how the fee market and gas pricing should work, and that is being implemented in our execution client — Geth, in this case. I had a PR open that was merged, but there were some bugs, and I have another PR up to fully flesh things out and iron out the kinks there. This change does not include Ansgar's most recent updates; it targets what was already merged into the spec repo — that is, moving the state from the EVM to the block header and having a simple gas price targeting rule for the blobs. So that's currently in progress. We're also currently working on the corresponding change in consensus, in this case in the Prysm client. This work has been going on for some time now; it's taking a little bit longer because — as you may have seen from the flame war in the Discord channel — there are some compatibility issues we need to be mindful of when implementing this change and integrating it with Geth. So those are the two main things we're currently working on. Got it. And on the point of that second PR — Ansgar, I know it's been open for a while now, and there's been a lot of back and forth on it. Ansgar, you're on the call: do you want to give us a quick update on where things are at there? Sure. So, I'm not sure how many people have had a look. Basically, it's a moderate-size change to the fee market as it stands in the EIP right now. We used to want to charge in ETH directly — basically, we had this floating gas price for blobs — and now we want to move to a system where we introduce a second type of gas that, after some back and forth, we want to call data gas.
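For reference, a minimal sketch of the excess data gas scheme being described here, following the shape of the EIP-4844 draft at the time (the constant values below are illustrative, and exact names were still in flux):

```python
DATA_GAS_PER_BLOB = 2**17                # illustrative: one blob = 131072 bytes of data gas
TARGET_DATA_GAS_PER_BLOCK = 2**18        # illustrative target of two blobs per block
MIN_DATA_GASPRICE = 1
DATA_GASPRICE_UPDATE_FRACTION = 2225652  # illustrative smoothing constant

def calc_excess_data_gas(parent_excess_data_gas: int, new_blobs: int) -> int:
    """Header-level accounting: how far cumulative blob usage has run above target."""
    consumed = new_blobs * DATA_GAS_PER_BLOB
    if parent_excess_data_gas + consumed < TARGET_DATA_GAS_PER_BLOCK:
        return 0
    return parent_excess_data_gas + consumed - TARGET_DATA_GAS_PER_BLOCK

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_data_gasprice(excess_data_gas: int) -> int:
    """The blob gas price rises exponentially in the excess, like 1559's base fee."""
    return fake_exponential(MIN_DATA_GASPRICE, excess_data_gas, DATA_GASPRICE_UPDATE_FRACTION)
```

The block-based versus time-based debate that follows is essentially about what `calc_excess_data_gas` subtracts: a fixed target per block, or a target proportional to the time elapsed since the parent.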
And for now, blobs are the only thing that is being charged in data gas. I think the PR is mostly ready. There are some small questions still remaining. Well, one big one: if people remember, there was this idea that we also wanted to bring 1559 over to a time-based targeting system, away from a block-based targeting system, so that we have constant throughput over time even if there are missed slots. I had an EIP for that in the past, and it turns out it's easier to do if we do this excess accounting that we want to do for 4844. But there are still some open questions around: do we want this to be per slot, or per second? Because what if slot times change in the future? There are a few attached questions there, just because ideally, once we lock a design in here, we would also want to move the main base fee 1559 mechanism over to this design in a later fork. So we have to make sure that it also works for the main 1559 mechanism, and because the gas limit there is currently voted on every block, it's a little bit more complicated. So basically, this whole sub-question around time-based targeting is still a little bit open. But besides that, I think it's mostly ready. And again, I don't think it's a big change from a conceptual point of view; of course, implementation-wise there are some tricky issues. But yeah, I would expect this to be merged, say, next week. Got it. Yeah, I think right now this is probably the main blocker for launching the next version of the DevNet. So in terms of implementation, I assume you'd only want to start implementing the changes once they are merged, because before then it's kind of tricky to rely on them — especially if it's unclear whether you're going to go with a time- or slot-based approach. I can imagine that changes the implementation quite a bit. Okay, makes sense. Would it be preferable, then, to maybe try and just merge this PR and later on have a separate one for moving to time-based? Or would it be better to resolve this first, so that we then wouldn't have to change the implementation again? Okay. There is a stub in the implementation for testing. We could merge and say: use the stub for this DevNet, and put a warning on the other part. Oh — what do you mean by a stub in the implementation? How would that work? No — Ansgar, isn't there a section that says, quote, "for early implementations, just do X"? Oh, that — yeah, but that's in the existing EIP already; that's not a new thing. And that's a constant price, right? Yeah, that's kind of what we relied on for the first DevNet. But I think one of the goals we have for the second DevNet is to have a more concrete representation of what the spec is, so we can start building tooling on top of it and start collecting meaningful metrics — using stubs or something like that wouldn't be very useful for the second DevNet. Right. I guess if we're balancing trying to ship some sort of DevNet in the next couple of weeks, then it seems like there's going to be uncertainty on this gas market at least for the next five days. So, you know, we should define the right trade-off there. Yeah. I guess from an implementation perspective, Mofi, I'm curious: if we were to do it all block-based, as the current PR points to, and then switch it in a few weeks to time-based, is that better?
Because then we can move forward and at least have a second version of the DevNet. Or is it going to be so much work to then rip out all the block-based fee market that it's not worth it? It would certainly be way less work than the initial fee market update, where we had to change the payload and update both consensus and execution. So yeah, I think we could implement this now for the upcoming DevNet, and then, if we do move to time-based gas pricing, that could easily be integrated with the updated DevNet and we can iterate from there. So yeah, it will be easier, I think. Sounds good. And I think there are some partial changes that are uncontroversial — like changing from charging one data gas per blob to basically one data gas per blob byte, or something like that, just so that it's easier later on with the time-based changes — but I think that part is uncontroversial. So I could get the PR into a form where hopefully it can be merged pretty soon, and then maybe have a very small separate second PR a little bit later, if that seems like the most practical way to go. Yeah, I think that makes sense. And I think the time-based versus slot-based question might be something that requires broader discussion. So, yeah, I want to make sure that we don't move to time-based and then there's pushback from client teams for one reason or another when we've already implemented all of that across the different prototypes. Whereas slot-based seems pretty uncontroversial. Right, that's my thinking as well: split out the uncontroversial PR, and then we have some time to debate the rest. Okay. Yeah, I think that sounds great. Any other thoughts or comments on that? When do you think we can iterate on the last couple of little changes in this PR, Ansgar? Yeah, I was basically holding off a little bit to get the time-based question resolved, but if we want to fast-track this PR first, then I'll go through whatever remaining open comments there are later today, and then we can hopefully get this merged by early next week at the latest. Cool. I guess we'll get this in right before Devcon, and then either during or right after Devcon we can launch another DevNet using it. And yeah, if anyone's looking into starting an implementation, or continuing one of the existing prototypes, I think you can also just reference that PR. Okay, anything else on that? Okay. Then there was another PR that got merged on the consensus specs, by George, about the reverse bit ordering. And I think the question was whether we want to include — oh, thanks — yeah, whether we want to include this in the next version of the DevNet as well. Yeah, I forget the reason we were thinking we might not include it. Mofi, do you remember? I think it was that it's not exactly a cosmetic change, but it's a change we're making to make proto-danksharding fully compatible — or more compatible — with full danksharding. And this change tweaks the KZG crypto a little bit to make that happen, but it's not critical for proto-danksharding itself. Yeah, that's correct. But the changes are also extremely trivial, in that I think you can implement everything by just reordering two constant arrays — the trusted setup and the roots of unity. So in a way, it is very easy to implement. But I agree, you can test everything on 4844 only, without implementing this.
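For the record, the reordering in question is a bit-reversal permutation; a sketch consistent with the consensus-spec helper, applied once to the roots of unity and the trusted-setup points:

```python
def reverse_bits(n: int, order: int) -> int:
    """Reverse the bit representation of n, where order is a power of two."""
    bit_length = order.bit_length() - 1
    return int(format(n, f'0{bit_length}b')[::-1], 2)

def bit_reversal_permutation(sequence: list) -> list:
    """Reorder a power-of-two-length array (e.g. the roots of unity or the
    trusted setup) so element i moves to position reverse_bits(i, len)."""
    order = len(sequence)
    return [sequence[reverse_bits(i, order)] for i in range(order)]
```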
Thanks for that color. Okay. So, does it make sense to hold off on implementing this in the next version of the DevNet? No strong objections. Sweet. And then one last bit on the implementation side: I noticed a couple of people wanted to start looking at different client implementations. We have Geth and Prysm prototypes now, but as we get more, that'll be super valuable, because we can do some cross-client testing. And Terence — I believe with the help of Danny — you've put together a really good CL implementers guide. I'll link it in the chat here. But Terence, do you want to take a minute or two to walk through it? Sure. Yeah. Hello, everyone. So, at a high level, I've been thinking about what it means for a client team to implement 4844. I basically break the documentation down into several portions. One is the storage requirement — what is the storage increase? Because when we first started the beacon chain, we were advising people to get a one-terabyte SSD, and that's probably not going to fly anymore after the merge, and certainly not with 4844. So that's something to consider. And also the networking requirement: currently we advise something like 10 megabits symmetrical and recommend 20 to 25 megabits per second, so how does 4844 affect that as well? And then syncing, which I think Danny will cover a little later: what types of validations should we do during sync? Especially since we can sync forward and we can sync backward, and then you have these edge cases where you can have a block without a blob, or a blob without a block — so how does that work? And then, last but not least: how do we treat the fork choice when a block comes without a blob? That's definitely something very interesting, because right now we have this notion of optimistic sync, which means the CL client can still treat a block as the head before it's fully validated, but that doesn't make much sense when a block doesn't have its blob. So that's something worth considering. Yep, that's pretty much it — take a look at the documentation and feel free to give feedback. Nice, yeah, that's a really good overview of all the bits, and if someone's looking into another CL implementation, hopefully it's useful. Regarding the bandwidth concerns: is there anything we can do to not have every node consume a multiple of the necessary bandwidth for block distribution? Are you talking about the gossip amplification factor? Yeah, exactly. It's bounded on the consensus layer — like a target of six to eight — rather than fanning out totally, but it still seems very wasteful, given the amount of data that we're now considering. I mean, I think that, certainly, I would say one megabyte or two megabytes is maybe untenable, given the gossip factor. Yeah, exactly. So, do we have any ideas on how we could reduce this? I guess basically the two options are: either you gossip to fewer people, or you gossip smaller things, right? And gossiping smaller things doesn't help if they have the same amplification factor — whether you make it one megabyte, or 128 kilobytes times eight, that doesn't change anything, unless you're actually just having a smaller max payload.
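As a back-of-the-envelope check on the amplification worry being raised here (all numbers are assumed for illustration):

```python
blob_payload_mb = 1.0    # assumed 1 MB of blobs per block
mesh_degree = 8          # the gossipsub mesh target of six to eight mentioned above
slot_seconds = 12
distribution_window = 4  # blobs should ideally propagate well before attestation time

upload_per_slot_mb = blob_payload_mb * mesh_degree         # ~8 MB uploaded per slot
sustained_mbps = upload_per_slot_mb * 8 / slot_seconds     # ~5.3 Mbit/s sustained
burst_mbps = upload_per_slot_mb * 8 / distribution_window  # ~16 Mbit/s during the burst
print(sustained_mbps, burst_mbps)
```

Under these assumptions, the burst figure alone approaches the 20-25 Mbit/s recommendation mentioned above, before counting blocks and attestations — which is the point being made.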
But yeah, I mean, you can reduce the fan-out or distribution factor, but that potentially, one, increases gossip times, and two, I think, begins to reduce the resilience. Are nodes aware of their own bandwidth? Like, do clients, or does the P2P layer, somehow know this? I do not believe so. Because if nodes knew this information, then we could simply say: nodes that know they're on a low-bandwidth connection — say, less than 25 megabits per second — just reduce their outgoing amplification. And maybe they can also set a flag with their peers saying: hey, don't send me the payload, just give me a notification that it's available, and I'll ask one peer for it. Right. I mean, my suspicion is that we have a very dense network of high-bandwidth nodes, right? I would say most nodes probably easily have 100 megabit or more, and quite a few will have gigabit. So if we simply make it so that these nodes distribute among themselves very quickly, then the other nodes can easily get it from them. That may not be the perfect solution for the ultimate sharding implementation, because it certainly introduces some centralization vectors among nodes, but I think for 4844, which is kind of a temporary thing, it may just be good enough. Yeah. So part of your statement was also, you know, push versus pull, essentially: can some of these nodes pull it down rather than getting it gossiped? Exactly. And this should be the lower-bandwidth nodes. And then we can still get extremely fast distribution through the peer-to-peer network, because all the high-bandwidth nodes just gossip it as usual, and the others will, with exceedingly high probability, have one of those high-bandwidth nodes among their peers. So the easiest way to do that, without deep changes to gossipsub — assuming these are two separate network payloads — is essentially to have the blob sidecar be an optional topic. And if you're not on the blob sidecar topic and you get headers, or beacon blocks, then you go and ask your peers with requests. And you actually know which peers to ask, because you know which peers are advertising that they're on that topic. You could make it essentially work — you know, there are trade-offs here, and it's kind of a hack. How do users configure that value? And then you're kind of shifting the assumptions around the healthy mesh, and that kind of stuff. But — so, one thing to note is that block validity is tied to the blobs now, and I'm curious to know what the original arguments were, if there were any, for beacon blocks to always be gossiped rather than using a pull-based model as we have now. The beacon blocks? Yeah. So, I mean, I guess in general it's very important — this is a very important message that literally every node gets. There is a pull-based backup with the gossipsub chatter, IWANT and IHAVE. But it's also just a product of using the gossipsub stuff off the shelf; you know, it's primarily a gossip protocol. Gotcha. And yeah, in terms of designing push versus pull: there are strategies where you're gossiping to some amount of peers and chattering to others, and that's kind of what's happening here. But because it's a high-value distribution message, you have to gossip to some amount.
Otherwise it becomes very slow — you're pretty much still gossiping, just in a slow manner. Right. So I ask this because: doesn't that rationale extend to blobs, given that we cannot validate beacon blocks until the associated sidecar is available? Right. I mean, the argument I'm bringing here is not that they shouldn't be gossiped; my argument is that it's enough to gossip to some of the nodes. And actually, I think if we wanted to, we could do the same thing for the beacon blocks — it's just less urgent because they tend to be smaller. Right. So I think the current system, in my opinion, is very wasteful. I mean, given more engineering work, we could definitely build a much better system for this, one with much less overhead. But some of the overhead here is also resilience to attack — that's part of the gossip. Redundancy is not always bad. Yeah, but — well, I mean, there are better ways to achieve redundancy than just sending copies, right? Basically, erasure coding. So I think there are much better systems than what we have now; the system we have now is just very simple, and the better system would require more engineering work, for sure. Setting that aside, Mofi, I think it is very important that both of these messages are widely gossiped across the network. The quote-unquote data availability check is to get all of that data. I think the intuition here is that if the strategy becomes partial push versus pull, rather than just full push or full pull, then maybe there's still a distribution time in that trade-off that is reasonable. Like, if 50% of the network is pushing, and thus has waste, and 50% is pulling, and thus is a little bit slower, we might still be able to get a distribution model that's under the four seconds that we're good with. But that's just a hope about that trade-off space. Gotcha, thanks. Is there a way to tell only one of your peers that you're subscribed to a subnet? Say I know that I'm a low-bandwidth node: I still want to get the blobs, but I don't want to get a lot of copies of them. So I tell only one of my peers, or maybe five of my peers, that I'm subscribed to this, and the expectation is I get one copy. I mean, it's a bit hacky, but that might also — huh? I mean, off the shelf, probably no. You know, you could imagine some additional configurable parameter; it wouldn't be too difficult to get that in there. But then the kind of analysis of gossipsub — all the attacks that it's presumably resilient against — you know, I think you would begin to degrade something there. Is this something we need to test somehow? Obviously, we know we're introducing a bunch more requirements at the gossiping level; it seems unclear how much more we can introduce, and what the effects of that are on the network. So this is something we definitely need to test. If nothing else, we need to figure out what the safe gossip distribution number is — what is the safe number for this? Or, if we're not happy with that safe number, then we need to be making engineering changes. Pari on our end is opening up and beginning to do some simulation analysis, and hopefully we'll have something, at least bare-bones.
If we assume X distribution of bandwidth and one-megabyte or two-megabyte blocks, then this is what it looks like — hopefully before Devcon. But there's definitely some additional work to do here. This is one of the things I'm most concerned about with current 4844: we don't know what the network can handle in terms of pushing this data around. The one-megabyte, two-megabyte safety assumption comes from StarkWare's — I believe StarkWare's — big-block analysis from 2019 or 2020, where they pushed around large, relatively incompressible blocks on mainnet, paid for them, and showed that the uncle rate was not greatly affected. That's the best we have right now. I'll share that. Yeah. By the way, I wanted to bring something up, and maybe I'll bring it up now because it's somehow related. I'm not really comfortable with the fact that we don't know those numbers, and I feel that it has impacted the choice of KZG. For example, when I read the argumentation that says we cannot use an alternative to KZG, the main argument is that either it won't be compatible with data availability sampling, or it will consume more data. But I'm saying that the impact of KZG — the fact that it requires a trusted setup — is huge. So I think we should spend some time doing those analyses, see how much we can handle in terms of bandwidth and storage and so on, and then decide if we can use another polynomial commitment scheme. I don't know if that's fair, but yeah. Can you name a concrete commitment scheme? Because we have done this analysis — that's been done for years, right? This is not a new idea. I mean, we have for a long time thought about STARKing Merkle roots; we have thought about using FRI directly. FRI? Okay. They're all very far from being practical — like, very far. Okay, very far. So it is very clear that we cannot choose them. And Dankrad, the IPAs are an order of magnitude larger? Yeah, IPAs are definitely quite a bit larger — proofs would be several kilobytes, I think around five to ten or so, depending on which exact scheme you use. And there's one big problem: there's no efficient algorithm to compute all the proofs, which we do have for KZG. So that's a major downside as well. Yeah, and IPAs would also only solve the trusted setup problem and still not be post-quantum. To me personally, the trusted setup is a much smaller problem than not being post-quantum. Somewhat related to bandwidth concerns: I'm working on setting up a sort of community cluster for observability around the 4844 testnet. Right now I'm essentially only running it on my nodes, but I am measuring a bunch of infrastructure metrics, and I can also add network metrics to that. Then hopefully at some point next week I can give broader access to some dashboards and things like that, so at the very least we have some baseline for what the current testnet is using. The only concern — and I guess question — I have right now is: how good a signal are these metrics from a testnet, considering the testnet is fairly small right now? Sorry — if you were to run large blocks on a testnet? Yeah. I just think, one, they're small, and two, the distribution of nodes does not necessarily reflect that of mainnet. Mainnet might have both way more highly powered nodes and way more home nodes.
You know, even people that are running on testnets who are home stakers might be using cloud instances, because one, they don't care about the security of that, and two, they don't want to overload their local bandwidth. So I don't think it's super representative. That doesn't mean it's not worth doing the experiments, but we have to account for that. And then when we get into simulations or anything like that, we're going to be guessing the distribution of what nodes look like. We can certainly do some worst case — let's say there are 10,000 nodes at 10 megabits per second — and see what happens. And you can also do some pen-and-paper analysis, but it's hard. Can you remotely analyze the upstream bandwidth of nodes somehow? I'm not 100% sure. No — you can try to measure round-trip latency, but I don't think that really gives you it. Yeah, I was just wondering if you could basically DoS each node for a second and see how much you can get through, or something. Yeah. So one of the related things being discussed in the sharded-data channel was writing a sort of spammer tool to just spam blob data at a node. And the primary reason I'm setting up some of this observability stuff is so that I can spam my own node and measure performance that way — and do some other analysis later on. But just for this: we could set up a spammer to send a certain size of blob at a certain frequency to just a single node, and see if we can extrapolate anything from the metrics we see. But I'm not sure, again, how accurate that would be. I think that'll be helpful for a different reason, though, because you want the throughput on the network to be well below what a node can handle in the worst case. So when you think about spamming a single node: if you know your node can process, I don't know, 20 megabytes of blobs per slot, and we're considering going with two, then at least we know we're safe on that front. But if at three megabytes of blobs per slot your node has issues staying in sync, or anything else, that's really helpful to know. So it's different from the gossiping question, but it tells you whether your node can actually process the amount of stuff it would be receiving on mainnet, with a really large error bar — or margin of safety, basically. Yeah. Another potential idea that Dankrad and some others were discussing, assuming we stay in a low-gas paradigm: maybe on the weekends, do some sort of analysis where you abuse calldata the way StarkWare did a couple of years ago, and send large blocks on mainnet, but potentially do a better analysis. So maybe have some sentry nodes, see the different timings, maybe go to various client teams and other operators, get logs of when things were received, and get better data on mainnet. I mean, the best possible metric you can get is actually just looking at attestations. So I would argue that setting up a few nodes that just watch all the attestation subnets and see the delay of each validator's attestations can give you a lot of information already, because it basically tells you, at least for all the staking nodes, how well they're doing at processing those blocks. So we'd probably want to know when random nodes around the network get the blocks, when random nodes around the network get attestations, and watch chain data for blocks and attestations.
Would 1559 make this harder to do? I mean, we'd need to do it for like 10 blocks or something, right? Okay, so the base fee will just go up a little bit. What does 10 blocks do? I think it's a bit more than — maybe 4x. Yeah, a bit more than 2x, yeah. But it's reasonable; it's not 40x. It'd be a good experiment. The thing is, if the experiment doesn't go well, it's hard to iterate on it, but it could at least give us some information and tell us which direction to go in. Yeah, I think that would actually be a really cool thing to do. And do we know how much the StarkWare one cost — I mean, Abdel, could you find out how much it would cost? From the gas cost, we know exactly how much it's going to cost, right? Yeah, actually, yeah — it's very easy; we just need to work it out. Yeah, and you can have the experiment wait until gas cost hits some minimum before starting. Exactly. So, I have a follow-up question. Can we leverage the efficiency of KZG to improve the data availability guarantees in the protocol? For example, Edo from StarkWare, who is on the call, thought about a system where you could do some random queries based on randomness — RANDAO, for example — queries that would be included in the block to enforce availability. Because for the moment, we trust and rely on the honest validator implementation. So, can we leverage the efficiency of KZG there? I don't know if that's clear — maybe, Edo, you want to jump in and explain what you have in mind? Sorry, this is completely off topic for 4844 at the moment, but I'm happy to answer that question in private messages. Yeah, since we only have 20 minutes, I would agree — can we move this to the Discord? Let's do that. Yeah — but I do think it would be worth it to try. Back to the StarkWare thing: if someone wants to look into how we could replicate and adapt it, I think that would be really valuable. So, I mean, creating those blocks is trivial, right? I can create a script that will do this. The only thing that needs to happen is that someone sets up the instrumentation so that we actually get good data from it. We will get some on-chain data — just the on-chain attestations will already be pretty interesting — but it would be a bit wasteful not to have some nodes that simply record all the attestations and give us much more data. Yeah — so using sentry nodes, or using the diversity of nodes that client teams are maybe already operating. I think there are also operators who already do this — who basically watch the whole peer-to-peer network. So we could just get in contact with one of them; they could simply be part of the experiment and give us that data, and we wouldn't have to do it ourselves. Yeah. So, if somebody wants to do this, what's the list of metrics they should be capturing? We want, for a given node: when did they first see a block, and when did they first see every single individual attestation they got off the wire. And I think that's probably it, because everything else is chain data — you then want to see whether blocks were orphaned and whether attestation inclusion rates stayed high. Yeah. Okay, so just first time to a block, first time to every attestation for each block. Yeah, I agree with that as well.
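(On the 1559 aside above: ten consecutive full blocks compound the base fee by 12.5% each, i.e. 1.125^10 ≈ 3.2x.) And on the metrics list, a minimal sketch of the first-seen recorder being described — the hook name and call sites are hypothetical, since each client would wire this into its own gossip handlers:

```python
import time

# (kind, identifier) -> monotonic timestamp of first arrival off the wire
first_seen: dict = {}

def record_first_seen(kind: str, identifier: bytes) -> None:
    """Call from the gossip handlers when a message is first decoded.

    kind is "block" or "attestation"; identifier is e.g. the block root, or
    a digest of an attestation's (slot, committee index, beacon block root).
    Only the first sighting is kept, since that is the latency we care about.
    """
    key = (kind, identifier)
    if key not in first_seen:
        first_seen[key] = time.monotonic()
```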
It's best if these are — well, it's best if these are maybe fully connected nodes that see all attestations, because they're on all attestation subnets. But then, all of a sudden, that's potentially a biased node with respect to where it sits in the mesh, because it's so well connected — though I don't know by how much. Coupled with many of those nodes across the world, I think it would still be very, very good. And should we try to replicate low-ish bandwidth? Like, you could imagine doing this with a 10-megabit node, a 25, a 100 meg, and then a gig. Yeah — I mean, if you're willing to not just use other people's nodes but provide our own sentry nodes, then I would do a distribution of sentry nodes. Yeah, basically we can do that pretty easily from our main nodes. And I also have a few at-home nodes; I can set those up as well. I think we have the infrastructure to monitor that data already. Nice. But do you have a way to record it — does that exist? Yeah, today we capture the arrival time for everything. Okay. And you have nodes that would be powerful enough to just subscribe to all gossipsub channels? Yeah, that's not hard either; we just have to basically upgrade the instance and add more peers and such. I mean, if you think that's easy for you to do, then yeah, that would be great — we should just do it. And who knows how long gas will be cheap, so let's do it soon. Yeah, and for gas, we can figure out the budget to pay for it. Okay. And then, the thing I was skimming — this StarkWare post — they said they did this over a range of about 6,000 blocks. Is that roughly the duration we'd want? I think we'd want a burst, or a handful of bursts. I don't think we're going to be doing a 6,000-block test; 6,000 blocks is like 20 hours — basically a day on proof-of-work blocks. Yeah. No, I think the shortest and most intensive burst possible is what gives us the most information. Right. Okay. Like, we'd rather have ten 2-megabyte blocks than twenty 1-megabyte blocks. Got it. You know, we can do this for a minute or two, see what's going on, and then, if we want to do additional analysis — assuming gas prices stay low — we can. I guess we should also ask ourselves beforehand: is there any chance we might actually break something? Should we do a one- or two-block test first and see? Yes. Yes. I guess the point is to find out what breaks — I don't worry too much about whether it would recover. Right. Yeah. Those validators might want compensation for the missed attestations. Okay, we're going downhill quickly here. Okay. I guess, Terence, on the Prysm side: is there anything — do you need help from anyone else? Or is this something your team can just set up, and then we can get somebody else to work on building the blocks and scheduling when the actual test would happen? Right. I think a good place to start is just a one-pager with the requirements, so we can put those on paper and share them with the necessary parties. Once we have the one-pager, I think it's relatively easy. Cool. Terence, is it easy for others that are running Prysm infrastructure to get similar data? Well, I will probably publish a branch with the modifications, so as long as they update to that branch, it should be fine. Yeah.
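For the spam-script half of the experiment, a minimal sketch of a large-calldata transaction — assuming web3.py v6 against a local node; the endpoint, key, and fee values are placeholders:

```python
import os
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed local endpoint
SENDER_KEY = "0x..."                                   # placeholder funded key
sender = w3.eth.account.from_key(SENDER_KEY).address

# ~1 MB of incompressible calldata. Nonzero bytes cost 16 gas each (EIP-2028),
# so a 30M-gas block fits roughly (30_000_000 - 21_000) / 16 ≈ 1.87 MB.
payload = os.urandom(1_000_000)

tx = {
    "from": sender,
    "to": sender,            # send-to-self; only the calldata matters
    "value": 0,
    "data": payload,
    "gas": 21_000 + 16 * len(payload) + 50_000,  # intrinsic cost plus margin
    "maxFeePerGas": w3.to_wei(20, "gwei"),
    "maxPriorityFeePerGas": w3.to_wei(2, "gwei"),
    "nonce": w3.eth.get_transaction_count(sender),
    "chainId": w3.eth.chain_id,
    "type": 2,
}
signed = w3.eth.account.sign_transaction(tx, SENDER_KEY)
print(w3.eth.send_raw_transaction(signed.rawTransaction).hex())
```

At 16 gas per byte, one such megabyte costs about 16M gas, so the total ETH cost is just that times the prevailing base fee — which is why the "wait for a low-gas window" trigger above matters.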
And then one thing that would be neat: if we ask across other client teams, it would be good to sanity-check. If there's another client team for whom it's easy to get this data, then getting just two that are roughly aligned would at least make me feel much more comfortable. Yeah — Prysm might be really well connected, or really poorly connected, due to something we're not aware of, so getting another one. Yeah. And I'm sure we can find some other node operator somewhere — someone who's not a client team — who wants to do this, or already records all of this, whether that's a staking provider or a team like Blocknative, someone that has highly connected nodes. Yeah. Sweet. No, this was really good. Is there someone who wants to take on writing this one-pager of requirements and sharing it with the group here? I can help — I'll make a document right now, start filling in those notes, and share it here. Okay. Awesome. Thanks. Sweet. We have only about 10 minutes left, but I think this was quite valuable. Oh, yeah — make a slab, that's it. Yeah, that's a good one. Thanks, Terence. Okay. So, we had a couple more things, but quickly, on the KZG library side: are there any notable updates that people wanted to share? I know there's the c-kzg effort that's going on. Dankrad, maybe you want to give a quick update on that? Yeah. So Ramana and I built, on top of Ben Edgington's work, c-kzg — a library that has low-level implementations of all the functions necessary for 4844. It's built on BLST, so it's all pretty fast, and everything's in C. And right now we're basically looking for a client, or clients, that actually want to use these functions, so that we can build an API together — I think people were a little bit unhappy with the BLST APIs, so it would be worth making something that actually works for clients and makes their lives easy. So if there's anyone on the call who says, "right now we need a library for this", or "we need a faster library than what we have now", it would be great to connect. Nice. And I guess, Mofi, on the Geth side, do you think that's needed right now? Possibly. One thing I recently realized from Terence's write-up of the implementation notes is that the current implementation of computing the proof from the blobs is not quite as efficient as I would hope, and I was going to look into what we could do to optimize that once I'm done with the DevNet, but I'm not sure if there's anything further — so, we have the proof computation implemented in c-kzg, and it's very well optimized, so you can use that. The only thing is that it's not yet parallelized, but we could do that — there's a simple way to parallelize it as well; it just wasn't a priority so far. It uses Pippenger for the multi-scalar multiplication, so it's quite fast. Yeah, I'll definitely use it. And Alex said in the chat that they're from Nethermind and they're looking for an implementation, so it might be needed there too — for them to start from it, so we can get some feedback on it. It might be easier to use it from scratch than to swap out whatever is in Geth already. Cool. Yeah.
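For context on where a library like c-kzg plugs in: roughly the shape of the sidecar validation in the consensus-spec draft of the time, where `verify_aggregate_kzg_proof` is the pairing check the library has to provide (a sketch, not the authoritative spec text):

```python
def validate_blobs_sidecar(slot, beacon_block_root, expected_kzg_commitments, sidecar):
    # The sidecar must belong to the block being validated.
    assert slot == sidecar.beacon_block_slot
    assert beacon_block_root == sidecar.beacon_block_root
    blobs = sidecar.blobs
    assert len(expected_kzg_commitments) == len(blobs)
    # One aggregated proof covers every blob in the block, so a single
    # pairing check suffices; this is the expensive call the library provides.
    assert verify_aggregate_kzg_proof(blobs, expected_kzg_commitments,
                                      sidecar.kzg_aggregated_proof)
```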
Alex, I assume you're in the Discord — and is Telegram better for contacting you than Discord? Cool. Ah, sweet. Okay. And then next up: Danny, you had a document about sync coupling, which is something we've talked about for many weeks. Do you want to quickly recap? Yeah, I can do that. I was just thinking about it the other day and wanted to jot down some notes. Pretty much, there are two things: gossip, and historic sync. For gossip, we currently have the sidecar approach rather than coupling. Historic sync, I think, is still to be defined, or still being debated. And there are two things I think we want to minimize: the complexity of the change going into 4844, and then the potential complexity of the change going into full danksharding. I make an argument that the gossip approach with the sidecar does mimic some of the potential problems we're going to see in full danksharding, given the race condition between these two message types on gossip — because we will see that in full danksharding, where we're distributing rows and columns rather than full blocks, but you still have kind of the same thing. But I also argue that the signature approach we're using today doesn't actually mirror what will happen in full danksharding, unless builders are bonded, have a signature, and can potentially be slashed. And so, because of that, I do question — on the gossip side, I originally was arguing for decoupling, as they are today, but I do question the value of the decoupling now, given that I don't think it maps directly to what gossip looks like in full danksharding. That said, the decoupling does allow us the alternative push-versus-pull methods that we've been discussing today. So in that context, my argument — you know, it's more that I'm laying out the trade-offs — is that if we are going to be engineering some sort of push-first-with-pull-fallback, we certainly would want them decoupled. Additionally, with the historic sync, we have these blocks-by-range requests, where you can request one or many blocks by a range, or by a specific root. I think coupling here is bad, because it kind of messes up a relatively robust mechanism by putting in a bunch of additional conditional logic — especially because the pruning depth of blobs is going to be different from the pruning depth of beacon blocks. And so now you have all this stuff to handle: maybe there's not a root, maybe there's not a blob, maybe you don't want the blob — different kinds of cases. So I think it's easier to put it as an adjunct protocol with parallel methods, rather than coupling. And if we coupled today, we would add that complexity and then have to remove it in full danksharding. So on the historic sync requests, it's much clearer to me: decoupling does not add more complexity today, and coupling would add more complexity in the future. The coupling on gossip, I could potentially go either way on. If we weren't doing any sort of sophisticated push-pull — great, we probably should just have them coupled; I think it's much simpler, and decoupling doesn't buy us too much in the future. But if we do want the push-pull, we should keep them decoupled. Thank you for coming to my TED talk. Any thoughts on that? I guess I'm thinking it makes sense. But I also think that if we do couple either distribution or peer-to-peer, it has to be consistent.
Otherwise, you end up with a case where you gossip the block, and for some reason the peer doesn't observe the sidecar, and then the node has to make a request for it. Would you rather have the full coupled payload, or keep them separate? That's an argument — I call that, quote, historic; it's not quite historic retrieval, but that's also another argument for keeping them decoupled on historic retrieval: you're more likely to get a beacon block than this blob, and if you have one and not the other, you're going to want to make a direct request, and you don't necessarily want the thing you already have. So that's a blobs-by-root request, or a beacon-blocks-by-root request. But I don't necessarily think that if gossip is coupled, you wouldn't want to decouple the historic requests — decoupling the historic requests seems independent. Even if you receive these things in tandem on gossip, no problem: in the historic request you're saying specifically what you want, so it doesn't have the same issue of information being missing. I don't think I was very clear in that response. No, it makes sense — I'm just thinking it through. Yeah, yeah. Yeah, that makes sense. And just for clarity: for the next version of the DevNet we probably don't need to change what we're doing for sync, but for the one after that we probably would want to. Does that make sense? I apologize for my ignorance — do we have these signed blobs-by-range requests in the P2P spec right now? I don't think they're signed; I think they're just blobs. Okay. And I did put a note in there that there's probably no harm in making a signed variant, but I don't know if it's actually that valuable to have them signed for the historic case, because you don't necessarily even know the proposer index when you're getting these historic blobs — so you can't necessarily validate them independently. Whereas when you're at the head, within a certain slot range, you do know the proposer index, so you can pre-validate before you get the beacon block. I guess another thing I was thinking of the other day, which is interesting: right now the blobs are not chained, right? They don't have a parent field. For blocks today, when you're backwards-syncing, you can just get the child — if the child is valid, then you can ensure all the ancestors are valid. But blobs don't have this property, and I wonder if it's useful to add it. In the signed variant, or just in blobs in general? Just in general — where you can say: if the child is valid, then the parents must be valid. Because for now, you have to verify them one by one by one. Right. I don't know if you could actually shoehorn that into the commitment scheme. I see. I was thinking of hashing all the commitments and using that as the parent root or something, but that's probably a bad idea. Okay, I have one more meeting now — sorry, talk to you all soon. Yeah, thanks, Danny. Yeah, if some folks can stay on another five or so minutes, I think the last thing that would be important to cover is that there's a bunch of people who aren't part of client teams and want to contribute to this — and it's kind of a first to have this many folks wanting to contribute.
And the hard bit, I think, is finding useful tasks that are pretty well-defined, or that no one is already on, where they can have an impact. I guess I'll just open the floor here: does anyone have something they feel would be really important that nobody's looking at, where, if somebody else picked it up, it would make their life easier? Okay. So, in terms of implementation, there are the obvious candidates already: we have the Prysm and Geth prototypes, and then a Lighthouse prototype that was started during Berlin. In terms of tooling, there's a lot to build, so if you want to start smaller, I would recommend starting there. One of the things is more tooling to create blobs and send transactions, or integrating that into existing tooling like Foundry; and just having some kind of explorer to view the blobs being confirmed on the DevNet would be really useful. Yeah, an explorer to visualize the blobs — that would be great. And then, in terms of the implementations themselves: as I understand it, Prysm and Geth are obviously the most advanced. Lighthouse — if we can get a link to the prototype, I'll keep track of that. Then Nethermind — Alex said they're starting to look into it. And I believe Trang, who's on the call here, is going to start looking at an Erigon implementation as well. So I'll try to keep track of all of those. And I've put together a sort of checklist, like we had for EIP-1559: if you start working on an implementation, you can just open a PR and link it there. And then if you have issues on your implementation that you need help with, I think that's helpful, because people can go through those. The other bit that would still be really good is testing. We have a few consensus tests that I think Danny put together; I don't think we have anything on the execution layer yet, unless I've missed it. So if somebody is keen to look at basically the Hive or state tests and dive into that, that would be quite valuable. Anything else? And then we also talked about the spamming of a single node — booklearner, I believe you were talking about that. I think that would still be quite valuable: getting performance metrics on one node that's being spammed with blobs, and seeing if it stays in sync and whatnot. Yeah, I'm working on that, so I can continue working on it. If anyone else is interested, please reach out — I'd be happy to give you access to my node and such. Someone was saying something? Yeah, I was just going to say that something more generic that's generally always useful in these kinds of scenarios is writing summaries or comparisons of open issues. So if somebody was interested in contributing, I think it'd be cool to write an overview of the different ideas — maybe for sync, comparing the different approaches — and lay out the conversation points that have been made in calls and on the various threads we've been discussing. And, yeah, Danny has sort of done that for sync. The place where it might be helpful is the KZG libraries: just looking at what's there and what the trade-offs are. I don't know if there are any other areas like that. Yes, that's good.
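On the blob-creation tooling: the core trick any such tool needs is packing arbitrary bytes into valid field elements — each 32-byte element must encode an integer below the BLS modulus, so in practice you use 31 bytes of payload per element. A minimal sketch; the zero-top-byte, 31-bytes-per-element convention here is one common choice, not necessarily what the spec mandates:

```python
FIELD_ELEMENTS_PER_BLOB = 4096
USABLE_BYTES_PER_ELEMENT = 31  # leave the top byte zero so each value stays below the modulus

def encode_blob(data: bytes) -> bytes:
    """Pack arbitrary bytes into one blob: 4096 field elements of 32 bytes each."""
    max_bytes = FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES_PER_ELEMENT  # 126,976 bytes
    assert len(data) <= max_bytes, "data does not fit in a single blob"
    blob = bytearray()
    for i in range(FIELD_ELEMENTS_PER_BLOB):
        chunk = data[i * USABLE_BYTES_PER_ELEMENT:(i + 1) * USABLE_BYTES_PER_ELEMENT]
        blob += b"\x00" + chunk.ljust(USABLE_BYTES_PER_ELEMENT, b"\x00")
    return bytes(blob)
```

So each blob carries just under 127 KB of application data; an explorer would invert this packing to display blob contents.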
I just wanted to add that, as we get more client implementations and alternative tooling, it would help to have test vectors for what we have in the spec, so that everyone is on the same page about what certain outputs should look like. For example — who was it? — Marius brought up a bug in our implementation of the SSZ root for the newly updated beacon block. There was a mismatch between theirs and ours, and it turned out there was a bug in our fork of Prysm. With a test vector, it would have been easy to cross-check that what we were doing was correct. Got it. And what's the right format for those test vectors — just JSON tests? Yeah, I think JSON works fine. Yeah, I know Marius had the ones for M4. I can't find the link now, but I'll try to find the link to those and share them as an example of what it looked like for the very early merge ones. Okay. And then Proto added some thoughts on the explorer — I'll copy all of your comments, Proto, into the notes for this call. Anything else in terms of tooling? Mofi, you have this blob-utils repo; is there anything there that you've been needing to do but just never got the chance? Well, sort of related: Proto has this PR in Prysm — right now the only way to download blobs is via the peer-to-peer network, but ideally this should be doable through the beacon API, even if it's an internal API, so you can just talk to your beacon node directly and download the blobs it already has. I would like that PR merged; I haven't had the time to take a look at it. It would be helpful if someone could — I can link it — polish it up. I can rebase and polish it; I just need a target to test against, so it needs to be clear which branch to use and which not. Okay, I'll sync with you offline then, Proto. And I think the one other bit that would be valuable: Mofi, we have your DevNet guide for DevNet one. If someone wants to polish that — if somebody's going through it and stuff isn't obvious — extending it over time, making it easier and easier for people to join the DevNet, and not have to run a bunch of custom commands — or, if they do, knowing what the failure modes are — is really valuable. So, yeah, I'll link that as well. Just documenting — if you're playing around with this stuff and finding some edge cases or issues, documenting what you did to make it work, so that it's slightly easier for the next person, is really valuable. Yep, totally. A couple of people have had issues connecting to the DevNet, and a troubleshooting section — an easy way to figure out your problem — would be really helpful. Yeah, cool. I think that was worth having as a conversation. Anything else people think we need help with? Okay, if not — just as we're closing, I've put together the sort of checklist like we had for 1559. I mentioned it before, but I've just included it in the chat here. If you are working on a client implementation, or start working on test vectors or whatnot, please add your stuff there.
That way, other people will be able to see it. I'll try to add all of the stuff we discussed and mentioned on this call to it today, so it's pretty up to date. I think that'll just be an easy place where we can track all the different things that are going on. And then, with less than a minute to go: Trent, where's the best place for people who want updates on the whole KZG ceremony? There's a timeline document, but generally the best place is the repo — the ethereum/kzg-ceremony repo has a bunch of resources and a link to the timeline document. Cool. The TL;DR is that it will launch post-Devcon and run for two months, and then there's a special contribution period. We have grants available and a whole bunch of stuff. If you want to make a new implementation, or create a unique randomness generation process, please reach out. Sounds good. Anything else before we close? Okay. Yeah, thanks everyone for joining. Talk to you all on Discord. Bye. Bye. Thank you. Bye. Thank you.