Okay, we are live. Welcome everyone to the second 4844 breakout. I guess the goal for this call is just to get everyone on the same page about the progress on the implementation and on the KZG ceremony, then take some time to chat about what we see as the biggest blockers or issues that we need to address on the EIP, and also try to list out what types of skills would be helpful for people contributing to the EIP. Yesterday there were a bunch of people on Twitter talking about how important this is, and a few people have already reached out, so if there's a way we can better articulate what's needed and what's helpful, I think it'll help filter the different people who would like to help out. Yeah, that should be it. I guess to kick it off: Mofi, and I don't know if Michael is on the call, I don't see him, but Mofi, do you want to start and give us an update on the implementation and where things are at? Yeah, you're muted. Yeah, it's a bit quiet, but we can hear you. All right, let me try this. Oh, this is actually perfect; whatever you just did worked. Is it better now? Yeah, much better. Okay, I'll take it.

All right, so we currently have most of the spec now, other than a couple of open issues that we need to resolve. Right now we're working on optimizing the implementation, making it as fast as possible, and the point of contention there is the KZG blob verification. There's an open issue where we want to ensure that verifying blobs is not a DoS vector, so there's been some work put into the spec and the implementation to speed that up. That's mostly where we are right now. Also, as a quick prelude to an announcement, we are working on a devnet that will be publicly available pretty soon, so we're looking forward to having external communities join the network and test things out, because that's going to be really needed. Nice. Anyone have questions, comments, thoughts on that? I guess, do you have a link to the repo, or repos, that you and Michael are working off of to share here with folks? Yeah, sure. Awesome.

By the way, Mofi, you asked some questions about the verification code and why it's so slow and all that stuff. I tried to answer last week; I hope my answers made sense, but if they didn't, just ask again, or we can do a call, the two of us, to figure out in more detail how to optimize the code. Thanks, George. I did skim through them, but I haven't looked through them in detail; I should have more time next week to take a closer look. Sweet. Are the responses in a public doc anyone can look at? I think it was in the sharded data chat. Oh, okay, thanks.

Maybe it's worth briefly saying, because I also put it in chat just now: because we have this 1559-like mechanism for blobs as well, we have a reasonably good understanding of the frequency with which these transactions will come in, because the mempool can be kept very small for them as well. So you'd only expect to see one legitimate blob transaction, legitimate meaning that the commitment actually matches the blob sent with it, coming in every few seconds.
So the verification of the legitimate ones is not a problem at all performance-wise. It's really about handling people spamming you with transactions where the blobs just don't match the commitments, because then you can't even charge them for it; it's like an invalid signature. So it's really mostly about peer scoring and making sure that you just don't allow one peer to send you multiple of those. Yeah, that's at the core of the DoS issue. Got it. And as I understand it, though, there's just no peer scoring on the execution layer, right? Yeah, there's no easy way. And you need to be able to verify them quickly because there's no granularity in the scoring either: you either stay with the peer or you disconnect them. So you could disconnect a peer, but basically only after they've DoSed you; ideally you haven't gone down because of that. Yeah, ideally.

Okay, so just related to that: we do have peer scoring in the consensus layer, but there's a weird issue where you sort of have to defer verification of blob KZGs in consensus whenever the beacon blocks are not available. And in that case, if you are deferring blob verification, it's much harder to penalize peers if they do send invalid blobs. You'd have to keep track of which peers are associated with a given blob, and I imagine that complicates implementation for various consensus clients; at least that's been my experience implementing this in Prysm. So, I mean, if the execution layer is not synced, like you're in an optimistic sync mode or something, which header did you mean? I'm referring to the blob sidecar in the consensus client. As that's being gossiped, it is possible that you receive a sidecar associated with a beacon block that hasn't been observed yet. In that case, you want to defer processing of that blob sidecar rather than simply rejecting it, because you might be rejecting a sidecar that is actually valid. So if you do defer that processing, then you need to keep track of which peers sent that sidecar in order to penalize them later. That's just one complexity with the implementation. Yeah, that makes sense. So obviously we want to make sure that verifying blobs is ideally just not a DoS vector, because it's very impractical on the EL and somewhat impractical on the CL to deal with peers based on that. Well, we do do this on the CL: if you get an attestation ahead of time, it's the same problem. So it's not impossible, it's just probably more complicated than an MVP. Got it, that's good to know.

And I see you have a comment about the CL sync and the sidecars. Does it make sense to discuss this now? Yeah, I guess so, actually, for the implementation stuff. Do you want to take a minute, Ansgar, and share your thoughts on that? Right, so I think this is basically just a question that a couple of people had when we were discussing this in Paris. So basically, I think for now the plan is, as I was saying just now as well, to have this sidecar architecture where blobs are more or less gossiped independently from the beacon blocks between CL clients.
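As a rough illustration of the deferral bookkeeping Terence describes above, here is a minimal sketch; all names are purely illustrative and not taken from any actual client.

```python
from collections import defaultdict

class PendingSidecarPool:
    """Holds blob sidecars whose beacon block hasn't been seen yet,
    remembering which peer sent each one so it can be penalized later."""

    def __init__(self):
        # beacon_block_root -> list of (peer_id, sidecar)
        self._pending = defaultdict(list)

    def on_sidecar(self, peer_id, sidecar, have_block, verify):
        if not have_block(sidecar.beacon_block_root):
            # Block not observed yet: defer instead of rejecting,
            # since the sidecar may turn out to be valid.
            self._pending[sidecar.beacon_block_root].append((peer_id, sidecar))
            return "deferred"
        return "accept" if verify(sidecar) else "reject"

    def on_block(self, beacon_block_root, verify, downscore_peer):
        # Once the block arrives, re-process any deferred sidecars and
        # penalize the peers that sent invalid ones.
        for peer_id, sidecar in self._pending.pop(beacon_block_root, []):
            if not verify(sidecar):
                downscore_peer(peer_id)
```

The extra state in `_pending` is exactly the bookkeeping burden being discussed: it only exists because a sidecar can arrive before its block.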
And that can lead to all these situations where sometimes you get a blob and you haven't actually observed the beacon block yet, or the other way around. And some concern, which I think I only heard second-hand, I think Proto was saying that some CL client teams, I'm not sure exactly which, basically raised the concern that this might introduce complexity. So feel free to add to that. So the conversation we were having is whether or not it's actually worth introducing this extra complexity now. Of course, the reason people did it in the first place, or came up with this question in the first place, is that it's more cleanly forward compatible with full sharding, because in the future we'll have to have blobs and blocks separated anyway, since clients will no longer download all the blobs. So it's cleaner to already have that separation today. But it does front-load some of the extra complexity, so if we want to really follow the strict minimum-complexity approach for 4844, there is a case to be made to just return to something where you bundle the blobs and the blocks after all, so that whenever you receive a block it comes with the blobs, and the other way around, and there's no extra complexity around having one but not the other, things like that.

I think that's fair. The separation is really for sharding; short term you may not need it, and I think it's up to implementers to make the right call. Mofi, what do you think? Yeah, I think that bundling them would simplify the implementation as it stands. One concern I do have: the advantage of keeping them separate is that it makes it easy and quick to drop invalid sidecars before fully processing them. What I mean by this is, if you observe a beacon block that isn't valid and you later receive the associated sidecar, you don't have to do expensive validation; you can just drop it immediately. If we start bundling things, there is a network cost to transmitting the whole bundle, so it's shifting the cost: there's a cost involved in always transmitting the entire big block, and if it's invalid, you've already incurred the cost of momentarily storing that beacon block, which now includes the sidecar. I guess this issue can be solved with appropriate peer scoring, and maybe it's a non-issue, but that's basically my only concern with doing this.

And Mofi, the current implementation does already separate the sidecars from the regular blocks, correct? Yeah. Okay, so maybe we should not change it until this first devnet is done, and then take more time to consider whether or not we should merge the two things. I don't see a short-term gain in merging them. Yeah, I would tend to agree with that, and I feel like once we have a devnet running and these other spec issues resolved, we can also bring this up on the CL calls and in the meantime get CL devs to look into it, assuming they have time, which is a very generous assumption.
But yeah, if the current version works right now, it's not worth refactoring the entire sync; it's just worth noting that there might be a simpler approach and discussing it with the CL teams. Does that make sense to people? Yeah, sounds good. Any other thoughts or comments on the implementations generally, or on sync?

One thing to add: right now we have the Prysm prototype, which is forked by Mofi in a separate repo, and then there is the geth prototype with a fork from Michael. So we have these two forks of consensus and execution clients that may have some distance, in terms of git diffs, from the latest merge work. If people from these clients are listening, I'd like to hear feedback about incorporating more of the latest merge work and whether or not it's the right time to start rebasing. Okay, yeah. I don't think there's anyone from geth here, and Terence from Prysm told me he could probably join for the second half of the call, so when he joins we can ask him about that directly. Anything else on implementations?

I don't know if we'll get to this later, but there's the issue of the KZG libraries we are using for the implementation. Sorry, which KZG libraries are you using? For the blob verification we've basically just been using a library, and now we also need some of that functionality in Prysm, on the consensus side, so we have to decide what's the best approach to reusing the same functionality across both implementations. On that front, we are in contact with the Supranational people. I would say there is progress, but the progress is kind of slow. Over a month ago we sent them the requirements for what is needed, and they got back to us this week and told us they're going to send back some sort of report on what they gathered from what we sent, and that we should do a call next week to figure out next steps. So things are moving, and things will probably happen next week, and I'll report back with what I learn. Another thing I want to raise on this topic is that it might be a good idea to have some of the people more involved in the implementations of these things join such future calls, to give a better idea of what is needed in terms of the interface. I think I know what is needed, but someone more involved with the actual code can probably give more insight. So I'll let you know next week what happens, but I might ask for some volunteers to join future calls with them to build a better API, basically. Cool.

Yeah, and I know Marius from geth had mentioned he had some thoughts on that, so he's probably a good person to reach out to for those calls, beyond obviously you, Mofi, and Michael here as well; on the last one of these calls he seemed to have some pretty strong opinions about it. Yeah. Generally people feel like blst is basically the best option, to adapt blst and make it better, because I believe that's what all the CL teams use already, and there's no other option really on the table right now. The things we need are a very thin wrapper around functionality that blst already has; I mean, around functionality that any BLS library will implement already.
So, since we're all using blst, it makes a lot of sense just to put those in. Got it. Okay, anything else on the implementation? Any other questions? Sure. Okay.

I guess next up, Trent, I see you're here; do you want to give a quick update on the BLS side of things? Sorry, the KZG side of things. Yeah, I was going to say, I can barely cover the KZG side, I definitely can't cover BLS. But yeah, since we started this we've been doing much the same stuff. We have an audit coming up for the ceremony, not the implementation but the design of the ceremony, with SECBIT, in a few weeks, and we're preparing for that. I just shared a link to a bunch of resources which links one of the implementations and, specifically, the calls, if anybody wants to catch up or is curious how far along we are. That's the main thing we're preparing for: the audit. And we have the next call next week on Thursday at 11:30. There's also a timeline doc in there if anybody's curious about when we plan to start this, hopefully around Devcon. We'll have a period of closed contributions before that to test it, then at Devcon hopefully we'll have some live contributions from the audience, and then it'll run for a few months. We also have some people starting to work on a couple of test suites, Jeff Lampert's been working on that, to make sure all this stuff works, and we've started working on the interface that users will actually interact with in the browser. So that should be everything; any general questions that I can maybe answer?

Just a quick question, I'm curious, what is the target size of the ceremony, the participants? Yeah, hoping for 10,000, which, depending on who you ask, would make it the largest trusted setup ceremony. Nice. Any other questions on the ceremony? Okay.

Hey, Terence has joined. Can you hear us, Terence, and do you have a mic? Yeah, sorry, I had another meeting, but I am here, so feel free to go ahead. Yeah, there are two things we discussed that we're curious to get your input on. The first is around the CL sync. Earlier we were having a conversation that we've decoupled blob sync from block sync to have it be forward compatible with the full sharding approach, and that might introduce more complexity on the CL side. We were thinking there might be value in just re-coupling blobs and blocks at the syncing level for the first version of 4844 and then eventually making the sync more decoupled. So, generally, do you have any thoughts on that: how much of a simplification would it be to couple them now, is it valuable to do, or should we try to keep as much of the full sharding sync design as possible? Right. We definitely had this conversation at EthCC, which I remember, and I am in favor of the coupling approach. I'm not too worried about trying to be the same as full sharding from day zero; with real danksharding we need to have a hard fork anyway, so we can change it then, and it's not that big of a change. It would be nice because I think we can definitely ship 4844 slightly faster if we just couple them together: it's less of an engineering challenge that way, it's less implementation, and it's also better UX. So I am definitely in favor of the coupling. Awesome.
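Schematically, the two options being weighed here look something like the following; these are purely illustrative dataclasses, not the actual consensus-spec containers.

```python
from dataclasses import dataclass
from typing import List

Blob = bytes              # opaque blob payload (illustrative)
KZGCommitment = bytes     # 48-byte commitment (illustrative)

@dataclass
class BlobsSidecar:
    """Decoupled option: blobs travel on their own gossip topic, tied to a
    beacon block only by its root, so one can arrive without the other."""
    beacon_block_root: bytes
    blobs: List[Blob]

@dataclass
class BeaconBlockAndBlobs:
    """Coupled option: the block and its blobs are always bundled,
    so a client never has to handle one without the other."""
    signed_beacon_block: bytes        # placeholder for the real signed block type
    blobs: List[Blob]
    kzg_commitments: List[KZGCommitment]
```

The trade-off discussed above is visible in the shapes: the coupled container removes the "blob without block" case entirely, at the cost of always transmitting and momentarily storing the larger bundle.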
Um, and the second question we had for you: the current prototypes are starting to diverge from master with the merge work, in both Prysm and geth. When do you think is the right time to rebase? Yeah, Proto asked me to help, and I think I should be free in a few days; I'm just trying to finish some last-minute merge-related issues. So I should be free in a few days and then I'm more than happy to help; just send over the branch and I can rebase it for you, it shouldn't take me more than a few hours. Oh, okay, nice. Sweet, I think those were the two things for you, Terence.

And I guess the other thing I want to make sure we chat about is this: we have folks now, obviously on the Optimism side and at Coinbase, working on this, and by this I mean the implementations; there's a bunch of other work as well. This is a pretty big EIP, and there's obviously a lot of work to do to get it implemented and tested in clients, and it seems like there's some interest from the community to help. So I'm curious, from the Coinbase and Optimism side, what skill sets or tasks do you think would be most helpful to have people help out with, ones that are maybe a bit independent from the work you're doing or that can be parallelized, if there are engineers who have some time and relevant experience?

I guess I'll start. One thing that's really useful, once we have the devnet running, is just having users on the network testing all sorts of scenarios: sending blobs, downloading blobs, ensuring that the current gas calculation sort of works in a dev environment. We'd just like to have more participants on the EIP-4844 testnet; that would be super useful. And people should take a look at the code in the various repos that I posted in the Zoom chat, maybe we can make these available somewhere like the community call agenda, and see if we can improve test coverage, particularly in Prysm, because a lot of the testing we're doing here is based on another repo that basically runs interop between geth and Prysm for testing, but it would be nicer to have more test coverage in Prysm itself, to target specific scenarios and ensure that the EIP is as robust as possible. Got it.

And in terms of actually implementing things: we have the Coinbase folks working on the geth implementation and you working on the Prysm side, and I see there are a bunch of client devs on the call. Oh, sorry, yeah. Do you think it makes sense to have other implementations sooner rather than later, or should our focus be to get these two as far as possible and then add more? I think it makes sense to get these as far as possible first, because we are still making changes to the spec, particularly the gas price update rule; we're probably going to have a discussion on that later in this call.
Also, if we do decide to go ahead with bundling the beacon blocks and sidecars, that's another change that other implementers would have to make, so it just makes sense to consolidate all the changes, and once we get to a point where the spec is reasonably stable, then we can start introducing more implementations. Okay, that makes sense. So basically the two main things right now are testing on the devnet as soon as that's out, and then seeing if there's test coverage that can be improved in the current Prysm and geth implementations; those would be the two most useful things, right? Yeah, but that said, if someone came along who was an expert in a particular client we're not working on and wanted to get started, we certainly wouldn't stop them. Yeah, obviously. And would it be helpful if someone comes along who is an expert in Prysm or geth; is it also helpful to have more people working on those specific implementations, or is it just too many people on the same parts of the code? No, I think that would also be helpful. There are two or three major items I foresee in the next couple of weeks where two or three people can work independently without stepping on each other's toes, so yeah, I think that would be helpful, having experienced Prysm or geth devs contributing to the implementations. Great.

So if you're an experienced geth or Prysm dev listening, you can reach out to me, or, Liam, you also posted about this yesterday, I'll put you on the spot here. If you're interested in contributing and you're not sure where to start, we linked a bunch of stuff in the notes for this call, and the very first place to look is probably either the devnet or the specs, and then diving deeper from there. Does that make sense? Sweet.

Okay, so the last thing I wanted to cover today, and I think we should be just about on time, is basically our list of issues from last time; we touched on some of these already, but not all. On the KZG library front, we're still working on improving things, and we've discussed sync a fair bit. I guess the last one is the fee market. Just to give some context: right now the current devnet implementations use the naive fee market with a hard-coded gas price per blob. That is not going to work, and there was a proposal in the EIP for a more complex one that basically uses EIP-1559-style pricing for the fee market. On the last call we discussed moving this from a special contract in the state to the block header. So I was curious to hear from people: a) does this general fee market make sense, do we think it's good enough to move forward, and b) does everyone agree that putting this in the block header is the way to go? Yeah, Ansgar.

Yeah, so with regards to the header, I think basically everyone agreed that it might just be the more practical way to go for now; the only person disagreeing was Vitalik, incidentally. And I think he's not on the call. Yeah, not on the call, so he forfeits his voice here. On the mechanism itself:
Generally, the mechanism proposed by the EIP more or less works. The only reason it's been a somewhat open research topic for a while now is that there are things we would like to get that are not strictly necessary for the mechanism but are more nice-to-haves. For one, while this works really well for something like blobs, where demand is relatively slow-moving, it wouldn't be perfectly generalizable. Taking a step back: this would be the first time we introduce a two-dimensional pricing mechanism, one dimension for blobs and one dimension for normal execution. Rollup projects have been saying for a while now that they would really like a standard for doing two-dimensional pricing, because they have to do that anyway; they have to price two kinds of gas inside one transaction, and for now all rollups basically hand-roll their own mechanism for it. We would like the 4844 mechanism to be generalizable, and the current version isn't ideally so, just because in that context the two dimensions would fluctuate much more, and because they share the same gas, that might become a problem. Again, not a problem for blobs, just a problem for generalizing the mechanism. Similarly, we would also ideally want this to be maximally forward compatible with full multi-dimensional pricing further down the road. But on both of these counts it's a somewhat similar situation to the earlier discussion about bundling blobs and blocks on the CL side, where we might just want to be practical and move forward with the minimum working version for now, and then iterate on it later. So I think there's still some effort worth spending on compatibility with layer twos, just because they would really like that, and if we come up with a slightly better design within the next month or so that includes it, all the better. But for now we can work on the basis that we have a mechanism that is good enough, basically. Sorry, that was a bit long, but I hope it made sense. That's quite useful.

And yeah, I was going to bring you in, because you had a bunch of comments on lightclient's PR. Right, I was just about to mention that lightclient does have a PR open in the EIPs repository to update the old mechanism to a new one that uses a header field instead of state; otherwise it does not change anything about the previously proposed fee update mechanism. And I want to note here that this is not exactly the same as EIP-1559; the adjustment works a little differently. I think there are some subtle issues with that mechanism, and I'm not entirely sure what the right direction is to correct them. With this blob pricing problem there is a balance, or an incentive choice, to make: whether we want to prefer one large burst of blob data or repeated smaller bursts. If we go over the target, the gas price, or the fee, rises, and this becomes incrementally more and more costly. So repeated small bursts right now are more expensive than grouping all the blobs together, even though the total throughput at the end of the example is the same. And so I have this question:
Are we more concerned about bandwidth on the network, and the stability of that bandwidth, or are we more concerned about the processing? Because for processing, I think it might actually be favorable to create this incentive for large bursts of blobs rather than a more stable flow of blobs. I don't think we care much about either of those; I thought what we care about is long-term storage costs. Isn't that the dominant factor here by a pretty large margin? We have pruning, so long term is really just a month's worth of data. That is the other issue with the current design of the fees. I'll give an example. If you exceed the target, then the price goes up. Then, for say a month or whatever the retention period is for blobs, if you perfectly match the target, you will eventually prune the excess, but the gas price will still be sticky and will still be high. So even after pruning, after correcting and stabilizing it for a long period of time, the gas price is still high due to the old excess. So I think the gas price update should perhaps consider pruning, and we should consider what kind of characteristic we want in the blob throughput, whether we want repeated small additions or infrequent large additions. What are the concerns?

Sorry, to briefly mention: I think one of the concerns on the pruning side was just that it might not be ideal to bake specific assumptions about retention periods into the pricing mechanism itself, because retention is basically just a client parameter. Of course we'd like to give some defaults and recommendations, but if you want to run a CL and just drop blobs after a week, you can do that, or if you want to keep them for a year, you can do that. The moment we set some sort of finite memory in the pricing mechanism, we start to enshrine that. Other than that, I think it's perfectly reasonable, and also not too complicated, to do.

Sorry, just gonna say: I agree we probably shouldn't enshrine some specific value, but we should price in the fact that they are temporary, to some extent, right? And that makes sense. You don't want to enshrine a week versus a month, but you also don't want the mechanism to even implicitly assume they're going to be stored for a year, because nudging clients to not store them for a year is what we want; but I agree you don't want a hard-coded cutoff of some specific size or duration. One approach could be to bias the pricing towards more recent throughput, so that older throughput is dampened. I think there is some balance here, because otherwise we basically end up pricing blobs based on very, very old throughput data, which might already be pruned, and that just makes pricing less accurate in my opinion. I think we can do better than that. That's exactly the same situation as 1559; however, I believe the counter-argument there is that this kind of latent memory of historic pricing is completely lost in the noise in the real world. So in your theoretical scenario you had perfectly even throughput except for that one little spike, and that one spike causes the mechanism to remember the spike forever.
But in the real world you are never going to get that perfect, and as soon as you have any kind of variance, that tiny spike gets lost in the noise right away. I'd be very surprised to see that kind of memory matter at all in even a worst-case real-world situation.

This is maybe a dumb question, but can you walk us through how the repricing actually occurs, and how it differs from 1559? 1559 is: you look at the gas used in each block and you go up or down by up to roughly 12.5%. I'll give my best interpretation; I do think there is a small inconsistency in the explanation of the gas pricing in the EIP currently, so I might not be 100% correct about this. My interpretation is that we track the number of blobs that have been confirmed since the start of the EIP, and we track, or can compute, the target: the expected number of blobs that we would want. We then compare those, so we know whether we are under or over the target. If we're over the target, we adjust the price upwards. If we're under the target, I think the current EIP makes blobs very, very cheap; I'm not exactly sure the EIP is correct in that case. But let's take the case where we are over the target: there we use this exponential thing, where the more we're over the target, the more the blobs cost. And I think there's a cap where, if the total goes below the target, it takes the maximum of the target versus where we currently are, so the accumulator never actually goes below the target. That was my reading.

I think that's basically right. The difference between the pricing proposed for 4844 and 1559 is that 1559 always does relative adjustments: it doesn't care about the absolute usage, it basically just says, the block was under-full, go down; the block was over-full, go up. It only ever looks at the one last block. 4844 does the exact opposite: it has this infinite time horizon where it says, I want to always have half of the blob space filled, and I keep track of that percentage accumulating over all of history. As long as the percentage is under 50%, blobs are basically free, and the moment we are over 50%, blobs cost something, and the price keeps going up the further we are above 50%, until at some point we get pushed back down to 50%, or there could be some equilibrium where we're at 51% or something. Now, very briefly, on why it doesn't really matter that this has long-term memory, and I think that's also what Micah was alluding to: because of this mechanism, we will always end up in a scenario where we are close to 50%. We could be below 50% in the very early days when no one uses blobs, but besides that we'll always be in the 50 to 55% range or something like that. So the fact that blobs might have been more in demand in the past doesn't really matter, because it just means this value will be somewhere between 50 and 55%, so the worst case is that demand is now only 50 or 51% and it used to be 55%.
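A rough sketch of that accumulator-style rule as just described; the constants and the plain floating-point exponential here are purely illustrative, not the EIP's actual values or its integer-arithmetic approximation.

```python
import math

TARGET_BLOBS_PER_BLOCK = 8      # illustrative target, not the EIP's constant
MIN_BLOB_GASPRICE = 1           # illustrative floor price
PRICE_UPDATE_FRACTION = 100     # illustrative smoothing factor

def blob_gasprice(total_blobs_since_fork: int, blocks_since_fork: int) -> float:
    """Price blobs from the running total included since the fork,
    compared against a running target of TARGET_BLOBS_PER_BLOCK per block."""
    target = TARGET_BLOBS_PER_BLOCK * blocks_since_fork
    # The excess is floored at zero, so long stretches under target do not
    # build up a "credit"; blobs simply stay at the minimum price.
    excess = max(0, total_blobs_since_fork - target)
    # The price grows exponentially in the excess and only decays as the
    # accumulated total falls back toward the running target.
    return MIN_BLOB_GASPRICE * math.exp(excess / PRICE_UPDATE_FRACTION)
```

The long memory discussed above lives in `excess`: a past period of heavy use keeps the price elevated until enough under-target blocks have passed to work it off.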
So it's like a 4% difference or something, which really doesn't make a big difference, and it washes out over time. I agree that maybe it's still preferable to make that more explicit, but there can't be a scenario in which the historic accumulator is at, say, 90% or something, because that's exactly the thing this kind of targeting was supposed to protect against. Is that right? Does that make sense? In 1559, though, as we are adjusting downwards, there's more precision in adjusting downwards, whereas in EIP-4844, as soon as we're under the target, even by a little bit, things start to become, I think, a little bit chaotic, as the pricing is not accurate anymore. Yeah, that seems weird, because you could imagine there are no blobs for a week; that doesn't mean we can then process infinite blobs, or a ton of blobs, the week after, right? But it kind of does. The idea is that because we have a maximum that's only 2x the average anyway, we would be okay with that being sustained. Oh, okay, yeah. So the assumption we're making here is that we're okay with sustained full blobs for long periods of time, which is not an assumption 1559 makes. That is right, yes.

Let me give a counter-argument against that, though. In your example, where there's a week of no data and then a week of double the amount of data, then on net there's no excess, but there is a bias towards recent data. So assuming there's pruning, we might end up holding a lot more data due to this imbalance, right? Because the pruning window was a week, we're now holding twice as much data as we would with normal pruning and normal throughput. Right. I think the assessment was just that this inefficiency exists. In the long run we don't expect this to really be the case much, because you'd never be below 50% for a sustained period of time; at least under our assumption there would always be some demand for blobs, so it would get used before we dipped below 50%. But in the early days it could definitely happen. So we have this slight inefficiency, where we basically have to be able to handle storing 2x the average amount for, say, a month or so, because there could have been an empty month and then a double month, and so we have to store 2x. In exchange we gain simplicity in the algorithm. So this is a trade-off: we could make the algorithm more complex and more sensitive, and then we wouldn't have this 2x storage overhead in the worst case. Yeah, that's a choice. I think we are also starting to touch on the other problem here, this choice between prioritizing many smaller amounts of blobs versus a few larger bursts of blobs. If we have clarity about this part, about what kind of maximum sustained throughput we want to favor, then maybe we also solve the other problem.

Yeah, what was the reason behind choosing this mechanism instead of the 1559 mechanism? What is the perceived advantage? They seem like they'd result in basically the same thing, but this one requires an extra header field. Wait, how does it require an extra one? Because you have to keep track of how many blobs there have been since genesis. I'll give some details about the header fields and how it would look if we emulated 1559.
So 1559 uses the parent information, the parent block's base fee, and then has this formula to update towards the new base fee, validating that the base fee update is correct. To do so it uses the total amount of gas that was used, which is a second header field that is already available for regular gas. So with two header fields from the parent block you can compute the new base fee of the next block. In this EIP, we don't have information that captures how many blobs were included in the previous block without having to make the full block available; the header data itself is not enough to update a base fee in the same way that EIP-1559 does. So instead, this mechanism tracks just that information, the number of blobs that have been included. And instead of introducing a base fee that needs to be updated, the price is computed directly from the total amount of data that has been included: by keeping track not just of the last parent block but of the total of all included blobs, and then comparing it against the target based on the block height and the number of blobs that should go into each block. So the short version is that exactly emulating 1559 would require the transactions from the parent block, the equivalent of which here would be the blobs, and this mechanism does not require that. If we were going to exactly emulate EIP-1559, we would need to add two fields to the header: one to count the number of blobs and one to represent the base fee for the blobs. Isn't the number of blobs already in there now, effectively? Can you repeat that? Isn't the number of blobs already effectively in the block header; don't we keep track of that, not the total number, but the number of blobs in the previous block? So blobs right now are referenced by the blob transactions, and the blob transactions are just part of the transaction list, so we only really have a hash of the transaction list, which doesn't tell you how many blobs there are. Is this formula written down in the EIP at the moment, the one based on the total number of blobs for all time? Yes, that is in the EIP, and in Matt's PR you can find the header-based version of it, as opposed to reading it from state; I can link it in the chat. The gas price update rule in the EIP, is that correct? It's in there, yeah.

Yeah, just because we're basically hitting time here: I feel like this idea around short bursts versus long-term history is something that we should probably get client team feedback on, especially on the CL side, along with the sync design. That feels like the main thing here. The other part, Ansgar, you mentioned, is around having L2s able to use this as a pricing mechanism as well; it feels to me like once we have the preference from the CL teams, that's maybe the second thing to look at. And basically those are the two most important things to figure out for the fee market. Does that make sense? Right, although just to clarify, for the L2s it's not mainly about this question of a short-term versus long-term stabilization mechanism; I guess that would also be relevant, and they would favor, of course, the short-term stabilization mechanism, but for the L2s it's much more about how the two-dimensional pricing actually works.
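For contrast with the accumulator rule sketched earlier, here is roughly what the exact-1559-style emulation just described could look like, using the two extra header fields (the parent block's blob count and blob base fee). The update mirrors the standard EIP-1559 formula with its max-change denominator of 8, a maximum move of about 12.5% per block; the constants and names are illustrative.

```python
TARGET_BLOBS_PER_BLOCK = 8             # illustrative target
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8    # same shape as 1559's 12.5% max move

def next_blob_base_fee(parent_blob_base_fee: int, parent_blob_count: int) -> int:
    """1559-style update: look only at the parent block and move the blob base
    fee up or down in proportion to how far it was from the target."""
    if parent_blob_count == TARGET_BLOBS_PER_BLOCK:
        return parent_blob_base_fee
    delta = abs(parent_blob_count - TARGET_BLOBS_PER_BLOCK)
    change = (parent_blob_base_fee * delta
              // TARGET_BLOBS_PER_BLOCK
              // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if parent_blob_count > TARGET_BLOBS_PER_BLOCK:
        return parent_blob_base_fee + max(change, 1)
    return max(parent_blob_base_fee - change, 0)
```

Both inputs would have to come from the parent header, which is exactly the extra information the current single-accumulator design avoids adding.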
So the way the EIP works right now, it basically just translates the variable price into a variable amount of gas consumption, but then the gas within the transaction is accounted for as normal. That has some disadvantages that aren't really relevant for 4844, but they would be more relevant for rollups. So if we wanted to make this more rollup-compatible, that might mean we would have to slightly change the way the accounting works as well, not just this design choice. Yeah. I do feel like... yeah, go ahead. So, am I correct that this is not adjusting the gas price, it's adjusting the gas cost, the amount of gas that's used per blob? Yeah. Is that what you just said, Ansgar? Yes. Oh, I see. Yeah, I'm not a fan of that, but I'm running out of time, so I won't complain too much right now. Yeah, okay, that makes sense. Indeed, if you think of it as these interacting constraints, I just want to make sure that what we present as the trade-off space for L2s is what the CL teams want to optimize for, because it's kind of crucial that the CL teams are happy with this if we want it implemented on L1. And then, beyond that, getting the blst additions, which would be really helpful, launching the devnet, and having people look into that. And then finally, does it make sense to already schedule another one of these calls, or would people prefer to do this async? Oh, Karen.

Yeah, I wonder, sorry, I can't stay: has there been any thought about having some sort of meta spec? Just for me, looking at all the specs, it's hard to know which one is the version we're aiming for, so something like that would be nice. I think Proto has one, but I'm not sure how up to date it is. Oh, changed 30 minutes ago, so it must be current. I'm adding links as they pop up to keep track of everything, but we do not have a versioning scheme for the EIP, so all these different resources are at varying stages of progress. And we'll be discussing the executable spec for the execution layer on AllCoreDevs next week, if that's the type of stuff people are interested in. Sorry, George. No, I was just going to say that this split between the two specs right now is an actual issue, because with Hsiao-Wei we did work to make the consensus specs for 4844 executable, which brought a bunch of edits, and right now the two specs are pretty desynchronized in terms of the KZG stuff. I've been waiting to make an EIP PR to bring them in sync, but I'm not sure when to do that. That was another topic I wanted to raise on this call, but maybe we can do it on the next one: what's the best way to keep the two in sync? Yeah. Is the reason for not updating the EIP regularly just that it's too much hassle, so you wait until things are hammered out and then update the EIP, or is there some other reason the EIP is lagging? Yeah, that's the reason. It's basically code duplication: to change the second copy of the code I need to go through the whole PR process, so I was waiting to batch a bunch of changes before doing so. But this is all related to the executable spec thing, so maybe after the ACD call we can have a more productive discussion about it.
I think this is one of the best examples of why our process is broken. Anyway, I know we're already over time, but I think if you want to come, or Proto as well, onto AllCoreDevs next week to highlight that, it would be good, because I don't think this is the last time we'll have a feature that touches both layers. Yep. Yeah, that'd be really helpful. I guess, do people want to set up the next call right now, or do we want to do that async outside of this? Looking roughly at the next couple of weeks, the time I would propose is Wednesday, August 17 at 14:00 UTC. If everyone here is happy with that, we can just put that in now; otherwise we can chat about it on the Discord. Any objections to the 17th, 14:00 UTC? Okay, no objections, so I will see you all then. And let me share the notes in the chat here; I'll post them in the GitHub agenda as well. Thanks everyone, this was really good. Thank you. Thank you everyone. Sure.