Okay. Good morning everyone — afternoon for Europeans. This is our fifth 4844 call, and there's lots to cover today. As always, a bunch of spec updates. Then I think it's probably worth spending some time chatting about this large block testing that we've been doing. George has managed to send a few rounds of large transactions on Goerli, so we can chat about what we want to see next there. Then DevNet 3, how that's going — we were supposed to be launching next week, so are we still feeling like that's possible? And then last, if we have time, I know we're right in the middle of discussing Shanghai inclusions right now, so it would be good to make sure the readiness checklist is roughly up to date and discuss how people feel about Shanghai. But yes, to start, I think the first thing spec-wise was what to do for blocks which have no blobs. George had — oh, Kev, sorry — Kev had a PR about this, PR 3093 in the CL specs. Kev, do you want to maybe just give us a quick recap of where things are at there? Hello. Yeah, so currently I think we've all agreed on the strategy and the PR, and we just need to know if clients are actually going to incorporate optional sidecars. If I remember correctly, George, even if they don't, we might still go through with this PR. Yeah, I think that's indeed the case. A bit of background: there were some bugs that appeared when an empty sidecar was given to the cryptography layer. We know how to fix the bugs, and the fixes will get incorporated regardless of whether the sidecar becomes optional or not. But I just want to understand why the sidecar is not optional when there are no blob transactions — whether this was deliberate, maybe because it avoids any special if/else condition for such blocks. If the sidecar becomes optional, then we can be sure that anything given to the cryptography layer has at least one commitment and one blob, and that might let us be a bit more defensive and keep a few more invariants in the code. But the cryptography PR won't be affected much either way. I was just wondering what the rationale is for the sidecar not being optional. I mean, I would ask: why make the sidecar optional? If you make it optional, you're going to have a bunch of places in the networking and sync code where you're like, "well, if you have a sidecar, do this." Whereas if you always have two messages, it's easy to send two messages, and one message will just be empty when the commitments are empty. I guess you end up having some logic hoisted somewhere to handle emptiness, but if you're going to put that in the cryptography layer regardless, then I'd say it's actually easier to have sidecars with everything. Okay, that makes sense. I mean, there's another argument, which is that currently the data availability check is: is there a sidecar for this block, and does it validate? In the optional case, we would have to make that logic something like: does this block have any blobs? If yes, is there a sidecar, and does it validate? And if no, don't do anything. Not a huge difference, but rather than just "given the commitments and given the sidecar, does it validate?", you end up with preconditions just to get into that logic. Yeah, I think it makes perfect sense to program it so that we can have empty ones.
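To make the shape of that concrete: under the mandatory-but-empty approach, the availability check stays a single uniform call, with no precondition branch. A minimal Python sketch, where the names (BlobsSidecar, verify_aggregate_kzg_proof, is_data_available) are illustrative stand-ins rather than the actual spec code:

```python
# Sketch only: the sidecar is always present, possibly with zero blobs,
# so there is no "does this block even have blobs?" precondition branch.
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class BlobsSidecar:
    beacon_block_slot: int
    blobs: List[bytes]              # may be empty
    kzg_aggregated_proof: bytes

def verify_aggregate_kzg_proof(blobs: Sequence[bytes],
                               commitments: Sequence[bytes],
                               proof: bytes) -> bool:
    # Stand-in for the cryptography library call; per the discussion, the
    # library itself must handle the empty case without crashing.
    if len(blobs) != len(commitments):
        return False
    return True  # placeholder result

def is_data_available(commitments: Sequence[bytes],
                      sidecar: BlobsSidecar) -> bool:
    # One uniform check: empty commitments pair with an empty blobs list.
    return verify_aggregate_kzg_proof(sidecar.blobs, commitments,
                                      sidecar.kzg_aggregated_proof)
```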
And we can then still have a discussion later on if for some reason it seems easier to make it optional, but as long as the difference is small, I think there's not really a strong reason for that. Sorry — my take is I would prefer mandatory-and-empty, and not do optional unless we have to. When you do optional, there's also additional complexity to implement in the marshalling and unmarshalling layer, and it's just one more thing that could go wrong there. So that's my preference. Okay, that makes sense. I was thinking that, you know, I'm not sure if we're going with a decoupled block-sidecar thing or not, but if it's decoupled, maybe it allows block verification to go through faster. But I think we have good consensus here: we don't do optional; we do mandatory and empty. In that case, we can probably get the cryptography PR in tomorrow. Nice. Okay. Yeah, Tim, there is another PR that got merged, just as an update — I think it's 3097 — which basically makes the verify KZG proof interface a bit more high level: accept bytes and not field elements. This removes some burden from the client devs when they're implementing the precompile. We did that based on feedback from Alexey. And there is another PR on the EIP side which simplifies the precompile to use this new interface. So that's good. Roberto suggested that maybe we make the precompile interface of the KZG library even more high level and incorporate the hash check as well. We didn't do that in this PR, but as more clients implement this thing, and as we get more feedback on what the right interface is, we might want to revamp it if needed. So yeah, let us know if you have feedback on the cryptography API. Other than that, I think we're in a good state right now. This 3097 — it was in the spec release on Friday, right? Was it? Maybe. I'm not sure. I believe it was. Let me just confirm. Yeah, I think so. Is this a good thing or a bad thing? It's just context for implementers if they're targeting this release. It is out. It is in there. Yes. As were the validate blobs sidecar gossip conditions and a number of other things. Okay. Anything else on either the empty blobs or on the KZG side? We had a question on the KZG library. Right now the field elements per blob is hardcoded in the CKZG library, and in the minimal preset on the consensus layer side, we have a different value from what is hardcoded there. So maybe we want to make that a variable rather than a constant in the CKZG library itself. What is hardcoded exactly? The field elements per blob — it is hardcoded as 4096, and in the minimal preset of the consensus spec, that value is set to 4. So I think it might be better to have that value configurable. Also if we want to, say, benchmark different values of field elements per blob or something like that. I see. I see what you mean. So you want this to be configurable on the CKZG side — like, make it a compile-time parameter or something. Yeah, something like that. Because right now the trusted setup loading also checks that whatever you get from the file is equal to the hardcoded parameter in CKZG. So if you pass it anything which is not 4096, it will basically crash, because there's an assert in the load-trusted-setup function. Okay. All right. I think that makes sense.
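A sketch of the crash mode just described, and of the configurable alternative being proposed — parse_setup_file is a hypothetical helper, and the constant's exact name is only assumed:

```python
# Sketch only: today's behavior (hardcoded constant + assert) versus the
# proposed configurable variant.
FIELD_ELEMENTS_PER_BLOB = 4096  # mainnet preset; minimal preset wants 4

def parse_setup_file(path: str):
    # Hypothetical: returns (g1_points, g2_points) read from the file.
    raise NotImplementedError

def load_trusted_setup_today(path: str):
    g1_points, g2_points = parse_setup_file(path)
    # A minimal-preset file (4 points) trips this assert and crashes.
    assert len(g1_points) == FIELD_ELEMENTS_PER_BLOB
    return g1_points, g2_points

def load_trusted_setup_configurable(path: str, field_elements_per_blob: int):
    g1_points, g2_points = parse_setup_file(path)
    # Same check, but against a caller-supplied (e.g. build-time) value.
    assert len(g1_points) == field_elements_per_blob
    return g1_points, g2_points
```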
I mean, we do want this to be a smaller value in the minimal preset, so the library should support that. I think Ramana is not on this call for whatever reason, so I can let him know that we want FIELD_ELEMENTS_PER_BLOB, or whatever the constant is called, to be parameterizable. Okay. That makes sense. Thank you. And one more thing: when we batch verify a range of blocks, we also want to do the same thing for blobs. Right now in Lighthouse we batch BLS-verify 64 signatures for 64 blocks at the same time. So on the verify aggregated KZG proof side, would it be as simple as just concatenating the individual blob arrays from each blobs sidecar that we get, and all the individual KZG proofs, and then passing that to the KZG library as is? Or would we have to do something additional on top of that? Oh, I was thinking of what Proto wrote in the comments, but let's touch on this second question first. What are you saying exactly — you want to do more things outside the crypto layer before you pass it into the crypto layer? Or what did you say? I think he's saying that currently we have this verify function that works for one block, and he wants the verify function to work for multiple blocks. So you do a batch verification? That would be faster, because you would need to compute just one pairing. I guess I'm not really sure. Where would this be used? When we sync, basically. On the consensus layer side, we get a range of blocks right now, so presumably we would be doing the same for blocks and blobs with the blocks-and-blobs-by-range method, and we get 64 blocks at once. And instead of passing them one by one, we want to pass them batched, basically. I see. So this could be a helper only during sync, right? Yeah, sort of. Okay. And have you found that sync speed is a metric that would benefit from such batch verification? We haven't really tried it yet, but the thing is that we do the same for batch verifying BLS signatures on current mainnet. Whenever we get blocks, we batch verify the proposal signatures and the aggregate attestation signatures and so on. So I thought that it could similarly be faster if we do it in a batch instead of doing it one by one. Yeah, I mean, it's definitely possible. I'm just wondering, like Roberto said, if it should be part of the library or — okay, maybe we can take this offline so we don't hold up the meeting. But I agree that if it's taking you considerable time to sync, we could do some sort of batch verification to speed it up. We actually used to have one before we introduced the KZG proof technique many months ago; we used to do batch verification, so it shouldn't be too hard to bring it back. On your first comment, Proto asked whether it's worth making the field elements thing parameterizable or just keeping it 4096 for all the presets. I don't have an opinion on this, but if you think that's a good idea, we could also do that. Proto? I think it's more Hsiao-Wei's domain, because Pawan was talking about the minimal spec having a smaller trusted setup than the mainnet spec. Yes, I think we can do that, but just using the larger trusted setup is also slower in Python implementations. That's what I saw from the basic pytest.
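For reference, the call shape being floated for range sync might look like the sketch below. The verifier is passed in as a hypothetical library entry point; whether the per-block aggregated proofs can actually be folded into a single pairing is exactly the open question:

```python
# Sketch only: collect blobs, commitments and proofs across a synced range
# (e.g. 64 blocks) and hand them to the library in one call, analogous to
# how BLS signatures for a block batch are verified together today.
from typing import Iterable, List, Tuple

def batch_verify_range(blocks_with_sidecars: Iterable[Tuple[object, object]],
                       verify_blobs_batch) -> bool:
    all_blobs: List[bytes] = []
    all_commitments: List[bytes] = []
    all_proofs: List[bytes] = []
    for block, sidecar in blocks_with_sidecars:
        all_blobs.extend(sidecar.blobs)
        all_commitments.extend(block.body.blob_kzg_commitments)
        all_proofs.append(sidecar.kzg_aggregated_proof)
    # One call instead of 64; the hypothetical verifier would internally
    # fold everything into as few pairings as possible.
    return verify_blobs_batch(all_blobs, all_commitments, all_proofs)
```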
So we haven't generated many blob verification tests so far — maybe only four or five basic tests. So I will need to try whether we can do this in the CI tests with the mainnet setup for CI usage. I will try something locally and report back later. I can see how 4096 would be quite slow in Python, especially as we add more tests — it will get slower and slower. But let's do the tests, shall we? And if it's indeed much slower, we can also talk with Ramana and see. I don't expect it would be that hard to make it a compile-time configurable thing if you're using some sort of build system. So I think we can do it that way, but I can ask Ramana to see what he thinks. Okay, anything else on this? Okay, the other open one: Terence, you had this issue about the ancestor blob availability check that you opened a while back, and it's been sort of pending. Anything there you think we should discuss now? I think from the issue there seems to be good consensus on just doing "cannot import" — meaning that if we don't have the blobs of ancestors going back up to 18 days, we cannot import the subsequent block. I think the rationale is that it's easier to reason about: with optimistic import, you only do that for the syncing part, but we don't need to sync here, so there's no point importing optimistically. So I think the next step is just to look at the spec and further clarify it. I'm not sure whether the spec today says "cannot import", or says "can import, but only optimistically" — that's just something to check the spec for. And I don't think this is a blocker for DevNet 3, for example, but yes, it would be good to clarify that in the spec. Got it. Okay. Any comments on that? Okay, sweet. Next, Ansgar, you had some spec updates as well. The first one, on the minimum gas price for blobs — you seem to have moved to say we should just use 1 wei and basically go with that. Any objections or thoughts there? Okay. So let's — yeah, I mean, I'm still of the opinion that it's better to be opinionated here, because there's no reason to impose a cost on the network if there's no economic benefit. Well, it's not a sustained cost, right? It's "can the network handle this load", and it becomes kind of a constant cost over some unit of time, rather than, say, expanding the blockchain or the state forever. So I don't see it quite like that. We either should be able to handle that load or not. It's an increased fixed cost rather than an increased sustained cost, so I'm not too worried about it. So I guess — in the spirit of trying to get this spec to a spot where it's pretty much finalized — do we just leave Ansgar's PR open and launch the devnets? If later we want to have this argument, we can, and maybe once all the client teams have started implementing it, there'll be more to discuss. But for now, just to move quicker, I'd be inclined to leave it at one and see if there are strong objections beyond that. I think this is a yes. Okay, and you have three merged PRs, Ansgar — do you want to give a quick update on each of them? I think the modulus one is probably the biggest one. Did we just say to close or to leave the PR open? Sorry, yeah, actually that wasn't clear.
The — I mean, so the spec right now uses the value one, right? Right. My preference would be to close the PR. I think we sometimes make the mistake of trying to not make decisions until very late, and that always just adds uncertainty for the people involved. Given how much people disagree here, and given that leaving it at one for mainnet launch is not an issue in any way, I would prefer to revisit this for the fork after, and just have it in a place where we don't have to worry about it for now. But if people really prefer to leave it open, I'm fine with that as well. No, I think we should just close it, and we can always find it on GitHub. It's not a complex PR either, so if we want to refer back to the conversation, it'll still be there. Yeah, so let's close that one. Do you want to give a quick update on the three other ones — the modulus, the transaction blob broadcast, and the fork behavior? Sure. So basically the idea was to, by this call, have all of the spec updates done, and all of these PRs are merged. The first one, which we talked about in the past, is the precompile return values: we ended up deciding to pad the returned values after all, so that they're also 32 bytes. The extra cost is so small, and just in case there's some incompatibility, it's easier to do that. So that's merged. Then we have the mempool behavior clarification. Specifically, the spec now requires clients to not auto-broadcast blob transactions. There was a small question last week about whether that should just be a recommendation or a mandatory requirement, and Marius pointed out that if you actually want to be able to restrict your own bandwidth — because of the volume of incoming blob transactions — it is necessary that you can kick peers that flood you with blob transactions. So it has to be a spec violation to do that. That's why the spec now makes it mandatory that you just don't broadcast incoming blob transactions; you only announce them. And then with the upcoming eth/68 version, as part of the announcement you'll also announce the transaction type and the size, so that clients can make a more informed choice about whether or not to request those transactions from you. It'll make transaction propagation slightly slower, but that's fine. So that's merged. And then — one second — the third one. Ah, that was just a really small one. It should be common sense, but the spec didn't actually clarify the behavior at the fork block itself. Part of the base fee calculation uses the parent header field with the excess data gas, but that of course doesn't exist at the fork block. So it's now explicitly initialized to zero at the fork height. But yeah, it should be common sense; I assume that's how people already implemented it. That's all. So from my side, basically, I don't have any further spec updates that are still missing. From my side, the spec is basically in a good place. Awesome. Okay, Danny, I just saw your comments. Do you want to take maybe a couple of minutes and walk us through your doc — or just give a kind of brain dump of what you think we want out of this, basically? And then we can probably go from there. Yeah.
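Going back to the fork-block clarification for a moment, it follows the EIP's excess data gas formula; a sketch of the clarified behavior, with an illustrative target constant:

```python
# Sketch of the clarified fork-block behavior, following the EIP's
# calculation shape. TARGET_DATA_GAS_PER_BLOCK is illustrative; use the
# EIP's actual constant.
TARGET_DATA_GAS_PER_BLOCK = 2**18

def calc_excess_data_gas(parent_header, data_gas_used: int) -> int:
    # Pre-fork parent headers have no excess_data_gas field; per the
    # merged PR, it is treated as zero at the fork block.
    parent_excess = getattr(parent_header, "excess_data_gas", 0) or 0
    if parent_excess + data_gas_used < TARGET_DATA_GAS_PER_BLOCK:
        return 0
    return parent_excess + data_gas_used - TARGET_DATA_GAS_PER_BLOCK
```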
So the big thing that we want to see is whether, at different data sizes and under a reasonable amount of burst — say five or ten blocks sustained — the chain and nodes continue to function as expected. In previous smaller experiments, we just had the orphan rate, but we now have a lot more data on chain. So first and foremost we want to look at the orphan rate. We want to look at attestation inclusion and success rates, which are an indicator of how well validators are performing, and validators are nodes of quite a diverse type. And then we also want to understand whether any degradation in that chain data is random as to which validators are degrading depending on the slot, or whether it's a particular set. Maybe we see that 10% of validators are always performing poorly at some data load size, which would indicate some sort of bandwidth or hardware threshold where there are beginning to be issues. Then, additionally, we have the Prysm sentry nodes. It would be good to have some other node type, because we might have asymmetries in how Prysm performs or is connected in the graph versus others. The sentry nodes are going to dump first-arrival times of various messages — blocks, attestations and aggregates. This will give us additional network data on how these messages are being propagated. You could imagine some low-resource node in, say, Australia actually gets blocks nine seconds late, but most validators maybe aren't of that type — so sentry nodes complement the chain data. It'd be good to have diversity in region, and diversity in the resources they're provisioned with. Really, what we're looking for is the norm on all of this, deviation from the norm, and then deviation from the norm with respect to some of our key timing thresholds. You know, call it: when things are deviating towards arrival times around the three-second mark, then we're entering the danger zone. So we want to do this on testnets. We don't expect crazy things to break on testnets, but if they do, that's a sign. And then we want to carry forward whatever the successful data thresholds were on testnets — we want to go to mainnet and observe this data. I think ultimately what we want to do is pick a number that functioned very happily on mainnet, and maybe go lower than that, given other simulation and pen-and-paper analysis, so that we're certainly in a safe zone as we initially launch 4844. I do have to run. Talk to you all soon. Maybe picking up from where Danny left off: if these are our goals, the current status quo is that we have a very simple script which submits a bunch of 128-kilobyte transactions. No weird behavior was observed, but 128 kilobytes is not expected to do anything, I would say. So right now I'm going to connect with Flashbots, today probably, on getting a builder which has a bigger limit, and I'm going to start spamming 512- and 1024-kilobyte transactions. Yeah. Nice. Yeah, I think that's useful. And even with the 128-kilobyte transactions, I believe you managed to get 11 in a single block at the same time. So if we spam it — well, we are going to be submitting full block templates. We are the builder, so we're going to be eating up the entire block. And so far, the biggest one that we have gotten is 11 128-kilobyte transactions, 2 million gas each.
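For context, the spam script is conceptually about as simple as the following sketch using web3.py; the endpoint, key, and payload are placeholders, not the actual test setup:

```python
# Sketch only: submit transactions carrying large calldata payloads
# (e.g. ~128 KB) to fill blocks, in the spirit of the script described.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint
acct = w3.eth.account.from_key("0x" + "11" * 32)       # placeholder key

def send_big_tx(size_kb: int):
    tx = {
        "to": acct.address,
        "value": 0,
        "data": b"\x00" * (size_kb * 1024),  # large calldata payload
        "gas": 2_000_000,                    # ~2M gas, as mentioned on the call
        "gasPrice": w3.eth.gas_price,
        "nonce": w3.eth.get_transaction_count(acct.address),
        "chainId": w3.eth.chain_id,
    }
    signed = acct.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.rawTransaction)
```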
And that was probably the biggest block on Goerli. Yeah. Happy to refine in any way that people think is relevant. Terence? Yeah. So I guess I have a question for the consensus layer client teams here — Lighthouse, Teku, Lodestar, and Nimbus. Do you guys capture attestation arrival latency? Probably not in the DB, right? But do you capture that in metrics form — for example, a histogram? Yes, we do. Okay, that serves the purpose. So if you do, then Pari and the DevOps team can also launch sentry nodes for you guys — the more the better. I think Lighthouse does, and Nimbus does too. Nice. Yeah. Nice. And I guess, is it realistic to expect the builder to be up and running in the next day or two? I think it would be really neat if, before AllCoreDevs on Thursday, we could have had, you know, a couple of rounds. Yeah, yeah. The builder exists on GitHub, and I should have it today — that's what I've been told. So if I have it today, we'll get there. If not... Okay. And I guess then the thing we need to make sure of is that we have some sentry nodes up and running today or tomorrow. Yeah. I'll follow up in the Telegram chat about this. And maybe one thing that would be helpful: Lighthouse and Lodestar, you mentioned you have all of these metrics. Do you mind sharing just your docs page — say we're sending this to people running nodes — for where they should look to configure the metrics correctly? If you can post it in the chat here, that would be super helpful. Sure, I can open an issue to write those up. Yeah, we will send a link. It's not ready now. Yeah, I can get something together. Okay. Great. Sweet. Anything else on this big block testing? Okay. And yeah, if anyone wants to be on the Telegram group, just send me a message and I'll make sure to add you. Maybe one parting thought on this, because I also have a job: in general — I mentioned this in the chat — we should have a very high bar and very rigorously defined metrics. And I don't want to sound like a broken record, or like I'm raining on the parade, but again: we have a doc already, so let's have a doc which has the checklist of the things that we really need, and let's focus on those things and work backwards towards making the benchmarks successful. I'm just saying this as a process point, to minimize round trips and make sure we cover all the edge cases and are prepared for all the edge cases. So I'd love to see a more systematic approach, at least in the future. Yeah, I agree. And I think — Danny, so I'm just looking over the changes he made — he's at least added since yesterday a lot of the numbers we're looking at, like these thresholds we want to make sure we're not exceeding. So I think that's a good place to start, but I agree we can probably refine it beyond that. And probably the best way to firm this up is for client teams to surface specific numbers or thresholds that they are concerned about. The reason we're doing this is to convince ourselves and all the client teams that this is sound. So every stakeholder in this has a feedback loop that needs to be surfaced, and that feedback loop might be the latency, might be the CPU, the ingress, the egress, whatever.
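As a point of reference on the attestation-latency question above: the metric being asked about is just a histogram keyed off the slot start. A sketch with prometheus_client, with illustrative bucket bounds:

```python
# Sketch only: a histogram of attestation arrival delay relative to the
# start of its slot, of the kind sentry nodes would scrape and compare
# across regions. Bucket boundaries are illustrative.
from prometheus_client import Histogram, start_http_server

ATTESTATION_ARRIVAL_DELAY = Histogram(
    "attestation_arrival_delay_seconds",
    "Delay between slot start and attestation arrival on gossip",
    buckets=[0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 9.0, 12.0],
)

def on_gossip_attestation(arrival_unix: float, slot_start_unix: float) -> None:
    ATTESTATION_ARRIVAL_DELAY.observe(arrival_unix - slot_start_unix)

if __name__ == "__main__":
    start_http_server(8008)  # expose /metrics for the sentry to scrape
```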
But ideally, every client team would have a list of "here's what we need to be true for us to be okay with it," and then people work with it. And apparently all of this may already exist, but having it in one place as a single source of truth matters — and, you know, everybody having signed off on it. Ad hoc is good, but you have to have things in one place. Anyway, sorry to rant. Yeah. No, yeah, I agree. Yeah. I happen to have a list with all the metrics from all the consensus clients about block latency and attestation latency. I put it in the chat. Awesome. Thanks. I'll add this to Danny's doc as well. Yeah. Okay. So anything else on this? Okay. DevNet 3. Yeah. I guess, how are the different clients tracking? Do people still think launching late next week makes sense? Yeah. Roberto, I see you just put your camera on, so I'll call on you. Yeah. I mean, I think we need a little more momentum on the client side, to be honest. The good news is that the KZG libraries — go-kzg and c-kzg — are now pretty solid, and we've been working on making sure they interoperate nicely. And, you know, barring the few edge cases and a few of the other details from the PRs that have been closed just over the past day, it's all working pretty well. But as far as clients that are fully capable of interoperating right now within the interop repo, it's just Prysm and Geth. Lodestar, I think, is close — I'd love to hear an update from the folks working on that, if one of them is here. I myself am working on Erigon to get that up to snuff. I haven't heard a lot about what's going on with some of the other clients, though. As far as Prysm goes, I think the version in the interop repo needs a couple of updates, one being that right now it hasn't combined the beacon block and the sidecar into the same message — it's still doing the decoupled thing. So I'd love to hear an update on that as well. But those are my updates. Thanks. Okay — Lodestar, then Nethermind. Yeah, a quick update on Lodestar. As of Friday, I would say, optimistically, I completed the full implementation of the current spec. I was able to run the interop, go through the fork, propose blocks with blobs, and retrieve them over P2P. But it's not passing the interop, because there is some weird protocol issue that I have to debug. I think the implementation of the P2P side that consumes and asserts the tests has some incompatibility with us. But I'm working on it. Cool. Great. Let me know if I can help. Is Mofi on the call? All right, we can get to Prysm later. Yes, yeah. An update from the Nethermind side. We now have a DevNet 3 compliant client, I believe, but with some bugs, and withdrawals are not merged yet. And the one question I have now is how we arrange the Docker compose with all the containers. Do you have ideas on how to add the beacon nodes? Like, for execution clients, add more nodes, or replace Geth with ours, something like that? What will be in this file? We want to make a pull request, but wanted to clarify this question first: what will the container set of the network be? Yeah. Sorry, I'm not sure I followed the question. Yeah, the interop repository contains a Docker compose which includes the beacon nodes, the execution side, the data, and so on. And we now need to update it — add our clients, right? Yeah. Never mind.
I believe there are some CL-side clients in there already. So should we just add additional beacon nodes for the execution clients? Or can we replace Geth — which is paired with one of the beacon nodes — with ours, something like that? Any ideas? Oh, it's not clear. I mean, we need to be added to this network, and what would you do? Yeah, I think ultimately we want to be able to fire up mixes and matches of execution clients and consensus clients. You're right, though, that the Docker files in there do not support that yet. So we're open to suggestions on how best to do that; I don't have any firm ideas myself. I think this is related to the testing discussion, which is also on the agenda, and in particular something that Hive could support. I will have some time next week to pick up more of the Hive testing, to get some diversity in the testing and also to make it easier to swap clients for testing. But maybe let's discuss async. I think Mofi also wants to discuss this, but he's sick, so async works better. Yeah, but I think in the meantime, before we figure out a full general solution, just being able to fire up specific clients independently of the other ones, to run them through our existing end-to-end tests and make sure they pass, would be really helpful. Okay. So we will just make a PR with additional beacon nodes — Lodestar and Prysm connected to our client — and I'll let you judge what the best way is; maybe some suggestions will appear. Yeah, thanks. No more questions. Sounds great. So for Lighthouse, I'd say we're maybe a day away from our full initial implementation. And from that point until the testnet, we'll just be working on testing it out, trying to get it running, trying to get interop working. So I think we'll be ready. Do you want to test with the different execution clients? Sorry — are you asking whether we plan to test with multiple execution clients? Yeah. Yeah, that would be the hope. I think initially we'll just start with one, but then maybe add another one next. Just to clarify: are there any clients who have concerns about the cryptography, or need stuff to be done there? We can make it work in Lighthouse. There are just some inefficiencies for the time being, I think, but that's okay. Yeah, a thought from Teku on this, because we started working on the Java binding — it might also interest the Besu team. From the cryptography side, I think we will have something in the coming days. Yeah, that's it. There have been concerns in the past around performance of the mempool during verification when many transactions arrive, and I wonder what the state of benchmarks there is, if any. I don't think we have any there, but yeah, I'd like to hear more about what the concerns are. Well, I think Mofi did some benchmarks — it was something like five milliseconds to verify the blobs of a transaction. I don't know about the entire transaction, but I think the cryptography part was five milliseconds. I know that Ansgar also has an argument for why we don't expect to see too many blob transactions in the mempool in the normal case, but I don't know if you're wondering about malicious... What about the existence of adversaries? Sorry, say that again? I'm just asking the question to be clear.
I don't have any opinions here, but I understand that in the happy case, yeah, there are going to be few blob transactions and it might be fine. But I would urge you guys to think about what happens when an adversary finds an optimal number of transactions and abuses the fact that there's no batch verification done on a client. Right. I think one thing to consider here is the cryptography side: even if the cryptography verification took one millisecond, I feel like it doesn't make a difference in adversarial scenarios. Whether it's one millisecond or five milliseconds or three milliseconds, it's the same order of magnitude, and hence it's still the same. Maybe to put the question in a more granular form: what does the attacker's cost profile look like when they try to abuse it? If it's fine, it's fine, but that's the question, I guess. Can an attacker come up with a sequence of blobs that takes a long time to verify, and that makes it worthwhile for MEV or whatever? I don't know. Just asking. Right, so I think basically this is mostly a question about mempool implementation, mempool logic. Now that we have disabled broadcasts, I don't think this is a problem that can bring down a node anymore, at least if the client properly implements throttling for requesting blob transactions. But what it could do, of course, if it's naively implemented: you can spam a peer, and then the peer would have to stop processing blob transactions. So you could bring down blob transaction propagation throughout the network. So, things that can be done here — and maybe a question is whether we should have a place to talk about this and specify it, or whether that should be up to clients. But what you could do is: first, you can batch per-peer verification, because ideally you'd want to disconnect from a peer if even a single verification fails. So you wouldn't have to do bisection to find out which of the transactions fails — you don't care; if they send one invalid one, that's it for them. And second, ideally you'd have per-peer throttling, where you only request, say, five blob transactions per peer per slot or something. If you have some rudimentary logic like that, it should all be fine. Again, because it's up to clients to implement, this is not part of the EIP or the specification. So I'm just wondering: should there be some central place to discuss this, or should it just be left for clients to make calls on? One thought here, Ansgar — and I was re-reading the EIP yesterday — would be to just use some SHOULD/MUST type of language? I don't know. But it feels like some guidance should be given there, because the fact that we're having this conversation now, and that the room kind of went silent when I asked the question, makes me think that we haven't thought about this enough. Or maybe three people have thought about it. Or maybe it's not an issue and we can just write it down somewhere. But it should be captured somewhere in the process. Right. Maybe the best place would actually be, as you were just saying, the rationale slash backwards-compatibility section of the EIP, where it's not required specification, but it is advice for client implementers. Yeah. And also in the chat, Marius just said something about the separate mempool. This is the first time I've seen this, so again, there are a bunch of ideas and it would be good to crystallize them.
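A sketch of those two mitigations together; the batch verifier, peer object, and mempool here are hypothetical stand-ins for client-internal machinery, and the per-slot cap is the illustrative number from the call:

```python
# Sketch only: (a) verify a peer's blob transactions as one batch and drop
# the peer on any failure (no bisection to find the bad one), and (b) cap
# blob-transaction requests per peer per slot.
from collections import defaultdict
from typing import Dict, List

MAX_BLOB_TXS_PER_PEER_PER_SLOT = 5  # illustrative figure from the call

class BlobTxThrottle:
    def __init__(self) -> None:
        self.requested: Dict[str, int] = defaultdict(int)

    def may_request(self, peer_id: str) -> bool:
        return self.requested[peer_id] < MAX_BLOB_TXS_PER_PEER_PER_SLOT

    def record_request(self, peer_id: str) -> None:
        self.requested[peer_id] += 1

    def on_new_slot(self) -> None:
        self.requested.clear()

def handle_peer_blob_txs(peer, txs: List[bytes], batch_verify, mempool) -> None:
    # A single invalid transaction condemns the whole batch: disconnect
    # rather than bisect, since honest peers never relay invalid blobs.
    if not batch_verify(txs):
        peer.disconnect(reason="invalid blob transaction")
        return
    for tx in txs:
        mempool.add(tx)
```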
Again, just in the best interest of the EIP. Yeah. And I think, Ansgar, what you proposed makes sense. If you can add something to the EIP, I can also link it in the readiness checklist. And when Geth has an implementation of the separate transaction pool, we can also reference it there. I think the EIP should have at least a mention of it, but then we can track the ways people deal with it somewhere else. Right. I mean, what I can do is just put whatever thoughts I have into the EIP. I'm slightly worried, because it's so much client implementation detail that it might just not be a realistic approach for clients to take. So ideally there would be some sort of client feedback — I would be happy about some client feedback. Is there some avenue to take here to get clients to talk about this? Or should I just put something in the EIP to begin with, and then we can discuss whether clients think it's reasonable? Right. And maybe there's a higher-order bit here, or a conversation to be had, around the feedback loops between the EIP process and client development. But I understand this is out of scope for this conversation. Yeah. So I think let's add it to the EIP. We can obviously always discuss it on AllCoreDevs once clients are a bit further along. But my feeling is probably that a lot of the EL teams just are not even that far yet, so if it's in the EIP, at least we won't forget about it — it'll be there. And then we can obviously discuss it on AllCoreDevs or on Discord async, but it feels like, beyond Geth, there's basically not a team that has spent the time to consider this very deeply. Anything else on the DevNet? And obviously — just to be extremely clear — solving this is not in scope for DevNet 3. I don't have clear status updates on all the clients, but the previously committed Lighthouse, Nethermind, Lodestar, Geth, and Erigon all seem on track for the DevNet; I couldn't quite tell what the status was with Prysm. Yeah, I can give an update — I'm working on it as well. I'm mostly done with the P2P changes, and I think we are on track. Before we start a DevNet, I do really want to run the EIP-4844 changes through the consensus layer spec tests, and then validate against those, just to make sure we actually follow the consensus, so we don't just fail right away. We definitely do not want that. But yeah, we are on track. That's the TLDR. Okay. I guess this means all the client teams that have previously committed are on track. So next week we're going to have a call a day before we're expecting to launch the DevNet. Ideally, by then — I guess, what's the thing we want? By next week, 24 hours before launching the DevNet, we should have clear branches or PRs for every client. And is it on every client to add themselves to this interop repo? Is that basically the thing we would expect? Or — I guess I'm trying to get at: what's the product that we want out of client teams before we launch the DevNet? I think we should do something fairly similar to Kintsugi or Amphora — the client tracking sheet was really useful.
And then maybe define some sort of milestone base — M0, M1, M2 — and then you keep building on top of that. Yeah. Okay. I think that makes sense. I can put the HackMD together, I think. And then we're saying the client teams — basically, we just provide the genesis file, and we kind of do like Amphora, where you check you're interoperating, it works, and whatnot. And then we can add some boot nodes in that file as well, so people know where to connect. But this means, basically, it's not going to be a single Docker repo that just runs everything — it's more like any team can connect to the DevNet with the right genesis file and peer settings. Is that correct? Yeah. Okay. So yeah, I'll put this together in the next couple of days, so we have this DevNet checklist. And actually, I did like your idea of having everything in the interop repo. Well, I don't know — is that better or worse? It might be better from the perspective that it's tractable and we see where everything is. Is it worse in that it makes clients do all this config work that is not really realistic for how we actually run networks, and also maybe ends up being kind of a crutch if you can't run, say, Prysm separately? I don't have a strong opinion there. I mean, it provides that initial sanity check of, you know, is it going to sync with the other clients. I guess, say we use this milestone approach — I assume it's easier for clients to have a branch that's compatible with the DevNet than to have that branch be part of the interop repo, correct? They sort of need the first to get the second. Yeah, which is fine, because the interop repo is simply, you know, a set of submodules which we can point at any branch. Yeah. Okay. So what I'll do is separate those out in the milestone doc: the second-to-last milestone is "have a branch that people can use to run on the DevNet," and the last one is "have that branch tracked and part of the interop repo." So I guess then, at the minimum, what we'd want for next Tuesday is that everyone has a working branch that follows the spec for DevNet 3, and then ideally it's all in the interop repo, in one clean place. Yeah. Okay. And we only have two minutes to go. I guess the next two items are more of an announcement slash heads-up: on Thursday we have AllCoreDevs, and we want to talk about Shanghai CFI. Oh, sorry — I didn't see your hand. Do you want to go first? Yeah, just in case anyone didn't see the meeting chat: there's a bug in the test vectors that we released last Friday. I am generating the new test vectors and we will publish them within 24 hours. There are no spec changes; it's a bug in the configuration. Yeah. Okay, got it. Okay, great. And then — okay, sorry, what I was saying is: there's AllCoreDevs this Thursday, and we're going to talk about the Shanghai CFI list. There's been a formal proposal to add 4844 as CFI'd; Proto just put this on the AllCoreDevs agenda. He also posted an update on Eth Magicians and on GitHub about the status of the EIP, so I think that's pretty good.
I think we can present where things are at. Maybe the one thing where it would be good to have an update is whether there are any testing tools or things we've worked on, so we can link those on the readiness checklist. And the other part it would be good to have a short written update about is the status of the different bindings for the KZG library. I believe every single client team is covered, but just being able to point to that, I think, would be valuable, so people know they exist and are supported. But yeah, is there anything else people think we should try to explain or put together before AllCoreDevs on Thursday? Okay, if not, I guess this is a good place to end. Yeah, thanks a lot everyone. We'll chat with most of you on AllCoreDevs, and otherwise next week on this call to launch this DevNet. Thanks everyone. Awesome work. Thank you.