Okay, so we're recording to the cloud. Welcome everyone to Core Devs number 98. I just shared the agenda in the chat. There weren't too many things on it, so if there's anything people want to bring up during the call, we'll definitely have some time for it. First up we had two new EIPs that people wanted to discuss. The first is a pre-EIP for the binary trie format. I believe Guillaume was the one who wanted to bring that up. Yeah, so it's basically a proposal to make a simple trie structure. This is the first time I've presented it, so I'm mostly looking to get feedback, and I would like to create the pull request afterwards. There's been a back and forth with a few people, and the goal is simply to have two rules. Clearly the first thing is that the trie is a binary trie, and we also wanted to have, because apparently there are a lot of zero-knowledge applications for this, an interface where the input is just 32-byte items and the output is one 32-byte item. Another thing was to get rid of RLP, by popular demand; everybody wanted to get rid of RLP. I also tried to get rid of extension nodes, but that is clearly too difficult: the number of hashes that need to be done if you get rid of extension nodes is just too big. So, pretty simple. If you have any comments, I'm happy with any feedback. So I was just wondering: this is mainly the format of the binary trie itself, but previously you had some suggestions as to how to actually move to that format. Is that still applicable to this format? Do these tie together? That is independent. There's still this other EIP indeed that I don't really want to tie to it, because last time Peter wasn't really sold on the idea. It could happen with Regenesis, it could happen with the overlay trie method; that works as well. And my question beyond that was, we obviously discussed this before, but where does the actual number come from: when you did not include the extension nodes versus when you did include them, how much more hashing do you need to do? So I don't have the exact number. I used to have it, but I lost my SSD two weeks ago and I had to recalculate everything. But it was maybe half a second: when you had just 1,000 leaves in the trie, it would take about half a second to recalculate everything on an average machine. Okay, so, sorry, 1,000 leaves in the entire trie? Yeah. But why would it — I think maybe I should just go deep into it, because I don't know whether this makes sense to me; I mean, it doesn't make sense to me, but we should look at it. Okay. So what I'm basically saying is that I need to verify that the addition of extension nodes is justified, because we kind of agreed that extension nodes do bring a certain complexity, and if we can get rid of them without sacrificing too much performance, then that would be good. Yeah, I totally agree with that. There are so many nice properties, even for witnesses and things like this, if you don't include the extension nodes. But the way I see it, there were two things that really increased the complexity: there were the extension nodes, and really all the bit twiddling that is included in the hex prefix encoding especially.
And all the RLP rules: if your RLP encoding is less than 32 bytes, then you include it verbatim, otherwise you have to hash it. In my view, this is where the complexity really explodes. So the extension nodes — I would have liked to be able to get rid of them. By all means, if you find a way to get rid of them and still keep the performance, I'm all ears. I was not able to do that. Okay, so let's just narrow down what kind of performance we mean. Do we mean the performance of constructing the tree, like an entire tree? Or do we mean the performance of, let's say, verifying a Merkle proof? Yeah, constructing the tree. But we're not talking about verifying the Merkle proof, right? Let me think. No, I think it's the same problem, unless — I mean, for verifying a Merkle proof, presumably there will be fewer leaves than for rebuilding the tree, although... So basically, if you don't have extension nodes, that means that all the Merkle proofs will have the exact same depth, right? Effectively, yeah. And even though you could say that the amount of information you have to carry in the proof is still going to be small, the amount of calculation you need to perform to verify the proof will be essentially constant, and that constant will basically be the full depth, the 64 levels or whatever it is. So basically I would like to separate these two things and look at them separately: the verification of the Merkle proof and the construction of the tree. Right. So if you look at the draft, there's a link to a HackMD I did to explain precisely this: separating the Merkle proof generation from the trie. You could use extension nodes in the witness, in the proof, and then recalculate it one at a time. So that can be done. But even then, the production of a block, in my view, would take too long. But like I said, if you can prove me wrong, I would be... Okay, cool. Because basically my approach would be that, when you talk about calculating the hash of the entire tree, I would again separate it into two parts: the initial calculation and the incremental calculation. Even if the initial calculation is somewhat slow, that could be worth sacrificing if we can show that the incremental calculation is still okay. Yeah, that's fair. And indeed, my calculation was more about rebuilding the entire tree. But even then, you still have so many hashes, because for every bit in your key, you have to recalculate a hash. So doing it incrementally is going to be faster. Is it going to be... I think it could be, because when you think about it, there's a logical structure of the tree, which basically accepts that the initial calculation might be very slow, and there's the physical model of how you store it. And inside the physical model, you can already take into account the fact that there is a kind of extension node, and then you can basically skip most of those calculations. So I think you can make the incremental calculation pretty much the same as with the current scheme, because I think the times when we really touch extension nodes are probably quite rare statistically.
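To make the cost being debated here concrete, here is a minimal Go sketch (with hypothetical names; it is not the proposed spec or any client's code) of what a binary path without extension nodes implies: with 256-bit keys, hashing a single leaf up to the root touches one internal node per key bit, so a full rebuild pays roughly 256 Keccak calls per leaf, while an incremental update only re-hashes the nodes along the changed path plus cached siblings.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// keccak hashes the concatenation of its arguments with Keccak-256.
func keccak(parts ...[]byte) []byte {
	h := sha3.NewLegacyKeccak256()
	for _, p := range parts {
		h.Write(p)
	}
	return h.Sum(nil)
}

// rootWithoutExtensions hashes a single leaf up a 256-level binary path,
// pairing it with an empty sibling at every level; hashCount ends up equal
// to the key depth, i.e. the "one hash per key bit" cost mentioned above.
func rootWithoutExtensions(key [32]byte, leaf []byte) (root []byte, hashCount int) {
	empty := make([]byte, 32)
	cur := keccak(leaf)
	for depth := 255; depth >= 0; depth-- {
		bit := (key[depth/8] >> (7 - uint(depth)%8)) & 1
		if bit == 0 {
			cur = keccak(cur, empty)
		} else {
			cur = keccak(empty, cur)
		}
		hashCount++
	}
	return cur, hashCount
}

func main() {
	var key [32]byte
	key[0] = 0x80
	_, n := rootWithoutExtensions(key, []byte("leaf value"))
	fmt.Println("hashes along one leaf path:", n) // 256
}
```

The incremental-update argument above amounts to saying that, if sibling hashes are cached, only this one path needs recomputing per write, regardless of whether the logical model has extension nodes.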
So I think we could work on this; I think we could actually solve this problem. Actually, I'm not sure I understand, because you're saying every time we touch an extension node — like every time you're going to write a new key-value — you're going to have to update the extension node. What I'm saying is that we could keep the same physical model that we have right now in your implementation, for example, because your implementation essentially has the embodiment of this extension node. You can still have this embodiment of extension nodes in your physical model, but with a logical model without extensions, and if nothing changed in that node, you don't need to update it. So my hypothesis is that the number of extension nodes is not that large in practice, and statistically we can figure out how often we're hitting extension nodes and how hard you can hit them in the worst case. And then maybe that is not going to be significant. That's what I'm saying. I'm just having a difficult time understanding, you guys. What do you mean by hitting extension nodes? Don't we hit extension nodes all the time whenever the trie is not fully saturated? No, by hitting an extension node, I mean modifying something which is beyond the extension node — beyond meaning closer to the leaves. Because if you don't modify anything beyond the extension node, then you don't need to do anything with that extension node, because you have effectively pre-cached the hash of that subtree. I don't have this data at hand, but how often do we hit things beyond extension nodes? Every time we do a write. Do you think so? Yeah. Okay. Okay, what I can do — I think this is the correct analysis, but I need to recover the data from my SSD, what's left of it. I will rerun the tests and publish them. Yeah, that's a good starting point. I have a question about the node format which is currently in the spec. There is a property of Keccak functions: they can calculate a hash of up to 136 bytes — so, say, 128 bytes — as fast as a hash of 64 bytes. So I would say, if you got rid of the value in the node tuple and also got rid of the domain separation hash, which you call the node prefix, you could get a radix-4 trie with twice the performance compared to radix-2, if we assume Keccak-256 is used. Okay, so for your information, there's no value there anymore — that's one of the things that got dropped. There's no more possibility of storing values in the internal nodes. Yeah, maybe I just read it wrong: when you say that a node N is a tuple of four elements, and then later on you write internal hash and leaf hash and internal prefix in another format, it's just a little bit confusing for me, but it's not a problem. I mean, if you make the padding at the domain separation exactly 136 bytes, then you can make a kind of specialized Keccak-256 with a non-trivial internal state, and then you can hash four elements simultaneously into a single node without increasing the actual hashing time. So your tree will be half as deep, but the time you spend to calculate it is effectively the same. Okay. Yeah, that sounds interesting.
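As a quick aside on that last point, here is a hedged Go illustration (a sketch, not the proposed node format) of the Keccak-256 rate argument: the sponge absorbs 136 bytes per permutation, so hashing four 32-byte child hashes (128 bytes) costs the same single permutation as hashing two (64 bytes), which is what makes a radix-4 node roughly free in hashing time while halving the depth.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// parentRadix2 hashes two 32-byte children: 64 bytes, one Keccak permutation.
func parentRadix2(left, right [32]byte) []byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(left[:])
	h.Write(right[:])
	return h.Sum(nil)
}

// parentRadix4 hashes four 32-byte children: 128 bytes still fits within the
// 136-byte rate, so it is also a single permutation.
func parentRadix4(children [4][32]byte) []byte {
	h := sha3.NewLegacyKeccak256()
	for _, c := range children {
		h.Write(c[:])
	}
	return h.Sum(nil)
}

func main() {
	var a, b, c, d [32]byte
	fmt.Printf("radix-2 parent: %x\n", parentRadix2(a, b))
	fmt.Printf("radix-4 parent: %x\n", parentRadix4([4][32]byte{a, b, c, d}))
}
```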
No, to calculate a node hash, the time will be the same; so to calculate any path, it will take half the time. Okay, that's worth investigating indeed. Cool. Anything else on the binary trie? Is it worth making this into an actual EIP at this point, or is it better to just keep iterating on the HackMD? I've got some feedback to investigate and integrate, so it's not going to happen today, but yeah, I'll make an EIP when everything is integrated. Okay, cool. And thanks for your feedback, guys. Great. So the next item on the list was EIP-2926, about chunk-based code merkleization. I believe Sina, you wanted to bring this up. Yeah, hi everyone, this is Sina. I wanted to bring up code merkleization directly after the binary trie because they're somewhat related. The primary motivation here is to reduce block witness sizes for stateless Ethereum. The basic idea is that we make chunks out of the code, make a Merkle trie out of them, and replace the code hash in the account record with the root of this Merkle trie. And then we can use this later on, when stateless ships, to not have to send the whole code. The current status is that, as you might have seen, there is a spec. There are some values missing there, like gas prices for the CREATE opcodes. I'm currently in the process of implementing the spec in Go, so I can benchmark and arrive at those values. And yeah, I just wanted to bring it up today for some initial feedback, so happy to take any questions. Yeah, I have one. Sorry, I haven't actually read the proposal beforehand, so I'm sorry if I'm asking something that's already answered. But if we do a merkleization of the code, what would things like EXTCODEHASH return? I think the current proposal is that in the code trie you would just also add the Keccak hash of the code as a backwards-compatibility thing. Yeah, exactly. The code trie also includes some additional metadata, like the code length and the code hash, for easy access to those parameters. This also might be answered in the EIP, and I apologize if it is: are the boundaries for the code hashing something that is controllable by Solidity or by contract authors? Function boundaries seem like a very obvious place you'd want to have your chunk boundaries. Or do you just not get to choose — the boundary lands where it lands and you can't optimize for it at all? I think the current approach is just hashing every 32 bytes, and I don't actually think that the losses of doing it that way are that significant, just because functions are generally going to be longer than 32 bytes. Some of the people from T-Mex did an analysis on this; I don't know what their exact answer was, but I think it did concur that the extra complexity wasn't justified by the space savings. I didn't realize it was 32; that makes sense. I'm not totally sure about that, so don't take it as gospel. And we also did not actually have a proper analysis of what the optimal chunk size is; I don't know if anybody has done that yet — what is, empirically, the optimal size. But I'm not too worried about this for the moment. Regarding these 32 bytes, are there any guarantees that a chunk never starts on a data segment, or is it blindly 32 bytes always? We're not taking any extra consideration for the data section.
The only edge case that I came upon was this: at the beginning of every chunk, we store the offset of the first instruction in that chunk, to be able to skip push data. And if the data section at the end of the bytecode has something resembling a PUSH opcode, and the data for that PUSH opcode goes over the bytecode length — this is just something that has to be handled in the code, but it can be safely ignored. Yeah. When I said data segment, I meant push data. So what you're saying, basically, is that if there is a jump into another chunk, we can just load it and, from the metadata, see where the first opcode is, and thereby determine whether the jump is valid or not. Yeah, exactly. Yeah. So I just wanted to say that we have an alternative — I mean, I haven't written an EIP for it yet, but we have an alternative to this EIP, which will probably appear at some point. We've been doing some static analysis on the existing code in the EVM, what's called abstract interpretation. What we're trying to do is prove, for as many currently deployed contracts as we can, whether they ever exhibit the behavior of jumping inside push data. And so far, I think we have proven for 97% of the contracts that were deployed that it can never happen. We're basically now working on the proof checker and things like this, but the idea behind it is that I know that for most programs, which do not specifically want that behavior, if you compile them with Solidity, Solidity pretty much always generates code which never jumps into push data. You actually have to craft the code specifically to violate this rule. So the suggestion was: if we figure out how to put that in, then we can even simplify the chunking rule so that we don't even need to include the metadata. That means that for the contracts we were able to analyze, we put a flag into the account saying that, for this account, we proved there is no possibility of jumping into push data at all; that means you can chunk them any way you want, without even needing the metadata. And for those where we cannot prove it — and I suspect those are the ones specifically crafted to avoid static analysis — we simply do not merkleize them. Interesting. I mean, radical, but instinctively I kind of like it. Have we done analysis on — I guess the question is, how difficult is this proving, and what percentage of contracts — 97%? Yeah, we've already proven for 97% of the existing contracts that they never do that. Okay, so how difficult is the proving is then the next question. Essentially there are two parts to it: there's a proof generator and there's a proof checker, and we hope to have a proof checker which is much, much simpler. The proving uses this thing called abstract interpretation, where you try to build up the control flow graph and then go through it. But once you've done that, the actual proof is much easier to verify, so the verifier is actually much simpler than the prover. Now we are trying to figure out how big the proof is going to be and how simple the proof checker is going to end up being.
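For readers following along, here is a minimal Go sketch of the chunking scheme Sina described a moment ago; it is hedged — the field name and the encoding are illustrative, not the EIP-2926 wire format. It splits code into fixed 32-byte chunks and records, per chunk, the offset of the first real instruction, so PUSH data spilling over from the previous chunk (including a trailing PUSH whose immediate data runs past the end of the bytecode) can be skipped when validating jump destinations.

```go
package main

import "fmt"

const chunkSize = 32

type chunk struct {
	FirstInstructionOffset int // where real code starts inside this chunk
	Code                   []byte
}

// chunkify splits code into fixed 32-byte chunks and records the offset of
// the first instruction in each, so spilled-over PUSH data can be skipped.
func chunkify(code []byte) []chunk {
	var chunks []chunk
	nextInstruction := 0 // next instruction boundary while scanning opcodes
	for start := 0; start < len(code); start += chunkSize {
		end := start + chunkSize
		if end > len(code) {
			end = len(code)
		}
		fio := 0
		if nextInstruction > start {
			fio = nextInstruction - start
			if fio > end-start { // the whole chunk is PUSH data
				fio = end - start
			}
		}
		chunks = append(chunks, chunk{FirstInstructionOffset: fio, Code: code[start:end]})
		// Advance through this chunk; PUSH1..PUSH32 (0x60..0x7f) carry 1..32
		// immediate bytes. A trailing PUSH whose data runs past the end of the
		// bytecode simply pushes nextInstruction beyond len(code), which is fine.
		for nextInstruction < end {
			op := code[nextInstruction]
			if op >= 0x60 && op <= 0x7f {
				nextInstruction += int(op-0x60) + 2
			} else {
				nextInstruction++
			}
		}
	}
	return chunks
}

func main() {
	// PUSH32 <32 bytes of data> STOP: the PUSH data spills into the second chunk.
	code := append([]byte{0x7f}, make([]byte, 32)...)
	code = append(code, 0x00)
	for i, c := range chunkify(code) {
		fmt.Printf("chunk %d: firstInstructionOffset=%d, len=%d\n", i, c.FirstInstructionOffset, len(c.Code))
	}
}
```

On this PUSH32-plus-STOP example it prints a first-instruction offset of 0 for the first chunk and 1 for the second, since one byte of push data spills over the chunk boundary.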
And what I also wanted to add is about these suggestions around subroutines and dynamic jumps. We now see that if we did not have dynamic jumps in the EVM, then those proofs would be pretty much trivial. But what we could do is actually go in that direction and essentially backfill that stuff. Right. Do you know what percentage of contracts use dynamic jumps? What do you mean by using dynamic jumps? I mean, they have a jump that is not immediately preceded by a push. Oh no, this happens all the time, because the most common pattern of using dynamic jumps is essentially calling a subroutine in Solidity from different places in the code. What it does is push the return address and do the jump, and at the end of the subroutine it has to pop the return address and jump back. So that is not a jump preceded by a push, but it's still statically analyzable in the majority of cases. Right. Is there any discussion about a new EXTCODEHASH-style opcode to expose the code tree root internally, or do we not want to do that? Just something that came to mind. Oh, as in adding a new opcode to access the tree hash instead of the current code hash? My instinct would be against it, because of the possibility that we'll end up changing the tree again later and we'll have a growing list of hashes that we'd have to store for backwards compatibility. Fair point; I wasn't pushing for it, just to be clear. By the way, speaking of other treeing approaches, I should also bring up something I brought up once before, which is the option of using Kate commitments, in addition to a Merkle root, for hashing code. The argument for Kate commitments is basically that you can fairly easily generate a proof for an arbitrary subset of chunks, regardless of where they're located, and this proof would just be a constant size, 48 bytes, which has the possibility of making witness sizes for code quite trivial: it's literally just the code that you access plus one fixed thing. I still have to look into it; I wanted to pull up numbers to see how much of the code witness is the code itself and how much is the proof hashes. And the other thing to keep in mind is that we're not just optimizing for the average case, we're optimizing for the worst case, because eventually we would want to have gas rules around these things and so forth. One other thing that comes to mind: as things stand today, we don't have state witnesses yet. That means that for a 24K contract, the code trie would consist of around 750 leaves. So, implementation-wise, should we keep the current flat code side by side in the implementation, or would we actually load those 750-odd leaves and concatenate them? Have you thought about that? Yeah, so in the EIP we currently have a section on the transition process, and we thought maybe it's easiest to only store the flat code right now. So when a new contract is created, the client merkleizes it, computes the root, then stores the root and doesn't touch it again, and only stores the normal full bytecode in the database. And because of this we won't need to change gas costs for CALL and any other code-accessing opcode for now, and later on, when stateless arrives, we can update
and change this proposal. Yeah. No, no, go on, it's on that topic. Okay. So the proposal as it is, it's kind of already open to there being trie-backed codes and flat-backed codes living simultaneously on the network. Sorry, I didn't get that — what kinds of code does the proposed EIP allow? It's not like everything needs to switch over; it would have both types of representation simultaneously. Oh, so... No, the idea was that you merkleize all of the contracts, including existing contracts, compute the code roots for them and store them in the account record, but then only store the full bytecode, and not the actual leaves of the code trie anymore, because code is static and the tree structure of the code doesn't change. And clients, from what I gathered, wouldn't need to have the tree structure until stateless arrives. Is that a fair assessment? That sounds good. Another question about chunking in the middle of push data: if you're going to keep metadata anyway, flagging whether the chunk ends in push data, then could you not have not-exactly-fixed-size chunking, but chunking that takes that into account and chunks at around 32 bytes, just ending at the end of every run of push data? I think the problem there is that as soon as you deviate even a tiny bit from exactness, the relationship between code position and the index in the Merkle trie breaks down, which adds a whole layer of complexity. The central theme here that I've kind of seen since the beginning is that there are a lot of little efficiencies that we could possibly add in around the edges, and it seems like pretty much all of them end up not being justified, due to the minor savings they seem to provide and the higher complexity they seem to add. I'm not saying we shouldn't look at these things, but that has been a central theme along the way. Okay, any other questions or thoughts on the code merkleization? I have a comment, but not on the merkleization — rather on what Alexey said. I'm not sure, Alexey, are you still on the call? Yes, I am here. So back in May, when we looked at the subroutines, we wanted to find, first of all, which contracts are data contracts — because I think with probably the last hard fork, copying code became cheaper to utilize when loading more data, as opposed to storage, so quite a few data contracts started to appear. Anyway, through the subroutines proposal, Martin and I tried to analyze contracts in the state, and as part of that work I had an idea: we could do a one-off validation of contracts and mark in the account whether they are data or not — or, to say it more nicely, whether they can be executed or not. And I wonder if that could just be merged with your work, because you're already setting a flag. I wonder how easy it would be to check if the code can be executed at all, and leave a mark. What does it even mean that something is a data contract? What happens if you just send a transaction to something random? I mean, that's the thing, it's not entirely clear, but the tiny heuristics we added are, you know, in how many steps it would run, and with what kind of inputs. But many of them would just start with an opcode which is invalid, like a truncated push or an invalid opcode, and those are clearly not executable under any circumstances.
Yeah, so for those particular things, if they start with an invalid opcode, our analysis will mark them as basically okay, because we know that they're not going to jump inside push data — they will terminate. Yeah, but could you have another flag saying that they will terminate immediately, or that they will terminate at all? We weren't really interested in that specific thing; we were basically interested in building the control flow graph. For the things that terminate straight away, the control flow graph is very simple — it's just one terminating node. And if, by chance, they have a little logic which does something, I think in most cases the control flow graph is still quite simple. I think in most cases, if you want a super weird control flow graph, you actually have to do it by hand — you basically have to make it intentionally. If you just put in some kind of random data, more often than not it will have a very simple control flow graph, which is basically failure. So one motivation for this executable flag in the account would be that when a new contract is deployed, it is analyzed as to whether it will terminate immediately, and if it won't terminate immediately, then it's marked executable. And this is somewhat of a replacement for EVM versioning, in the sense that there have always been concerns when a new opcode is introduced: how would that affect existing contracts? And if you already have old contracts which were invalid, maybe because of an invalid opcode, and they're not marked executable, then even after introducing that new opcode they won't be executable. I see what you mean. So, to answer your question: I think this is basically a special case of our analysis, because, as I said, we're already trying to figure out the control flow graph for anything we see. And there could be two outcomes: either we can build the control flow graph or we can't — we don't know, maybe it's just super dynamic. But if you look at this control flow graph and you can see that it's clearly always failing — because it will be obvious from the control flow graph that it always fails — then you can mark it as non-executable. So I think what we do is basically a bit wider than what you're asking. So yes, we can do that. Yeah, just to add some context: the analysis we did, Alexey, focused on a different thing. One thing that caused problems for us was not the type of data contracts which are just blind data. There's a type of data contract which basically functions like this: you do a delegate call to it — say, "load segment one" — and it just does an internal code copy from the code to memory, so you get segment one, and then you can continue execution because you got that in memory, and then you can do that again and load another segment of whatever data you need to load, which is cheaper than doing, like, an external code copy at index one, and so on — or at least it's easier, I don't know why. But that's the type of data contract which is actually executable, but which does contain what looks like random data, although the control flow graph should not lead into that random data. Well, basically, if you think about this particular contract —
when we do the delegate call, from the point of view of the control flow graph it doesn't really matter what this delegate call does, because it just returns with either a failure or a success, and we know that the top of the stack will be either zero or one; so it does not really affect the control flow graph. Our analysis would probably show that this is an executable contract, and it does not violate the property that it never jumps into push data — which is all we wanted to know for merkleization, essentially. Yeah, so maybe I have brought you onto the wrong track, because I was just trying to remember all the context, and it's actually multiple different discussions and ideas: the data contract question is just one, having invalid instructions is another. Perhaps it would be nice to discuss this, and I don't think AllCoreDevs is the best place for it. Yeah, because basically we're doing this control flow graph analysis not just for code merkleization; another thing we would like to use it for is data dependencies in past transactions, to be able to sync faster. We want to create the data dependency graph to be able to parallelize the execution, and all these kinds of things. Maybe as a closing question: should we consider a brainstorming call, like one of those breakout things, or just have a text discussion on the AllCoreDevs channel? What is the best way forward? I would say that it depends on the priorities, to be honest. We could probably have some sort of background discussion going on about it. Is it worth maybe having just a new Discord channel? I feel like there might already be one for this. There is already one, I think; we don't need more Discord channels. So if there are people who are interested in the discussion, I think we could self-organize: if people have time and it's a priority, we can talk about it. Because what I don't want to do is create a discussion about something we're going to do in, like, one year, to the detriment of something we have to do in the next two months. We have to prioritize these things properly. But anyway, if anybody's interested in having the discussion, I suggest we just have a chat, self-organize and talk about it. Okay, sounds good. And yeah, we can use just the existing channel for that. So next up on the agenda were the YOLO networks. James shared some specs for both YOLO v2 and YOLO v3; let me just post those in the chat. I guess, first of all, it's probably just worth asking how the different client teams are getting on with YOLO v2, and then maybe we can have a follow-up conversation on v3 and the status there. So YOLO v2 was basically EIP-2537, the BLS curve; 2315, the subroutines; and then 2929, the gas cost changes. Yeah. Artem, I see you're on the call — Turbo-Geth is not doing YOLO v2, right? No — yeah, we're not doing YOLO v2 at the moment; there are other things we have to fix. Okay. And then, sorry, I'm going in the order I see on the screen here — so Martin, any update from Geth? Not really; we still haven't actually merged it over. I was hoping to have that done by today, but no. However, we did do a run — a bit of fuzzing with YOLO v2 against Besu, so Geth versus Besu. It's pretty slow;
I don't know why, but so far I haven't found any differences. And as for test updates, work has been going on converting the standard tests to the 2929 rules for YOLO v2. That is seemingly a pretty large piece of work, because it changes all the expect sections — or rather, a lot of expect sections start to fail and the tests do not get filled properly — and Dimitry is working on that. Daniel has also done some work on that, so I don't know, maybe Danno knows better than me what the status is there. Yeah — as Dimitry gets some checked in, I bring them over and run them, and I haven't seen any differences yet since we got those last few issues relating to the constant values resolved. So as far as reference tests go, it looks fairly solid. Yeah, and Besu's got all three of the things for YOLO v2 and we're ready to go on it. On the EVM test bug I updated: I think it's because when we use the Bitcoin secp256k1 library, we do the default randomization to prevent side-channel attacks, and I'm thinking that it's blocking on native entropy — we've seen that before in some of our unit tests. There's a special flag you can put in an environment variable to turn off that randomization, and that should stop those arbitrary one-to-four-second results waiting for the native entropy to catch up. Yeah, just another thing that might be worth mentioning: in my opinion, 2929 gets huge coverage from the existing 22,000 tests. I still have to look into whether there's something that is not being covered, but since it changes a pretty long list of actual normal opcodes rather than introducing some new thing, I would say it gets great coverage from the existing tests. But I do think we need to write tests that maliciously try to mess with the warm/cold list across transaction boundaries; that's the only test hole that I would want to have fixed. Yeah, it might be that — do we actually have any state tests with multiple serial transactions? There are some, accidentally, in the create series. Right. Yeah, we might need to add some custom block tests. I think there's some call/delegate-call stuff that might trip it, but I doubt there are really sequential transactions in a block there. Yeah, but something that's deliberately trying to break it would, I think, add value. Cool. So I guess that covers it for the Besu update as well. Dragan — I hope I got your name right — you're from OpenEthereum, right? Yeah. Basically, we're not going to participate; there are some current tasks that we need to focus on. Okay. And then Tomasz, Nethermind — any updates? Yeah, sure. So 2929, I was looking at it; I think we'll wait a bit until the testing stabilizes, since we have Geth and Besu on it. I think when there is a testnet for YOLO v2 we can just add it quickly; I estimate this is maybe a day of work. It's pretty complex, so I want to have some tests to speed it up when we start to work on it. I was working on three other EIPs — I don't know if they fall under YOLO v2, I believe not: 2935, 2565 and 1559. Okay. And I think that's everybody — anyone else want to speak up that I might have forgotten? Okay. So I'm going to start with a request from the EIPIP meeting, as part of the new process that we are starting for EIP standardization and network upgrades.
So I'm going to start with the EIPs, and that should be reflected by the time we are getting into the devnet or the testnet. At this point in time, the EIPs — 2315, 2537 and 2929 — are all in Draft status. But the work on them has now advanced, and we are looking into a devnet, so I'm going to bring it to the attention of the authors that it would be worth considering a change of status; they can make the status-change request with the editors. Okay. And sorry, maybe I missed that — what would be the appropriate status for the ones that are in progress? The first one is Review, and once it has passed Review then we can go into Last Call. Yeah — Review basically means the spec is at a point where other people should start looking at it. Draft is really meant just for "I'm working on this and maybe one other person is looking at it, but no one else should look at this because it's not done." Review means we, as authors, think it's done, and other people should now take their time to look at it. Pretty much anything that has made it onto this call should probably be in Review by now — making it onto this call means you want other people to look at it, so you should really move it into Review by that point. Okay, good to know. So yeah, for the authors of all the stuff that's being discussed for the YOLO networks, it's definitely worth moving the EIPs along. And so, in terms of next steps for YOLO v2, it seems like Geth and Besu are going to keep working on adding tests and setting up the network, then Nethermind will probably join shortly after, and that'll kind of be it. On the last call we discussed a potential YOLO v3. I'm not sure that's worth discussing now, given we're still kind of wrapping up YOLO v2. YOLO v3 was basically YOLO v2 plus 2718, the typed transaction envelope, and 2930, the optional access lists. On the last call people had different opinions about how valuable this would be and whether or not this list made sense. So I'm just curious if people have general thoughts about the idea of a YOLO v3, whether we should do it or not, and if so, whether the current list of EIPs in it makes sense. So, Tim, since you mentioned YOLO v3 and 2565 is not on the list, I think that means it's in YOLO v2? Let me add something: I was finally running the benchmarks in Nethermind for 2565, and surprisingly, for most of the MODEXP operations, the previous gas pricing was better aligned with the results that I've seen in Nethermind. We're using the built-in library for BigInteger there; I already reported that to Kelly and I'm waiting for some analysis from their side on what other options we have for libraries, or whether there's something else that we can do. Yeah, Kelly, I know you wanted to bring up 2565 as well — maybe that's a good segue. Sure, yeah. So I think the quick update is: it looks like this has now been implemented and the test vectors are matching what they're supposed to be with the new pricing, so good progress in that direction. As Tomasz mentioned, when he ran the benchmarks and we looked at it with the new pricing, there were some items that were down — there's one, I think, as low as seven or eight million gas per second.
So, you know, unfortunately, this is another case — sort of as we had with OpenEthereum — where the native, or standard, modular exponentiation library is not as performant as it could be. We're just starting to look at what that looks like for .NET and what the alternatives are. The first pass suggests that, at least for .NET, it's built for 32-bit architectures, so it's slower for some of those reasons, and we'll look and see if there's an easy alternative there. That being said, I would personally be interested in YOLO v3 happening. We know that there could be a sort of denial-of-service vector given that some of these are down near 10, but I think, to the extent that we can test it from a cross-client perspective and make sure everything works — assuming we can fix this .NET performance problem — that would be one of my preferences, because I think all the teams that are planning to participate in YOLO v2 or v3 have now implemented it and have the test vectors passing. Just to note, when you mention, like, seven or eight million gas per second — how does that compare to the other precompiles? I think that's an important metric. We're targeting 40 million per second, and that's the aim for all the operations. Yeah, and we're getting that from, like, SHA-256 and the like. All right — I think this actually depends a lot on whether you have Turbo Boost on or off in your BIOS. I think this is one of the major things — and maybe it's something that I could potentially spearhead moving forward. As we move forward, these precompiles are more important, right, for things like rollups, and they're getting more use, and the gas repricing situation is not crystal clear. We've got multiple different clients — is 15 million gas per second the right number? That's roughly what ECRECOVER is on Geth, as low as 15 million, while other folks are targeting 30 or 40. There's no standard: are we on a four-core MacBook at two gigahertz, or an eight-core at five gigahertz? So I think one thing, at least from my perspective, that would really help on these repricing efforts moving forward is maybe even to have an EIP to specify a reference configuration for how to reprice these things, because it's very hard to do apples to apples when people are running them at different frequencies and different numbers of cores. Right, and that's the problem when we talk about these absolute gas-per-second numbers — I really don't like that. I think we should just try to get them to be roughly on par with each other. Right, and that's a great point; if you go back and look at some of these other EIPs, some target getting on par with ECRECOVER. And as I mentioned, with something like Geth that's closer to 15 million gas per second, whereas Nethermind has been targeting 30 or 40. Well, there's more that's important here — right now there's a bit of a queue. Sorry, what was that, Alex? I would say the other Alex goes first — or no, was that Tomasz? Tomasz. Okay, well, Tomasz.
Obviously, I agree that we cannot talk here in absolute numbers. The way I would report it is that the gas results we see on those particular test cases for MODEXP make it one of the slowest, if not the slowest, operation after the repricing, and it's much slower than the other precompiles. Yeah. So, the problem with non-absolute values is even worse. As Martin mentioned, they use SHA-256 as a reference, and as a result the measurements for the 2666 repricing EIP are quite different on different platforms and implementations. And even if we continue to use ECRECOVER as something like a baseline, so that everyone uses the C library: first of all, you can compile it differently and get quite different performance if you really want to; second, there were recent optimizations in there, and depending on faster multiplication you will get a bump of, like, 30% at least. So if you try to estimate the gas price as a gas-per-second constant based just on the C implementation of ECRECOVER — I would say an absolute constant is the only way here, based purely on the frequency for such operations, and the platform, of course. Yeah. I mean, I'm a little bit inclined to agree with Alex on maybe not an absolute number but some sort of range or some sort of minimum, because ultimately we have an inter-block time, and we have a gas limit, and we have a gas per second, so the goal is to ensure that execution is within a bounded amount of time, right? So it does seem to me that if we're saying execution can take up to one second of the inter-block time, or two seconds, or three seconds, we should be able to translate that into a gas per second. What we actually should do is make sure that whatever runs on the EVM, whether it's a simple loop or crypto operations through a precompile — if the miners have decided it's 50 million gas, then it doesn't matter what we execute, it takes roughly the same amount of time, whether it's a simple loop or a precompile. That's the goal. If it takes too long, then they will have to lower the gas limit to make it faster again. And as long as we have that balance, it's not like some blocks will suddenly take a minute just because someone used a precompile for a denial of service — as long as it's balanced on, like, your everyday hardware in practice. So I don't think we should have this idealized reference hardware. Yeah, I don't think that will work as well. The question is how to do the measurement, because even if we try to do it with the current precompiles, there are two options. One of them: we measure each of those and calculate some gas-per-second constant based on each, and then we have to linearly reprice each of those up or down, assuming that the initial formulas for gas were even correct — and they are not, for some functions. So at least fix some baseline — a specific CPU with Turbo Boost off — to measure against; that was basically what was done for the 2666 proposal: say that there is a reference setup, benchmark everything on it and set a gas formula. And that works forward perfectly, but it doesn't work backward, because you cannot say that your original formula was correct in the first place, especially if it's for variable-length inputs.
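As a concrete illustration of the kind of calibration being argued about, here is a hedged Go sketch: time an operation that stands in for the MODEXP precompile body, divide the gas you intend to charge by the measured time, and read off an implied gas-per-second to compare against other precompiles. The iteration count and the gas figure are illustrative assumptions, not values from the call or from any client.

```go
package main

import (
	"fmt"
	"math/big"
	"time"
)

func main() {
	// 256-bit operands as a stand-in input for a MODEXP precompile call.
	one := big.NewInt(1)
	mod := new(big.Int).Sub(new(big.Int).Lsh(one, 256), big.NewInt(189))
	exp := new(big.Int).Sub(mod, one)
	base := big.NewInt(3)

	const iterations = 200
	start := time.Now()
	for i := 0; i < iterations; i++ {
		new(big.Int).Exp(base, exp, mod) // the operation being priced
	}
	perCall := time.Since(start).Seconds() / iterations

	// Illustrative gas charge for 32-byte operands; not a value quoted on the call.
	const chargedGas = 1360.0
	fmt.Printf("per call: %.2gs, implied throughput: %.1f Mgas/s\n",
		perCall, chargedGas/perCall/1e6)
}
```

The point of contention above is precisely what to do with the resulting number: treat it as an absolute target tied to some reference hardware, or only use it to keep operations roughly on par with each other on whatever hardware a client actually runs on.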
Also, if I may say something — yeah, it's a very complex problem, what to do with the precompiles. Because, yes, when Vitalik said to me on Twitter that they're calibrated to 20 million gas per second, I was like, why is this happening? Because what I think we should expect is this: let's say they are now calibrated to whatever it is, 20 megagas per second or 30. That means that if we were to lift the gas limit for any reason, then one of the things we would have to do is raise the cost of these operations again, because otherwise they will be a denial-of-service vector. Essentially, the only reason you can lower the cost right now is that the bottleneck is elsewhere. If you remove the bottleneck from elsewhere, so that you can increase the gas limit, then most probably the precompiles will become the next bottleneck you would need to remove if you want the system to grow. And I think we either have to be comfortable with the idea of raising them up again after we've lowered them — which shouldn't be surprising for people — or, if that is not desirable, we should note that reducing the cost of an operation is not the only way to make it relatively cheaper. The other way to make it relatively cheaper is to increase the gas limit. So we should look at it from both perspectives, not just from a single perspective. Yeah, generally I agree that we should be looking at what the current bottleneck is, and whenever we do a gas repricing that introduces a new bottleneck, then it's potentially a problem. So what I'm just suggesting is that after repricing, MODEXP with the current library in Nethermind becomes a bottleneck — so the repricing would make it worse. I think we just have to see whether we can look for a different library here; we need a library to use that comes with proper licensing. So, just because we're spending a fair amount of time on this, is it worth time-boxing it and moving the conversation somewhere else? Okay, so Nethermind will look at a different library, and I think it's worth having the conversation around what the right benchmarks are offline, outside of this call. Maybe one question for Tomasz: in terms of YOLO v3, do you think that you could run forward with 2565 using the current library for now, just to get the cross-client tests of EIP-2565? I mean, since we're changing only pricing, there's almost no benefit in doing the cross-client testing — everything will be covered, probably, by the consensus tests. It's not the particular type of EIP that benefits so much from a cross-client testnet. It's no problem to add it, but it sounds more like pushing it into the testnet with a hope that it will be there, rather than actually benefiting from testing it. I think it's a relatively easy one to include, and an important one, and I think what we should do — and what would make it more likely to be included — is to just suggest one library, a C or C++ library that can be compiled and added. It should be super simple. We don't use this native BigInteger library in Nethermind anywhere else, because for all the 32-byte operations we now have a separate uint256 library. So this is the only place, because here we can have arbitrary-length numbers. Okay, great.
Yeah, we can do that. Okay. And so, for YOLO v3, I guess we had a list last time, which had 2537, 2315 and 2929 — which is YOLO v2 — and then added to that 2718 and 2930. Given that only Geth and Besu have done YOLO v2, and other clients are not doing it, is it worth having a larger YOLO v3? How do the different client teams feel about that? Are there other things we'd want to include in it? What are people's thoughts on that? Sorry — go ahead. Go ahead, Tomasz. Thanks. So, 2935 — because to answer your last question about whether there is anything else we want to add: on Tuesday there will be a presentation from me on 2935. So if people are interested and think we should include it, then this is the one I want to propose. But apart from that, I don't have strong opinions about YOLO v3 versus v2; I think that's between Besu, Geth and the other projects. Yeah, so one thing on the presentation Tomasz was talking about: it's a Peep an EIP episode, and we invite all the client teams and people who have any questions on this particular proposal. Earlier we also did an episode for 2565 — I'll share the link to the recording in the agenda — and for the next proposal we'll make an announcement on how to join the call. Okay. For — yeah, go ahead. For YOLO v3, just as a reminder — I don't know, it just came up in AllCoreDevs or in one of the many other meetings that we now have — YOLO v3 appears to have turned into pre-Berlin. I know the original intent we had for YOLO was that it doesn't matter what goes on there, it doesn't mean it's going to Berlin, but that has changed, just naturally, to "this is pre-Berlin." So if something doesn't make it into YOLO v3, then it sounds like it probably won't make it into Berlin. So keep that in mind for whether we want a YOLO v3 and what will be in it. That seems to be what's implicitly happening — I guess that wasn't the intent of YOLO, but that is how it's working. Yeah, I guess that's worth digging into, because, especially based on what Tomasz just commented about 2565 — if that's something that doesn't benefit from being part of a YOLO network, I assume we could still make it into Berlin. Yeah, it's a good example, actually; I believe that 2565 can make it to Berlin without participating in the multi-client testnet. It's just a matter of finding one more library — not too difficult. We just need to resolve this small problem, and testing should be entirely encompassed in the consensus tests. And I guess the other option is that we don't have a YOLO v3 and we just come up with the final list for Berlin in a couple of weeks. Do people feel like that would be a better approach? Maybe that's reasonable. I see James Prestwich is on the call, and I know last time around you had some questions about, what is it, 2539 — your other BLS curve, I believe — and the fact that it was quite similar to the one included. So I'm curious, from someone who's a bit more outside the process and just trying to get something in, what do you think would provide you with the best outcome? You might be on mute, James, we can't hear you. He is unmuted, though. Yeah, just no sound coming out. Okay. Would anyone oppose not having a YOLO v3 and just moving straight to having a Berlin testnet list? What that would mean is that 2718, I think, is the kind of change that definitely requires multi-client testing — it can be in a separate EIP-specific testnet; I wanted to suggest focusing more on the EIP-specific tests.
Okay, and you're saying that that would be prior to having it as part of the Berlin list, right? Like, we'd want to see it running on some sort of testnet before we decide whether or not it's in Berlin. Yeah. James, you're back — we can hear you. Still nothing? No — yes, we can. Perfect. Um, sorry, I've been holding my tongue through the conversation a little bit and then was muted when I tried to talk. I don't want to be just someone outside this process that's trying to get something in — I'd like to be sticking around long term and contributing. So I'm going to speak against my own interests a little bit here. It feels like every two weeks we move the goalposts back on YOLO v2 and YOLO v3 by two more weeks. Two weeks ago we decided that there would be a YOLO v3 and that its EIP list was set, and now we're kind of reopening that and questioning the decisions we made two weeks ago. I would love to get some kind of defined process around this, so that issues like what EIPs get in, and what the deadlines for those things are, are set and we can know them in advance, so we don't end up with this kind of process mess every two weeks. Yeah — this is kind of a tangent, but I strongly agree with that. Danno had put an EIP forward, I think a year ago, that tried to address some of that — EIP-1872 — where we could preset some dates in the future for when we would actually have the mainnet hard forks, and that allows you to go backwards and set deadlines for stuff. And I guess the reason why this process with Berlin is maybe a bit clunky is that it's the first time we've tried these YOLO networks and we didn't have a fixed date for Berlin, so I agree it kind of leads to growing and reducing the scope and pushing things back. If what people care about the most is just having things move forward, then yeah, I think it makes sense that we keep YOLO v3 as is, and then maybe we have one last pre-Berlin step — it wouldn't even need to be a testnet, I guess, it would just be running the state tests. So we do YOLO v2, which is basically done, in the next week; start implementing YOLO v3, which I suspect will take us another two-ish weeks; and maybe set a final date for inclusion into Berlin, for stuff like 2565 to be resolved. I think we were waiting for 2929 to be tested, covered and implemented in all the clients, and this is the main delay of Berlin, I'm fairly sure. For me, I think 2718 seems like the big one. Is 2718 going into Berlin? It's part of YOLO v3, for whatever that's worth, and so is 2930. 1559 and the other changes — I don't think those were planned for Berlin, are they now? So, 2930 is dependent on 2718, and there is a very strong belief that we should not do 2929 unless we also include 2930, and there's also a strong belief that we should include 2929. So those three basically come together as a package: either we include all three or we don't include any of them — that's kind of the situation. And since 2929 is very strongly desired, that's pushing all three of those into Berlin, and that's causing Berlin to wait for all three of those. That's what I meant: if we have 2929 implemented everywhere — and I agreed on that — it will practically mean that Berlin is ready, because whatever was required for 2929 will be included as well.
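To make the package that was just described concrete, here is a hedged sketch of how the two pieces fit together: EIP-2718 wraps transactions as TransactionType || TransactionPayload, and EIP-2930 defines type 0x01, whose payload carries an explicit access list that lets a sender pre-declare (and pre-warm) the addresses and storage keys that EIP-2929 makes more expensive to touch cold. The field names follow the EIPs, but the Go struct and scalar types below are illustrative, not any client's actual implementation.

```go
package main

import "fmt"

// AccessTuple is one entry of an EIP-2930 access list: an address plus the
// storage keys the transaction expects to touch in it.
type AccessTuple struct {
	Address     [20]byte
	StorageKeys [][32]byte
}

// AccessListTx mirrors the EIP-2930 payload fields:
// rlp([chainId, nonce, gasPrice, gasLimit, to, value, data, accessList, yParity, r, s])
// (signature fields omitted here; scalar types are simplified for the sketch).
type AccessListTx struct {
	ChainID    uint64
	Nonce      uint64
	GasPrice   uint64
	GasLimit   uint64
	To         *[20]byte
	Value      uint64
	Data       []byte
	AccessList []AccessTuple
}

func main() {
	var target [20]byte
	tx := AccessListTx{
		ChainID:  1,
		GasLimit: 100000,
		To:       &target,
		AccessList: []AccessTuple{{
			Address:     target,
			StorageKeys: [][32]byte{{}}, // slots the sender declares (and pre-warms) up front
		}},
	}
	// Under EIP-2718 the wire format is TransactionType || TransactionPayload;
	// for EIP-2930 the type byte is 0x01 and the payload is the RLP of the fields above.
	const txType = 0x01
	fmt.Printf("type=0x%02x, access list entries=%d, data bytes=%d\n",
		txType, len(tx.AccessList), len(tx.Data))
}
```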
Yes, if you consider 2929 to be dependent on 2930, then yes, that is true. The thing is, it's not a dependency from a technical perspective, it's from a UX perspective, right — 2930 allows dapps that would otherwise break to keep accessing certain contracts. Yeah — the fear, for those that haven't been following along, is this: 2930 will introduce access lists, which will make it possible for your contract not to become completely inaccessible because of the 2929 gas repricing, because you'll be able to use access lists to get those gas costs back down to where they were if you absolutely need to — but no one is required to use them. So the fear is that if we do 2929 without 2930, then there may be contracts that are completely inaccessible because it's now too expensive to call them. So 2929 is the only EIP at the moment with enough of a feeling of urgency that we delay everything else just to have 2929 packaged properly. And it ended up being slightly more difficult than we were planning: it's not just raising gas prices now, it's trying to solve a few more issues, and it comes with other things. So do we think there are any other EIPs, aside from 2929, that we really want to push before pushing 2929? I don't think so. Well, this one is already included, but I think that 2537, the BLS precompile, was another pretty important one. Even though we might not ship Berlin prior to the deposit contract going live, I think there's a lot of value in being able to validate those deposits on chain. Just as a refresher — are there people who still demand 2930 in order for 2929 to go in? I don't know if we actually know that this will break any contracts; there's just a fear that it will. And I personally am okay with breaking things, but I'm much more aggressive than most people here — so do we still demand 2930 if we're doing 2929? I believe that if we don't, things will break and users will be very, very upset with Ethereum. That's enough for me, then. As long as there's someone here who really strongly believes 2930 should go in, then I'm on board. I just wanted to very briefly raise — I would like to stress the question: how confident are we that there are indeed any contracts that would break? Which one is it — I think I got the number wrong — but I just think, adding a new transaction type with access lists: I think there are concerns around the usability of access lists in general, with some use cases that might not lend themselves to static state access, so they couldn't really use an access list anyway. And then, obviously, that is a rather large commitment — adding a new transaction type that then has to be carried along more or less to infinity — if it turns out that it will not ever really be used. I'm not necessarily against it; I just basically want to make sure we think it's really necessary and that that's a trade-off we want to make. So, one thing that we could do, now that I'm fairly certain that the implementation is in line with the specification: I can redo the same analysis that I did on Görli — that is, take a couple of blocks, rerun the transactions under the 2929 rules and see if they start failing, and if they start failing, see if they also would have failed had more gas been given externally. I'm sure I will find a couple that will still fail. So yeah, such an analysis I can do; it's quite intense — I mean, it's a lot of work to weed out some of the results.
But if you want some extra clarity on whether these cases are actual, I can see if I can find a couple of such cases. Although if I don't find them, we won't know that they don't exist, because I can't do it for the entire chain. How unreasonable is it to just run the entire blockchain, arbitrarily set the gas limit per block very high, and see if there is any single transaction in the last six years or whatever that has ever used more than 10 million gas with the new gas pricing? Yeah, it's like none of the clients have an easy way to do that. No, I had a way to do it, like, yes, on a block-by-block basis. But you don't have a way to just rerun the whole chain with the new gas prices? No, well, it would basically be like full syncing but a lot slower. Yeah. Do any of the other clients have an easy way to do that, just out of curiosity? That would be an ideal test. We do have a way to do it, you know, because I actually do this sort of re-execution for call traces, and at the moment I think it would run for about two days or something to do something like that. So yeah, there is a possibility to do that. Although I wanted to actually suggest potential alternatives to EIP 2929, because, as a lot of people know, I'm still mildly opposed to it. We started to look at this more specifically a couple of weeks ago, and one thing we did is look at using some kind of filtering, bloom filters for example, and we were able to successfully defend against the most potent attacks, the ones that EIP 2929 is basically designed to protect against. Obviously those filters need some nuances, but essentially it's a way to do it without changing the rules. Although there are still attacks that hit not the absent data but the existing data; those attacks are a bit harder to mount and you probably cannot sustain them for a long time, but we are looking into those as well. I still think that EIP 2929 is too complex for what it needs to do. I have the same feeling, actually. And actually I'm going to share, because we're doing an analysis now on different types of filtering. What we have already done, and which works: we've done a very simple implementation of a bloom filter, I mean we took, let's say, a half-gigabyte filter and also a quarter-gigabyte one, both with 15 hash functions. And with the half a gig, that's only for accounts, the half-gigabyte bloom filter is able to protect perfectly, basically it has zero false positives when you try to read non-existing accounts. If you reduce it to, let's say, a quarter of a gigabyte, then the false positive rate is something like 0.01% or so. And then we're also going to look at other filters like cuckoo filters and such, because they allow deletions. But generally I think we might be able to find a way to at least protect against those attacks and move on to the next level, which is the more expensive attacks that could still be executed. And as I said, it doesn't require changing the rules and doesn't require this... basically, I think the EIP is becoming a bit too complex from my point of view.
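As a rough illustration of the filtering approach being described, a large bit array with around 15 hash functions used as a cheap pre-check for account existence, here is a minimal bloom filter sketch. The sizes, hashing scheme, and names are illustrative assumptions, not the implementation that was actually benchmarked on the call.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// bloomFilter is a minimal bit-array bloom filter, illustrating the
// account-existence filter described on the call (e.g. a ~0.5 GiB array
// with 15 hash functions). Details here are illustrative only.
type bloomFilter struct {
	bits []byte
	k    int // number of hash functions
}

func newBloomFilter(sizeBytes, k int) *bloomFilter {
	return &bloomFilter{bits: make([]byte, sizeBytes), k: k}
}

// indexes derives k bit positions via double hashing: h1 + i*h2.
func (b *bloomFilter) indexes(key []byte) []uint64 {
	sum := sha256.Sum256(key)
	h1 := binary.BigEndian.Uint64(sum[0:8])
	h2 := binary.BigEndian.Uint64(sum[8:16]) | 1 // force odd
	m := uint64(len(b.bits)) * 8
	idx := make([]uint64, b.k)
	for i := 0; i < b.k; i++ {
		idx[i] = (h1 + uint64(i)*h2) % m
	}
	return idx
}

func (b *bloomFilter) Add(key []byte) {
	for _, i := range b.indexes(key) {
		b.bits[i/8] |= 1 << (i % 8)
	}
}

// MayContain returns false only if the key was definitely never added;
// a true result can be a false positive.
func (b *bloomFilter) MayContain(key []byte) bool {
	for _, i := range b.indexes(key) {
		if b.bits[i/8]&(1<<(i%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	// A tiny filter for demonstration; the call discusses 0.25 to 0.5 GiB.
	f := newBloomFilter(1<<20, 15)
	f.Add([]byte("existing-account-address"))
	fmt.Println(f.MayContain([]byte("existing-account-address"))) // true
	fmt.Println(f.MayContain([]byte("absent-account-address")))   // false with high probability
}
```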
Yeah, I mean, what you need to do with bloom filters is more or less the same as what we need to do with the flat database: you need a layer of bloom filters that goes back as far as you would want to be able to handle a reorg. It can be done. Well, basically with a cuckoo filter you can actually handle reorgs pretty well, because you can delete from it as well as insert into it; with a bloom filter the problem is that you can't delete stuff from it. So we have basically two minutes to go, sorry to kind of jump in here. I'm not sure what the best next step is here, because yes, we do have the lists for YOLO v3 and YOLO v2, but is it even worth implementing all this stuff if there's a high chance that we don't end up doing it? I'm curious what people feel is the best next step for 2929 and then the bundle of 2718 and 2930. I'm just thinking it over. Go ahead, sorry. No, sorry. I was just going to say that I'm still pro 2929 and 2930. And one of the things I think is missing from the bloom filter approach is that we would then have to integrate it into the sync, or everyone would have to sync from zero, to share that data around. Oh, no, that's not necessary, because you don't have to sync from zero to construct the filter. You just need to iterate through the state and build it initially. So this could be done, or what you can do as well is download a pre... well, not pre-made, because actually everybody's bloom filter has to be slightly different, so that... But then we're adding extra complexity: you have to mess with your sync system and then you're going to have to salt your bloom filter seeds. So I don't think it's any simpler than 2929. Well, yeah, we could argue about that, it could arguably be more complex. Okay, so I guess we should probably just continue that conversation on the core devs chat. I know this is not a great outcome in terms of clarity of process, but it feels like it's probably worse to push a YOLO v3 forward if we don't have clarity on that, especially given that the implementation of YOLO v2 is not fully done. Hopefully we can agree, maybe async, on what the YOLO v3 spec would look like. Does that make sense for everybody? If we know what happens with 2929, we will know what happens to Berlin and which other EIPs will come in. Everything else is just dependent on this one; this one was considered critical. Yeah, it got a bit more complex, it has some dependencies, it doesn't have an agreement around it, and everything for Berlin, whether it's YOLO v2 or v3, hinges on it. If we want to bundle the EIPs together, everything collapses around 2929. Either we do test nets which are separate for each EIP, to allow the people who are waiting for those EIPs to actually push them forward, because many of them are totally independent from 2929, or we bundle everything into one big Berlin and get delayed by two or three months. So what are your suggestions?
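One possible shape of the "layer of filters going back as far as the reorg depth" idea mentioned above is sketched below: a large frozen base filter built once from the state, plus small per-block layers of newly created accounts that can simply be dropped on a reorg. This is purely illustrative, not a design any client team presented on the call; a cuckoo filter in place of the plain bloom base would additionally allow deletions, as noted in the discussion.

```go
package main

import "fmt"

// existenceFilter is whatever approximate-membership structure backs the
// frozen base layer (for example the bloom filter sketched earlier, or a
// cuckoo filter, which would also support deletion).
type existenceFilter interface {
	MayContain(key []byte) bool
}

// layeredAccountFilter keeps one large frozen base filter plus small
// per-block sets of newly created accounts. A reorg simply drops the
// layers above the common ancestor, sidestepping the fact that a plain
// bloom filter cannot delete entries.
type layeredAccountFilter struct {
	base   existenceFilter
	deltas map[uint64]map[string]bool // block number -> accounts created in that block
	head   uint64
}

func newLayeredAccountFilter(base existenceFilter, head uint64) *layeredAccountFilter {
	return &layeredAccountFilter{base: base, deltas: map[uint64]map[string]bool{}, head: head}
}

// OnBlock records the accounts created by a newly imported block.
func (l *layeredAccountFilter) OnBlock(number uint64, created [][]byte) {
	set := map[string]bool{}
	for _, a := range created {
		set[string(a)] = true
	}
	l.deltas[number] = set
	l.head = number
}

// OnReorg discards the per-block layers above the common ancestor.
func (l *layeredAccountFilter) OnReorg(ancestor uint64) {
	for n := ancestor + 1; n <= l.head; n++ {
		delete(l.deltas, n)
	}
	l.head = ancestor
}

// MayExist is the cheap pre-check before touching the database: a miss
// means the account definitely does not exist, so the expensive disk
// lookup that the state-access attacks exploit can be skipped entirely.
func (l *layeredAccountFilter) MayExist(addr []byte) bool {
	if l.base.MayContain(addr) {
		return true
	}
	for _, set := range l.deltas {
		if set[string(addr)] {
			return true
		}
	}
	return false
}

// emptyBase stands in for a real filter in this demonstration.
type emptyBase struct{}

func (emptyBase) MayContain([]byte) bool { return false }

func main() {
	f := newLayeredAccountFilter(emptyBase{}, 100)
	f.OnBlock(101, [][]byte{[]byte("new-account")})
	fmt.Println(f.MayExist([]byte("new-account"))) // true
	f.OnReorg(100)
	fmt.Println(f.MayExist([]byte("new-account"))) // false again after the reorg
}
```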
I suggest we stop bundling the EIPs into YOLO v2 and v3 and allow people to actually push for EIP-specific test nets, which will be able to show that two or three clients can sync on a particular EIP, for example 2718 or 2565, or, well, that one doesn't even require a test net, but things like whatever other people are suggesting, because they're waiting for Berlin to be defined. Like here, James comes over and says there is something that is potentially important for them and potentially useful for the network, and it's totally not dependent on 2929, but they are blocked here waiting for it, and it feels paralyzing. So we need to have a path for them, if they want to spend the time, to provide a working prototype that shows this is how we can implement this change in all the clients and agree on it. So I have a proposal for this, and in a way it's iterating over two great things that we were discussing over the last year. One was the EIP-centric upgrades, which is fine, but it only gets much, much stronger with the EIP-specific test nets that we started introducing recently with EIP 1559. I think by binding these two things together, we can show an exact path for any external team for how they can deliver a fully tested, described, and analyzed EIP specification that can be easily pushed all the way to mainnet, if everyone agrees that it's beneficial for the network, for good use cases or as an improvement, and stop bundling things and delaying them just because some things are not agreed on. Yeah, and I think, James, your team was already working on a test net for EIP 2539, right? Oh yeah, it's been running for a couple of weeks, we've been fuzzing the implementations against each other. Yeah, so maybe, again, we won't resolve this in three minutes over the call, but James, if you can share that test net on the core devs Gitter chat, that would be great, so people can have a look at it. Yeah. Sorry, is that still the Gitter chat, or on the Discord? Either would be good. And then the other thing that was on the agenda, which we can maybe very quickly cover, is that we were talking on Discord yesterday, I think, about the fact that most clients seem to be working on a sort of flat database approach. So Geth has the snapshots, Besu has Bonsai Tries, Nethermind is working on something as well, and TurboGeth is obviously architected like that from the start. And because we're all working on kind of different flavors of it, it might make sense to set up a call to just discuss that and share notes on it. Would people be up for that? Generally yes, and we can maybe find a time on the core devs chat if that makes sense. Okay, cool. I'll post something on the core devs chat right after this so we can coordinate the time. Any final things that can be covered in less than a minute before we jump off? Okay then. Thanks, everybody. See you. Thanks, everyone. Bye bye.