I'll share my screen and we can see how that works. So this is the checklist right now: these are the EOF EIPs that we've been working on, and the open issues. The first one is just renaming all of the EOF EIPs to have this prefix. Sorry, I'm a little choppy; can anyone hear Pavel, or is it everyone? I think the issue is on my side; I'll be somewhere with better reception in 10 minutes. Okay, sounds good. In the meantime, we can go ahead and start going through some of these things. The first one is more of an operational thing: having this "EOF" prefix in the title of all of the EOF EIPs. I think this makes sense. There's a question of whether it also makes sense for static relative jumps. I feel like it does for now, because we're considering all of these for Shanghai together as an atomic package, so we might as well just do this, and if for some reason it falls through later, we can just remove it. I don't know if anyone thinks differently about that. The reason this came up is that there was no nice document like this listing out what is being considered, no entry point where people could find all the information. So what people were doing was grepping for "EOF" on eips.ethereum.org, which is currently not possible. Right, makes sense. I've been trying to push people to this page when I can as well; I think it's a good overview place, but it would be good for them to be able to grep for it too. So if you want to accept this PR to make that modification, that would be great. The first EIP to talk about is 3540, the EVM Object Format v1. First question: move the generic contract creation rules over from 3670. This is basically the logic about how to deal with create transactions. Yeah, from my point of view it makes sense to move it, since right now one EIP basically just copies it from the other.
But I think we can move most of this generic logic: how contract creation fails, when it fails, what the gas cost of it is, because these are general rules about EOF. For some reason, I don't even remember why, they ended up in the second EIP, but that was already seen as confusing. So yeah, I'm for moving what's possible into the main one, and I think the code validation EIP will just be smaller because of that. Yeah, makes sense; I'm in favor of that. Okay, next we have a question about whether we should forbid EOF from deploying legacy contracts. I think yes, but I'm also very far on the spectrum of generally restricting what people can do with respect to legacy. So does anyone have an argument for allowing EOF to deploy legacy contracts? I think the main question is whether there's a use case where this is desired. We've asked this question a bunch of times, and maybe there was some discussion around it at DevConnect, but unless an important use case is found that this would prevent, it would likely simplify everything if it weren't allowed. And I think it's true that if we disallow it and later want to allow legacy contract creation, we don't have to bump the version, whereas if we do allow legacy contract creation and later want to remove it, we might end up breaking things. So it seems better to restrict as much as we can in the beginning and see if people request the functionality. Does EOF break legacy? No, EOF does not break legacy contracts. Okay, so let's go forward with this, and if anyone comes up with a use case for it, we can discuss more, but let's just be restrictive in the default case. Okay: clarify that the overall code size limit still applies.
I haven't looked at this, but I assume "overall" means the whole EOF container, not the sum of the EOF code sections. Yeah, I don't think it modifies the meaning of the code size limit, because that's done one step outside of this. Yeah, at least my assumption was that the overall limit, what is called the code size limit today, though we tried to reframe it as a container size limit, applies to everything. But we did have a discussion, in the context of both 3860 and EOF, about what would happen if the limit were changed. What 3860 says is that the creation code limit is twice the deployed code limit, and we were thinking: what if we actually modified both of these to be slightly bigger than they are today? That's another discussion to be had at some point. Yeah, that makes sense. It was confusing to us what this means; I think we always intended it to be the whole thing, but we might need to reread the spec to make sure it's clear. Yeah. For that, I believe you need to investigate the origins of the limit: why it was introduced and what it's supposed to cover and restrict. I think it was more about state growth and not about JUMPDEST analysis. If it were only for covering JUMPDEST analysis, then of course it would make sense to say it only covers the code sections, because those are the only ones affected. But the main question is: are we even open to complicating this limit? So, for EIP-170, are we open to complicating it such that it wouldn't cover everything, only the sections? I think the likely answer is no, because that seems like quite a complex rule set, but maybe that's something we have to answer. I would also agree that it would be better to avoid complicating the EIP-170 rule set, at least for now.
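The two limits referenced above can be sketched as follows. The constants are the actual EIP-170 and EIP-3860 values; the helper names are illustrative, and applying the deployed-code limit to the whole container is the interpretation discussed here, not settled spec:

```python
# Sketch of the size-limit reading discussed above: the existing EIP-170
# deployed-code limit applies to the whole EOF container (headers plus
# all sections), not per code section. Function names are illustrative.

MAX_CODE_SIZE = 0x6000                  # EIP-170: 24576-byte deployed-code limit
MAX_INITCODE_SIZE = 2 * MAX_CODE_SIZE   # EIP-3860: creation code is twice that

def container_within_limit(container: bytes) -> bool:
    """The whole container counts toward the deployed-code limit."""
    return len(container) <= MAX_CODE_SIZE

def initcode_within_limit(initcode: bytes) -> bool:
    """Creation code is checked against the doubled EIP-3860 limit."""
    return len(initcode) <= MAX_INITCODE_SIZE
```

The alternative discussed (counting only the code sections toward the limit) would complicate this rule, which is why the simpler whole-container reading is preferred for now.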
But yeah, I was also thinking that the code size limit was originally put in place under the assumption that JUMPDEST analysis was not linear in cost; in that sense, it would make sense to apply it only over the code sections. But for now my preference would be to stick with what exists and run the limit across the whole container. I think it was not only for that, but also for state growth: some opcodes were not well priced, and the bigger the deployed contracts, the bigger the cost you could impose using those opcodes. So I don't think it was related to JUMPDEST analysis, but yeah, we should really ask; maybe Martin would know. Understanding why the limit was added should answer this question too. Okay, anything else on 3540 that we should discuss? Okay, 3670, EOF code validation. We didn't mention this, but I assume this point will also involve removing the contract creation rules from the code validation EIP. So the main thing to update in 3670 now is rejecting the deprecated opcodes. Call code, we mostly already accept that's going to be deprecated; it has been deprecated for a long time. Self-destruct, I guess, is the more questionable one: should it be deprecated or not? What do people think about that? I can express my opinion: call code I'm definitely in favor of rejecting; nobody really needs call code. Self-destruct is still up in the air. I think a final decision about that can only really be made once there's more clarity about what the future of self-destruct is going to be, but I'm leaning towards just restricting it, because it can be added back later. Yeah, I was also leaning pretty hard towards restricting it, but now I'm questioning it slightly more with people wanting to use it as a kind of "pay all" opcode.
And even if we do get rid of the self-destruct functionality of the opcode and it's just "send all", I wonder if removing it is going to be a frustration. I still think the EVM is better overall without it, even if it's just a "pay all", but I'm not sure whether that will upset developers. Then again, if we come at it with the mentality of more restriction in the beginning, this is something that could be added back: if people say legacy has self-destruct to force-send the whole balance and they want it on EOF v1, it can be added in a future hard fork. Isn't the only reason self-destruct is still in legacy contracts that removing it would probably break some contracts on mainnet? Yeah, the only reason we aren't just removing it is that we're afraid to break things, but with this format we're breaking a lot of things anyway, so it should be okay to remove it altogether. Yeah, I agree; sometimes you just have to rip the band-aid off. This is one we should have done a long time ago. Right. I guess the argument is that there's still a likelihood self-destruct gets modified in Shanghai, the same hard fork, and if that happens, the decision here likely has to change in light of that, whatever the decision turns out to be. But if self-destruct is left as it is in Shanghai, then I think restricting it is the best way forward. Sorry: if no changes are made to self-destruct in the rest of the EVM, then EOF shouldn't allow it. But I think there's hope that a self-destruct change is going to happen anyway. Yeah, and if it does, then, I mean, this is really just a note we have to keep in mind to review. In both cases we're removing self-destruct; if self-destruct is basically renamed to a "send all", it's a different opcode in any case.
But yeah, I totally agree with that. Sounds good. Okay, anything else on 3670, code validation? Hi, Dana. Hi, sorry I'm late; the calendar had it over an hour later. Yeah, I think Tim accidentally put it on an hour later, and I realized this morning, so I moved it to the correct time. Did I miss anything? No, I've recorded a couple of things related to 3540 in this document, but generally it's all been accepted. We were just finishing up 3670 if anyone has anything else; otherwise we can go to 4200. Okay, 4200, static relative jumps. One question here: should we calculate the offset of the relative jump from the current instruction position, or from the next instruction position, which is plus three bytes, accounting for jumping over the opcode and its two-byte operand? Alex is in favor of the plus three. That's because it slightly favors jumping forward over backward, is that right? Yeah, and the calculation also seems clearer in my mind. And RJUMP 0 becomes a no-op in this case, while RJUMP -3 becomes an infinite loop. Jump what? RJUMP -3 becomes an infinite loop. Yeah. I don't have a preference; does anybody have a preference for changing the semantics to offset from the current instruction? I have a slight preference for the other way, just because the VM implementation is simpler, but it's a tiny difference, so I don't really care much; this plus three just shows up in some implementations. For sure. What do other opcode sets do for this? There's nothing really similar to this one, I think; there aren't a lot of jumps. Well, I believe the JVM has it calculated from the instruction itself, but I'm not entirely sure; I did some research on the JVM when I was comparing the verification.
Daniel, do you want to speak from Solidity's perspective? Yeah, the offset doesn't make much of a difference for us; we can generate code either way, so I don't mind at all. In general, we have started to discuss whether we want the EIP extended with some jump table instruction. I'm not sure whether we want to go into that; we're not entirely clear about it yet. But regarding the offset, it doesn't matter to us. Okay. I don't know how to make a decision on this. Right now we're still at the plus three bytes; I think we should either keep it at that, or change it if somebody can point to a few other instruction sets that calculate from the instruction itself, or some other serious reasoning. Otherwise let's just stick with what we have. I think there are two data points that can help make this decision. One is, once this is implemented in all of the EVM clients, looking at which way has more overhead from the EVM perspective. Pavel said that in evmone the current variant has a slight overhead, but it would be nice to see it across all the different EVMs, and I suppose we likely have to prefer the EVM interpreter perspective. The other data point could be statistics from Solidity once this is implemented there, which is planned for this year: we could collect statistics on the frequency of backward versus forward jumps. That's another data point that could help this decision. Okay. But yeah, I think the EVM implementation takes precedence, because we want to keep the EVM implementation simple. Yeah, makes sense. I think there's definitely going to be an overhead for the plus three semantics, but it's just one addition, so we just have to decide whether that's worth it. Okay, I wrote a note on that.
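The two offset conventions being weighed can be written down in a couple of lines. This is a sketch of the semantics under discussion, not spec text; the function names are illustrative, and the "+3" reflects the one-byte opcode plus its two-byte immediate:

```python
# The two RJUMP (EIP-4200) offset conventions discussed above.

def target_from_next(pc: int, offset: int) -> int:
    # Offset counted from the instruction *after* RJUMP (the "+3" variant):
    # RJUMP 0 falls through to the next instruction (a no-op),
    # RJUMP -3 targets the RJUMP itself (an infinite loop).
    return pc + 3 + offset

def target_from_current(pc: int, offset: int) -> int:
    # Offset counted from the RJUMP instruction itself (the JVM/JSR-style
    # variant): here RJUMP 0 is the self-loop and RJUMP 3 the no-op.
    return pc + offset
```

The "+3" variant costs the interpreter one extra addition per jump, which is the slight evmone overhead mentioned above.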
Anything else on 4200 that people would like to discuss? Java's JSR calculates it from the instruction, not from the next instruction, so that's one data point. Okay. Does anybody want to go ahead and say we should change it right now given that information, or should we wait until we have the feedback Alex pointed out? Let's just go ahead and change it unless we find a counterexample, because the interpreter overhead is in favor of changing it, and JSR in Java is in favor of changing it too; that's pretty strong. I'm indifferent, happy either way. So are there any other languages or VMs we should check? I don't think WebAssembly is really relevant, because it has much more sophisticated jump instructions based on labels, so the question doesn't show up there. What's the name of Microsoft's bytecode? CLR? CIL, that's what I'm looking for; the runtime is the CLR, but the bytecode is called CIL. Okay, let's loop back to this after we look at a couple of others. I think there's one more data point. As Daniel mentioned, we have been discussing the jump table, this JUMPV instruction, and there are many open questions around it: whether it should be dense or sparse, whether it should have the data as an immediate or in a section. It depends on many questions we've been discussing, but it would be nice to get more clarity on that. We want to discuss it on Monday in terms of Solidity, and that may also have an influence on this, because ideally, if we do end up having such an instruction, it should follow the same offset semantics as RJUMP. So that may be another data point. Sounds good, let's loop back to this next week. I'll also try to join next week to discuss what we ended up with on the Solidity usage of jump table instructions. Okay. All right, that's 4200. 4750, EOF functions.
First thing: we set the CALLF and RETF opcode numbers to 0xB0 and 0xB1. I believe that's what this change was for. Yeah, 0xB0 and 0xB1. Is that okay with everyone? Anyone against? Okay. Set gas costs for CALLF and RETF. I don't know if there was a PR for this, but it's listed down here: RETF would be 3 gas, CALLF would be 5 gas. I haven't looked at the instructions closely enough lately to know how these numbers feel, but I assume they're okay. The numbers are a bit lower than we originally proposed, but I did try to analyze what micro-operations the implementation actually has to do, and based on that I tried to give some consistency. They are lower than some people might expect, though, so I'm not sure this isn't controversial. Yeah. One thing from my side, from Solidity code generation: one of the issues with EOF functions is that it makes it hard for us to do bytecode-level code deduplication, because we can't jump across different functions. If the opcodes are as cheap as they are here, then a conditional jump plus a call together is still cheap enough that, even if we don't find a nice solution for the problem I'm thinking about next week, which we will try, it will not be harmful: it will not be more costly than what we had before. So that's an argument for these low values being nice. Yeah, that's one piece of feedback I've gotten. And you guys will talk more about the jump table instruction, but some people were a little upset that, with relative jumps, because you can't jump to a dynamic location anymore, their optimized bytecode got more expensive. So keeping calls cheap makes that a little bit better.
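The opcode numbers and gas costs just agreed can be sketched as an interpreter fragment. The 0xB0/0xB1 assignments and the 5/3 gas values come from the discussion above; the return-stack mechanics follow the general EIP-4750 design, but the class and function names here are illustrative, not real client code:

```python
# Minimal sketch of CALLF (0xB0) and RETF (0xB1) execution per EIP-4750.
# The return stack holds (code_section, return_pc) pairs so RETF knows
# where to resume in the calling section.

CALLF_GAS = 5   # proposed cost discussed above
RETF_GAS = 3

class Frame:
    def __init__(self) -> None:
        self.return_stack: list[tuple[int, int]] = []
        self.gas_used = 0

def exec_callf(frame: Frame, section: int, pc: int, target_section: int) -> tuple[int, int]:
    """CALLF: push the return address, continue at pc 0 of the target section."""
    frame.gas_used += CALLF_GAS
    frame.return_stack.append((section, pc + 3))  # skip opcode + 2-byte immediate
    return target_section, 0

def exec_retf(frame: Frame) -> tuple[int, int]:
    """RETF: pop the return address and resume in the calling section."""
    frame.gas_used += RETF_GAS
    return frame.return_stack.pop()
```

Because the target section was validated at deploy time, neither operation needs a runtime JUMPDEST-style check, which is part of the argument for the low gas values.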
Yeah, we were actually talking about three features in total over the past two weeks: the jump table is one, conditional call is another, and tail call is the last. Are these things you want to try to put in the suite as well? I think we want to do a bit more analysis from Solidity's perspective to find out which of these is best. Ideally, we don't want to put in all of them, because we still want to keep the surface relatively small, but I could see the tail call or the conditional call, one of those two, becoming part of it. I'm leaning towards the tail call. The jump table obviously would be useful for many different cases, but that requires more analysis from Solidity's perspective. Yeah, I think we could live without either if we had to, but both would be very good to have. We'll get back to you once we've played around with it a bit more next week. Okay, sounds good. I think, generally, the sooner we can have some sort of spec introduced to All Core Devs, the less people will shout about more EVM changes going in, so keep that in mind. Yeah, my plan was to draft up, maybe not an EIP, but at least a spec for these instructions, and we want to have some preliminary idea before next week. Whatever you want to propose, that would be great. Okay, that was the gas costs. Next one: redefine the code section header to be an array of code section sizes. I think I opened this PR. You guys can weigh in, but it felt like the way the EOF header was evolving was partially due to the way the EIPs were written, and the anticipation that some of them would go in before others. But if we think of them all as an atomic unit now, it makes more sense to have the code section sizes just be an array. But the code section is a repeated field though, for functions.
The code kind is repeated, isn't it? I don't know if I follow. Currently you're allowed to have multiple code sections, and that will still be allowed, but in the header of the EOF container each section is an individual entry: you have the section kind, 1 for code, and then the size of that code section. I'm proposing that instead of multiple (kind, size) pairs for code, you have a single code kind followed by a list of code sizes, each two bytes. And there are two ways to know the end of that list: either we enforce that there's always a type section header, so you take the size of the type section, divide by two, and that's the number of code sizes to read; or you could have a null-byte terminator. Isn't the size field itself fixed length anyway? So you can just divide the size of the... okay, yeah, it's in the header. Yeah, I think you can do that. Do you want to mix the type and the code, have them in a single header? Was that discussed? I kind of proposed that, but I didn't actually dig into it. I think it technically makes sense, but we lose this one level of abstraction: the way the EOF headers are defined is very simple, just the kind and the size of the content. The type section actually has the types in its content, and for code we can't do that. But because we always require the type and code sections to go together, and they're matched up, we could actually combine the two. There are a number of ways to do it; you could even put the type in the content, prefixing the code, though that's maybe also annoying, because you'd have to remember that the first two bytes mean something different from the code. Yeah, it might be better to keep them separate.
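The array-of-sizes proposal just described can be sketched as a parser. The kind byte, the big-endian two-byte sizes, and deriving the count from the type section size (two bytes of type information per code section) all come from the discussion above, but this exact layout is an assumption for illustration, not the final EIP-3540 encoding:

```python
import struct

# Illustrative parse of the proposed header layout: one "code" kind byte
# followed by an array of 2-byte big-endian code-section sizes, where the
# number of entries is derived from the type section size (2 bytes of
# type info per code section). Kind value 0x01 is assumed for the sketch.

def parse_code_sizes(header: bytes, type_section_size: int) -> list[int]:
    num_sections = type_section_size // 2      # one 2-byte type entry per section
    assert header[0] == 0x01                   # assumed kind byte for "code"
    sizes = []
    for i in range(num_sections):
        (size,) = struct.unpack_from(">H", header, 1 + 2 * i)
        sizes.append(size)
    return sizes
```

The null-byte-terminator alternative would replace the `num_sections` derivation with a scan for a terminator, at the cost of one more byte and a slightly messier loop.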
Yeah, there are multiple options; I haven't actually put a lot of time into it yet. Okay, I don't know if we can make a decision on that here, but there are lots of different ways to do this, and we should definitely consider them. I think the important indicator of which way is better is the parsing complexity. It would be great to keep the parsing complexity low and also keep the total size and redundant information low, but I would rather have a bit of redundant information if it retains simplicity of parsing. Okay: clarify that if the data stack has less than the caller's stack height plus the code section's outputs, then execution results in an exceptional halt. And what happens if the stack height is larger than that number? I don't really understand this one, honestly; if the stack height is larger, does it not just return? Yeah, that's a complicated one. It's mostly about the RETF instruction that ends a function execution, and it's related to the validation as well. For this EOF functions EIP we don't have the strict validation yet. In the type section you have the number of outputs specified for a function, and you need at least that number of items on the stack when RETF is executed; the spec says it has to be at least this number. But if you think about it, you want to keep the top stack items as the outputs, right? So it seems RETF would need to modify the stack somehow: keep only the top items and remove the extras from the bottom, or something like that. Unless we keep it as "at least" and it somehow works out, but I'm not sure; maybe we can discuss it later offline, or we'll come back to this if we have time for stack validation, because it's repeated there as well.
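The two readings of the RETF rule being debated can be written out side by side. "At least" is what the spec text currently says; "exactly" is the stricter variant discussed next. The function names are illustrative:

```python
# The two readings of the RETF stack rule discussed above. "caller_height"
# is the stack height when the function was entered; "outputs" is the
# declared output count from the function's type entry.

def retf_ok_at_least(stack_height: int, caller_height: int, outputs: int) -> bool:
    # Current spec wording: extra items below the outputs are tolerated,
    # which raises the question of which items actually count as outputs.
    return stack_height >= caller_height + outputs

def retf_ok_exactly(stack_height: int, caller_height: int, outputs: int) -> bool:
    # Strict variant: exactly the declared outputs remain, so RETF never
    # has to shuffle the stack and the implementation stays trivial.
    return stack_height == caller_height + outputs
```

Under the "exactly" rule the caller always knows the precise post-call stack height, which is what the strict validation in 5450 relies on.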
And I think with the strict validation, what we ended up with is that we want exactly this number of elements on the stack. So we can make it the same restriction here as well: not above-or-equal, but precisely equal. And then the implementation is trivial. Yeah, that of course shifts the complexity to us, but it's fine; we've been doing that always anyway. Yeah, we're aware of that, but the resolution was to see how the code generation will handle it, how bad it will be. I mean, all the functions we generate so far already leave exactly the declared number of outputs on the stack, so it won't make things easier for us, but it also won't make them harder than before. Okay. And that's also the next item, which is basically the same. I think I can handle both of these cases later with some proposed change. Okay, let's do that and keep going; we can revisit it after you do a little work on it. So let's skip this one too. Next is limiting the number of functions to 256, so that we can use a single-byte immediate for CALLF. That seems okay; honestly, I don't really have a way of making a decision on that. I don't know if anyone has thoughts on why 256 is okay. Yeah, so that would limit total code size to about a meg and a half, I think. I don't know if that's good or bad. Oh, because of the max code section size. Yeah, I think I did the math; let me check it. Well, a meg and a half would be almost an order of magnitude more than we have now. No, no, I think that's the confusion we already discussed: the code size limit wraps all of it, the whole EOF container. I think that's right.
Well, currently there's the limit imposed by the EVM, of course, but the largest possible container by the format itself is actually 16 megs. Okay. And 16 megs should be enough for anyone. But it does kind of prevent the hyper-optimization of having a lot of tiny helper functions. Possibly; that's yet to be found out with the Solidity implementation: how gas-efficient it is to split things up. Currently Solidity already has an insane number of helpers in code generation, which are then inlined and potentially reduced and deduplicated. We'd have to see what strategy works best with function sections, depending on the gas costs. Deduplication may end up generating small functions in large quantities: if we do block deduplication by generating functions and calling them, even though it's really just the tail of a function, we could get a lot of them. So I'm not entirely sure how many functions we'll actually end up with on average, and whether it's efficient also depends on the gas price. It feels like the limit is 1024, right? I think it's somewhere there. Andrei put a limit on the number of sections, and it's 1024. Yeah. Okay, it seems like most people are in favor of this. I will say that my uninformed take is that 256 feels like a small number. We also have the gas costs, the 5 for CALLF and 3 for RETF, which I think is a question of cost there. I don't know; I'd go as low as we can, but I expected it might be controversial to do it this way. It shouldn't be cheaper than JUMP, though; there's no way it should be cheaper than JUMP. Yeah, it's a bit more work than a jump. JUMP and JUMPI are 8 and 10.
Yeah, but JUMP and JUMPI require the JUMPDEST analysis. Right, but you still have to either keep all the code in the same memory and change your offset, which would be the cheaper way to do it, or load a new code section into memory. I think there's a bit more than one gas of work to do for a call. You also have to check stack arguments. Do you check stack arguments at runtime? I don't think there's any checking; you just jump to that section, because it was validated. Yeah, you can pre-validate that. So yeah, let's think about the gas cost. Okay. My understanding is that this limit change is not strictly decided in favor, so maybe we keep it as is: keep the limit at 1024, the current two-byte immediate. That's how it's implemented already, so we don't have to change it if we're not sure, and we can look back at this. Okay. All right, we've got 13 minutes left, so let's keep that in mind; we have a couple more things to get through before 5450, and I think we should definitely talk about that one, since it's getting the most criticism right now. Anyway: reject JUMP, JUMPI, JUMPDEST. I mentioned I think it might be good to expand the rationale a little bit on this: what does this really provide, why do this, et cetera. But it's everyone's favorite. This is the first moment we can actually do it, because we have a replacement, and the benefit is that you don't have to do the JUMPDEST analysis. Right, I just think it would be good to say that explicitly in the EIP, for people who don't have as much context. I don't think there's even been any pushback on it at the moment. Yeah, I guess everyone is on board with that. Okay, next one: replace JUMPDEST with NOP. It kind of already is a no-op; it just happens to also be a jump destination. Yeah.
I think that's exactly what it is. So instead of removing JUMPDEST as invalid, we can reassign it, change the name. We can consider it offline. And the same for PC, the next one; I had it somewhere in the notes. You won't be able to observe where you are in the code, but we haven't done any analysis of how good or bad that is, so let's keep it for now. Okay, sounds good. Next: EOF sections need to be in order. That makes sense to me. Then, make the type section mandatory. I think this makes a lot of sense. I think the reason it wasn't mandatory was the potential for different EIPs going in at different times, but if we're doing them all at once, it seems right. Actually, there are some arguments here: Charles from Vyper left some comments on Ethereum Magicians and also reached out because his comments weren't answered, and this was from a while back. What he asked was why we even enforce a code section for data contracts. Yeah, and in that case the type section is again just overhead; with a single code section the type section definitely is overhead. Well, it depends on the validation, right? It depends on the next section here, and also on the discussion of whether we keep the type information in the code section or not. Yeah, there are a lot of questions around this in any case. But the optimizer community would definitely ask why have this if it's not used. I think it doesn't really matter in terms of validation, because we'd just have an implicit type there; it's a matter of where this information is encoded, and it doesn't really change how things behave. There's no way to make the data section accessible from inside the EVM, right? No.
So there's still an argument for data contracts. But is it really that big a deal to have a single code section? I mean, the extra overhead is like five bytes total, probably. Seven. Seven, and it makes the parsing simpler, because you don't have multiple cases. Yeah. And this information is available to contracts if they want to parse it, because they can access the bytes. So what's the best way to make a more informed decision on this? Do we need to sketch out the different approaches at this point, compare them, and see if anything falls out of that? Or are there other things that, if answered, would make this clear? What we're fighting for is, what, 28 gas? It's mostly zero bytes, so we're fighting for less gas with respect to contract deployment. Right. Oh, it's 200 a byte too, for the code deposit. Still, we're talking about small amounts of gas. Okay, I can maybe write up a couple of different approaches for doing this, unless somebody else would rather do it, and then we can compare them. I'm personally in favor of being more strict to make parsing easier, optimizing for runtime cost at the expense of the storage cost, but maybe there's something to be said for the other way: which is more important, and do people really want to optimize for storage costs? Yeah, I think I'm on the same page as you. So I'll write up a couple of different ways of doing it and look at the differences a bit, and let's talk about it again later. Okay. Clarify that the 1024 stack limit still applies: I think that makes sense, just a simple clarification to 4750. Okay, six minutes left. 5450, stack validation.
Yeah, this is the one I've heard the most criticism about. Obviously right now it's not something that people are considering as much as part of this full EOF suite — it's not considered for the next devnet right now. And Martin has probably been the loudest person with criticism of this specification, how ready it is, et cetera. So I don't know if people have thoughts on that. Just a quick question: if we don't end up cleaning the stack on a terminating instruction, what happens to the elements left on the stack? Actually, I didn't understand the question. The way I understood the question is that the stack elements just still exist from the parent call frame; there's no need to clean them up. Yeah. So, very quickly explained: when you call a function, you get a subset of the stack space available, and the idea is that when the function returns, it leaves on this subsection exactly the number of items that is specified in the function type. So the caller knows exactly how many items will be on the stack after the call. It's kind of explicit. And it's a matter of whether this is verified at deploy time or checked at runtime, but the behavior is a bit more strict. One point I wanted to mention: with respect to Martin, something that would make him feel better about these things is if we had more motivation for 5450 in terms of what you can do with compilation, because he generally feels that 5450 probably isn't worth it if the only thing it's doing is reducing the number of underflow/overflow checks in the interpreter loop. So if we can provide some motivation that this makes things better in future worlds, that would probably alleviate his concerns. Something to consider. I mean, the biggest sell is for future JITing, but — right.
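The function-call stack discipline described above — a function of type `(inputs, outputs)` sees only its inputs and must leave exactly `outputs` items — can be sketched as a small runtime model. The names and the runtime check here are illustrative (in EIP-5450 the property would be enforced at validation time, not at runtime):

```python
# Illustrative model of EIP-4750/5450 function stack semantics: a
# function with type (inputs, outputs) operates on a slice of the
# caller's stack and must leave exactly `outputs` items on it, so the
# caller statically knows the stack height after the call.
from dataclasses import dataclass

@dataclass
class FuncType:
    inputs: int
    outputs: int

def callf(stack: list, ftype: FuncType, body) -> None:
    """Run `body` on the callee's stack slice and check its contract."""
    if len(stack) < ftype.inputs:
        raise RuntimeError("stack underflow at CALLF")
    frame = stack[len(stack) - ftype.inputs:]  # callee sees only its inputs
    del stack[len(stack) - ftype.inputs:]
    body(frame)
    if len(frame) != ftype.outputs:  # per EIP-5450, provable at deploy time
        raise RuntimeError("function violated its declared type")
    stack.extend(frame)  # caller's untouched items remain underneath

stack = [1, 2, 3]
# a hypothetical (2 inputs, 1 output) function: add the top two items
callf(stack, FuncType(2, 1), lambda f: f.append(f.pop() + f.pop()))
print(stack)  # [1, 5]
```

Note how the caller's remaining item (`1`) is never visible to the callee, which is the "subset of the stack space" point made above.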
That's — you know, Greg's not in favor of JITing because he feels there's a concern about logic bombs, but I think a single linear analysis will remove a lot of the logic bombs. There's always multi-tier interpretation too: you can run your compiled, JITted code and your interpreter at the same time. But some people, especially in the Move community, feel that the speed of the interpreter is secondary to data access, so we're also fighting upstream against that. Yeah. Is there anything other than future JITing and stack-check reduction that we could pitch for this? I don't think there's much more to find here. It simplifies the interpreter at runtime, but the overall performance gains are not really big. The different way I'm thinking about it is that we need to do code validation anyway, and this is just an additional pass doing more validation. It's a bit of a question of how far we can push it, and this seems to be the final step — I don't see anything beyond that. But yeah, I understand the concerns; maybe it's not worth it. But I think it's either we get it in here, or we will not be able to do it at all, because if it doesn't go in the first version, it's definitely not worth it to introduce later — you'd have contracts that have to obey it and contracts that don't, all within version one. And yeah, okay, I did push a newer version of it; it has a text spec, so you don't have to follow the Python code. If you read it, it may look like I'm advocating to replace the previous EIP with only this one, but that's not my intention — it's mostly about how to organize the specification. Yep. Okay guys, we're at the top of the hour.
Any final comments or questions, things we need to be thinking about before ACD this week — next week? Can we add more restrictions to stack validation? I can draft up a doc for it, but if we could require that any instruction following a terminating instruction must have been referenced by a prior jump — I could wordsmith it — we can do a single pass, and we can do dead-code checking too. We could do all this validation in one loop, pass through it once, and get all of this. Okay, yeah, if you want to write something for that, I think that would be great. I don't have enough context to know how likely something like that is, but it sounds useful. Okay, thanks a lot, everybody. This was helpful. Thanks a lot, Pavel and Alex, for coming up with the list of open issues. Yeah, still a little bit of work to do. I guess one other thing is that Mario Vega has been working on some more cross-client tests. I can post the link in the discord channel, so if you are a client developer implementing this, there will be some more tests coming out soon. Yeah. Anyway, looking forward to hearing the outcome of your conversation with Solidity, and yeah, let's keep chatting in the channel. Thank you. Bye. Have a great one. You too.
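The single-pass validation idea raised at the end — any instruction following a terminating instruction must be the target of a previously seen jump — can be sketched roughly as below. The opcode byte values and the forward-jump-only assumption are illustrative, not taken from any EIP:

```python
# Sketch of single-forward-pass dead-code detection, assuming static
# relative jumps (EIP-4200 style) and, for this simple check, that
# jumps recorded earlier in the pass cover all reachable targets.
# Opcode byte values here are assumptions for illustration.
RJUMP, RJUMPI, STOP, RETURN, ADD = 0xE0, 0xE1, 0x00, 0xF3, 0x01
TERMINATING = {STOP, RETURN}
IMMEDIATE_SIZE = {RJUMP: 2, RJUMPI: 2}

def validate_no_dead_code(code: bytes) -> None:
    targets, i = set(), 0
    while i < len(code):
        op = code[i]
        if op in (RJUMP, RJUMPI):
            rel = int.from_bytes(code[i + 1:i + 3], "big", signed=True)
            targets.add(i + 3 + rel)  # record the jump destination
        size = 1 + IMMEDIATE_SIZE.get(op, 0)
        # an instruction right after a terminator is reachable only if
        # some earlier jump targets it
        if op in TERMINATING and i + size < len(code) and i + size not in targets:
            raise ValueError(f"dead code at offset {i + size}")
        i += size

# RJUMP over a STOP to a final STOP: the byte after the first STOP is
# a recorded jump target, so this validates without raising.
validate_no_dead_code(bytes([RJUMP, 0x00, 0x01, STOP, STOP]))
# An ADD placed after a STOP with no jump to it would raise ValueError.
```

This is the appeal of the proposal: jump-target collection, terminator checks, and dead-code detection all happen in one loop over the code.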