Who here wants to talk about an EIP? Just raise your hand so we can roughly keep track of time, okay? Oh, if you're in the back, it's not gonna happen. You need to move forward. Yeah, yeah, okay. Sorry, I didn't count because these guys were about, again, okay, one, one and a half, two, three, four, five, six, seven, eight, nine. Okay, so we get about five minutes each. It's hard to cap that, but I can keep the timer and roughly keep track of time. Should we start? Yeah, okay. I'll start on this side. Hey guys, I'm Sarah. I'm a smart contract engineer at Uniswap. And I'm here with... I'm Mark, and I'm a protocol developer from Optimism. So we're gonna... We're in, we're done. Should I drop the mic? Cool. So we're gonna tag team this EIP today. First of all, I want to thank you for hosting the session. I think it's super important that when we're planning and building open-source software, we're really bringing a lot of diverse perspectives to the table. A lot of times client devs and core devs are focused on this really long-term vision for Ethereum, and unfortunately for application developers, that means some of the stuff we want to see does not get through. So hopefully I'm here today to convince you that this is worthwhile, and that this EIP really will complement the future vision for Ethereum. Let me hand this off to Mark for a little rundown. Okay, so we're here today to talk about EIP-1153, which is transient storage. This EIP adds two new opcodes to the EVM and this concept of transient storage, which is basically a key/value store per account.
Any time you TSTORE or TLOAD, which are analogous to SSTORE and SLOAD, instead of putting the value in state, you put it into this transient storage map. Each one is namespaced by account, and it persists for the duration of a single transaction's execution. Yeah. And actually, storage is sometimes used in a transient way in the EVM right now. You can see this with re-entrancy locks: we clear a slot back to its original value before the end of the transaction, and then we're allotted some amount of refunds. Achieving transience this way is actually quite messy. From the developer's point of view, it's really not straightforward how the accounting works, especially because refunds are now capped. So enshrining this directly in the EVM is a more direct way to get transience. Developers also end up doing a messy implementation where, a lot of times, instead of clearing to zero, slots are cleared to some nonzero dirty value, because you actually end up getting more refunds that way. So it's really a patchy way of achieving transience in the EVM. The way we look at this EIP is actually as a cleanup: it's relieving some tech debt, because this is a real use case and it is wanted. One really cool side effect is that it actually helps a lot with parallelization, because we want to scale Ethereum and increase the throughput of the system, and we can do that by parallelizing the EVM.
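As a rough sketch of the semantics described above, here is a toy model (not how any client actually implements it) of a per-account transient map that is rolled back on a frame revert, like regular storage, and discarded when the transaction ends, never touching disk:

```python
# Toy model of EIP-1153 transient storage semantics (illustrative only).
class TransientStorage:
    def __init__(self):
        self.store = {}    # (account, key) -> value
        self.journal = []  # undo log: (account, key, previous value)

    def tstore(self, account, key, value):
        # Journal the previous value so a reverting frame can undo the write.
        self.journal.append((account, key, self.store.get((account, key), 0)))
        self.store[(account, key)] = value

    def tload(self, account, key):
        # Unset slots read as zero, like regular storage.
        return self.store.get((account, key), 0)

    def checkpoint(self):
        return len(self.journal)

    def revert(self, checkpoint):
        # Undo every write made since the checkpoint (the frame reverted).
        while len(self.journal) > checkpoint:
            account, key, prev = self.journal.pop()
            self.store[(account, key)] = prev

    def end_transaction(self):
        # Everything is discarded at the end of the transaction;
        # nothing is ever persisted to state or disk.
        self.store.clear()
        self.journal.clear()
```

A re-entrancy lock in this model is just `tstore(contract, LOCK_SLOT, 1)` on entry and a `tload` check; no explicit clearing or refund accounting is needed, since the slot evaporates with the transaction.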
And right now, any time a lock is taken in a contract, it is writing to storage, and that prevents parallelizing that transaction with other transactions that are trying to interact with the same contract. So if we move all these locks into transient storage instead, we'll be able to parallelize a lot more transactions. And it's important to get this change in sooner rather than later, so we can start adopting this pattern now and have more of the network doing locks this way, so that we can have more parallel execution in the future. Another problem is that it's really difficult to know how much gas is going to be used when you're allocating memory, because of this crazy nonlinear cost function. Transient storage makes this much more straightforward, because every TSTORE uses the same amount of gas. So it's easier for a developer to know how much gas they're going to use when writing their smart contracts. Cool. And on the last note here, I just want to reemphasize that this is not necessarily an addition to the EVM. We're really thinking about it as a cleanup, a cleaner way of achieving this use case in the EVM. I also want to point out this is a really siloed change. It's two opcodes, it's easily testable, and it's already been implemented across four clients: Nethermind, Besu, EthereumJS, and also Geth. We've also written tests for this, and they all pass. So the final ask today is to get some more client dev eyes on these PRs and on the tests we've written, and to actually seriously open the conversation of CFI'ing EIP-1153 for Shanghai. Thank you. I guess, just to make sure we have time, are there a couple of client devs with strong opinions about this? We probably can't do everyone, but okay, Daniel first. I have a meta opinion on this.
I think it's awesome that people from outside the core team are not just writing the spec and the implementation, but also writing the tests. I mean, big round of applause. So, just FYI, I do like the EIP. However, I have a feeling that the rationalizations are not all equally valid. On parallel execution, I think that's a faraway dream. Essentially every time you execute a transaction, you will touch some state. So if you touch the same contract, you will touch some state anyway, so I don't think it helps there. However, personally, I think it might be nice. Well, let's just keep it at that. I don't really see the point, but I'm also not a contract developer. It seems to me like a nice to have, but nothing that's critical for us right now. So I can rebut that. One of the points that I do like about it is that with these mutexes, it touches the state, and even if it does nothing, just flips some bits back and forth, it still has to touch disk. And this would allow us to do these things without touching disk. So that's a real benefit for me. Have you talked to the Solidity team yet? There is an open draft PR with the assembly opcode in the Solidity repo right now. Yeah. So on parallelization, there has actually been quite a lot of work analyzing how much transactions can be parallelized. Even at Flashbots, we were running analyses of the clashes, the bundle clashes, the transaction clashes. So I think any improvement on this would be quite nice to see. There are some builders out there. I've actually seen modifications to Geth that were introducing parallelization, and they were working, and I was really surprised. Some developers were able to do that in their own implementations just for simulation efficiency. So there are already gains from parallelizing transactions, but those implementations are so complex and so specific to searching
that they are not making it into the general clients. So it would be great to see that. I think the pitch was really great. I was not considering this one, because I was always thinking that anything touching storage needs an awful amount of testing, with a huge risk of something escaping and contract breakages. So this is my biggest worry. We modified the cost of storage in the past, we modified the behavior of refunds, and this one feels a bit like that. Like when you say it's a known cost, but do we cap the storage? Okay, I see. Yeah, refunds are capped. No, the storage, the transient storage, because if it's not capped, again, the cost should be exponentially growing. It's the same. It doesn't grow exponentially, but we looked at it to see the upper bound, and it seemed like it was safe. But it's the same with memory, right? Yeah, so with memory we have the exponential cost, and here we have a linear one. I guess my question, from the client teams, very quickly: is there something you want to see here, like a big open question you have about this? I know you have the Solidity question, but just to wrap it up, anything you wish you would see from this that would help you better understand? Do you have tests or benchmarks for just writing as much to the transient storage as possible, writing small chunks into it, doing separate calls into different contracts that each write their own? I would really like to see this. Yeah, there's an open PR in the Ethereum tests repo. I think most of the cases you just mentioned are covered. I haven't looked in a bit, but yes, re-entrancy I think is covered. I'm happy to share that with the wider group, but there are extensive tests.
Something that I did want to do is use Felix's memsize library and basically look at the gas limit, fill in as many writes as would fit in the gas limit, and then observe the size in the implementation itself. I have that on my computer someplace, but it's not really pushed; that is something we could add to make it better. Let's wrap this up and go to the next one. Oh, one last question. Okay. I just had a comment on the Solidity part. Adding it to the assembly, well, there wasn't a PR, we just edited it, but it's easy to add the opcode, and you can use it in inline assembly. And if the EIP goes live, that's going to be added basically instantly. But adding it into the language is going to take quite a bit, because it's a lot of changes. I don't expect it to happen anytime soon. So if you guys want to help in implementing it, which may take like a thousand lines of code, that would be welcome. Okay. We need to wrap this up. I'm sorry, but Igor, you're up next. Thank you. My name is Igor Ilovoy. I'm a strategic developer, and I'm here to champion EIP-3978, titled gas refunds on reverts. The motivation for this EIP is that a revert of a transaction, or of any of its subcalls, drops any state modifications, but the user has to pay the full price for those state modifications even though they are not preserved forever. There are two problems with this: users overpay, and it limits some Solidity patterns where you make a call and then revert it. It makes reverting extremely expensive, to the point where sometimes, instead of reverting the call, you might just transfer ETH or tokens from one address to another to restore the storage, instead of paying the high gas price of reverting the call. And in my opinion it's an anti-pattern, because some side effects can be missed, and it may eventually result in critical hacks and loss of funds.
And this EIP suggests repricing the following opcodes, SSTORE, CREATE, SELFDESTRUCT, through the gas refund mechanism. So they're not going to be free. You're still going to pay some price for touching these addresses or storage variables, but it seems unfair to pay the full price. We saw pieces of this in an early version. So I just want to spread more awareness around this problem and hear what kind of comments people have. Thank you. Question. Maybe I missed something, but is the suggestion essentially to reprice some of these storage opcodes? Sorry, for clarity: if, let's say, an SSTORE operation (not SLOAD) inside a reverted call costs 22,000, and then the call reverts, then the cost of that operation should be repriced as just touching the slot, a different, lower number. You should not pay 22,000 for modifying a storage slot which is not modified at the end of the transaction. And the repricing should happen through the gas refund mechanism. Does that bring any clarity? So I have comments about the original idea. It mostly means you would need to remember all of the changes you've done, and then, at the single point when the revert happens, you would need to go through all of the changes and compute at least the refund you get from that. So you have a single operation that is actually unbounded in its internal complexity. That might be doable if you have a journal implementation of the state, because there you also un-journal the changes. But if someone has a different implementation of that, it might be really difficult to implement. And just one more comment: with reverts you can actually have nested reverts, and that can end up quite nasty, when you have subcalls that revert, then the outer call doesn't revert, then the outer call reverts after all. So it's not a very, very easy EIP to tackle.
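A minimal sketch of the bookkeeping the EIP would imply, with toy numbers rather than the EIP's actual schedule (the 22,100 and 2,100 figures are the approximate EIP-2929-era costs, used here only as assumptions): journal every SSTORE charge in a frame, and if the frame reverts, refund the difference between what was charged for the write and the cost of merely touching the slot, since the state change is thrown away.

```python
# Toy refund-on-revert accounting in the spirit of EIP-3978
# (illustrative numbers, not the EIP's exact gas schedule).
SSTORE_SET = 22_100   # approx. cold write of a fresh slot (assumption)
COLD_SLOAD = 2_100    # approx. cost of just touching a cold slot (assumption)

def refund_on_revert(journal):
    """journal: gas amounts charged for SSTOREs in the reverted frame.
    Returns the refund this scheme would grant: everything above the
    touch cost, since the write itself was discarded."""
    return sum(max(charged - COLD_SLOAD, 0) for charged in journal)

# A frame did two fresh writes, then reverted:
print(refund_on_revert([SSTORE_SET, SSTORE_SET]))  # 40000
```

The client-dev objection above maps directly onto this sketch: the `journal` list is unbounded, and with nested frames you need one such journal per call depth, merged or discarded depending on which frames ultimately revert.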
It might not be too hard, but I think reasoning about it is not necessarily easy. Yeah, I think that is assuming we have to keep the journal all the time and use it for all the other operations, and this idea fits into that perfectly. That might be considered, but if there is something where you would need to keep a different data structure just for this, I think it would be really difficult to have. And I know some of the implementations actually don't use a state journal, right? I mean, that's not something we mandate to be used. So, in other words, you would be forced to keep something like that anyway, although it's not required right now. You have to keep track of a whole lot of information for the refunds anyway, for the storage refunds. So it's the same stuff in all the implementations. Whether it's a journal or a cache, you have all the info you need. But I'm still traumatized by some of the code I had to write when we were doing the EIP-2200 repricing, I can't remember exactly the name of it, where it was overfitted to Geth, and our code analysis tool freaked out at the complexity of the code. So I'm kind of concerned about that, about the maintainability of some of these. I mean, I was implementing the algorithm as specified, and the analysis tool said that's too complex, you can't do it. Actually, it was the same with this EIP, transient storage. So, without reviving everything and reiterating everything there, I just want to add a few points. One, some higher-level background: I'm actually a contributor to the Huff language, which is a low-level assembly language, and formerly I worked with the Superfluid protocol. So I think both of these could benefit from transient storage.
So on the Huff side: higher-level languages like Solidity and Vyper have these re-entrancy locks, or modifiers to facilitate re-entrancy locks, that are very well built, and it's very easy to build this in a safe and secure way. But when it comes to assembly languages, this is actually a problem, because if you set a lock in storage and you don't explicitly free it by the end of the transaction, the lock is now stuck, which obviously you should catch in unit tests, right? But it's one more footgun on the stack. Sorry to interrupt you. I think you're kind of capturing the governance process right now, because we already talked about this EIP, and rehashing the discussion is, I think, not great. So I would rather move on with the next EIP, because we discussed this one already. Yeah, I think we should move on, because we have it done. I do think though it is valuable to know that low-level languages and projects are like this. If you can, write it on the Eth Magicians post or somewhere. We can also, if all of the EIPs are done, come back to it. Yeah, yeah, we're not going to be done early though. Sorry, so I was going to say, yes, we probably don't have time to rehash it. But it is valuable. I do think putting it in writing on the Eth Magicians post is probably good, to document it. Yeah, sorry. Also, the microphone, don't be scared, it's mainly for the recording. Okay, cool. Sorry. Do you have an EIP as well? Yes. I'm working on EIP-5027 and also another EIP, 5478. I'll discuss 5027 first. Basically, it aims to remove the contract size limit, right now 24 kilobytes, that was introduced in EIP-170. I think the motivation is pretty clear: a lot of people complain about the 24-kilobyte contract size limit, especially since contracts right now are significantly more complicated than when EIP-170 was introduced. The major concern behind EIP-170 is basically a DoS attack.
If a large contract, say 100 kilobytes, were deployed on Ethereum, and we just charged a flat fee like the current 2,600, it could be significantly undercharged. So right now the solution is basically to split the contract into multiple contracts, a kind of chain of contracts, so that you can retrieve all the code, but it makes the whole logic much more complicated. My current ideas for addressing the DoS concern in this EIP are two. One is basically to introduce, alongside the contract code hash, the contract size. So when we call a contract, we immediately know the size, since the size is a very small, 4-byte number. Then we are able to pre-charge according to the actual size of the contract. For example, we can pre-charge 2,600 per 24 kilobytes, so that we get roughly the same gas behavior as calling multiple contracts today, but within a single contract. So that's one idea. Another idea is that if the contract size is greater than 24 kilobytes, we can append the size to the first 24 kilobytes, and that tells you the actual size. The first access charges 2,600 for the first 24 kilobytes plus the size field, and once we know the contract size, we can further charge for the rest of the contract, put it in memory, and execute. So that's the basic idea from exploring the EVM code. I also have a simple basic implementation, together with some concerns about warm and cold storage. There's also a big concern about the P2P packet size, because right now we have a limit on the P2P packet size. But with, for example, a 50-million block gas limit divided by 200 gas per byte, the contract size is effectively limited to 250 kilobytes anyway.
So right now that still fits into a P2P packet. So those are a couple of concerns regarding whether we're able to remove this limit. Happy to hear thoughts. So there's also EIP-3860, not 3978, which is to limit and meter initcode, so we've got two conflicting EIPs. I'm not opposed to changing the limit, but completely unlimited I think has issues. I think it's Martin who has some really glorious code that shows performance problems with the current jump analysis on the legacy format. But another thing to consider is: what if we required this to be done in EOF, and that's what the new code limit applies to? Then we could reconsider how much data you could bring in that is not subject to this, because you're not supposed to do the jump analysis on the data section. The jump analysis is different in EOF too; there aren't the same risks. But then, as you mentioned, you get into the issue of how it impacts storage, bringing, you know, not kilobytes, megabytes of code out of the storage. And I think unlimited is going to be a hard sell; I think changing the limit is going to be an easier sell. I guess my question is similar to this: saying that 24K is too small, I can definitely accept that. I think that's a valid concern. My question is what is reasonable, because if you go toward saying it should be arbitrarily large, it will get so complicated that it definitely won't ship. But, for example, saying let's raise it from 24 to 64, that's a thing that can be analyzed. There we can put a number on it. It doesn't mean that it won't require additional changes, but it's relatively simple to understand the implications. The moment you introduce these dynamic changes, I think that's not really going to fly. It's going to be too complicated, in my opinion. Yeah.
So I did some experiments. On my testnet I deployed a lot of large contracts, like 200 kilobytes, and everything looks like it's working fine. It's been running for more than half a year. Yeah. Especially regarding the jump analysis: right now the gas metering charges 2,600 per 24 kilobytes, which I think is essentially equivalent to this contract calling another one, which calls another one, and so on. So basically I charge for that chain of calls, but in a single payment. So if we tie this to EOF, you can have larger contracts if you do it in an EOF container; we simultaneously solve the jump analysis problem and give people a reason to use EOF. So I think there are a lot of things we could combine to make this work. Just a minor point of trivia on why 24 kilobytes was chosen: it was a convenient representation. I think it was two to the fourteenth plus two to the thirteenth, and at the gas limit of the time it wouldn't have broken any contracts, because it was impossible to reach that size. So I think it's not possible to do this on mainnet right now, and I think it's also not possible to increase the code size. If you're a client dev and you're interested in my reasoning, come talk to me afterwards. What is possible is to do it in EOF, and I think that's what you should strive for. Do it in EOF, when we have the jump analysis stuff. One last comment, just a tiny, tiny bit of comment: you mentioned that you had a testnet set up and running, and it's been running perfectly. The catch with all these changes is that that's the average case. We know it runs perfectly because the code is written well. The problem is how attackable it is, and on your private testnet nobody is going to attack it. Thank you. Thank you. Did you have an EIP you want to talk about, self-destruct?
Who's the self-destruct guy? Okay. Hello everyone. I'm Proto. I work at OP Labs. We have this dream of Serenity. Serenity included proof of stake and sharding. We have achieved proof of stake; I want to continue with sharding. I'm very bought into Ethereum for this combination, not for one and not the other. And I think right now the process with EIPs has been kind of imbalanced since the merge, because we have an execution layer and a consensus layer. I think an EIP that does both of these, and actually touches the testing infrastructure, is the thing we need here. The merge was, I think, still shaped in a relative hurry, but with scaling we have actual incentives outside of just the client teams to improve this infrastructure, to improve analysis of Ethereum, to improve integration testing. And so we can get the best of both worlds: we can improve Ethereum, and we can improve the process that we have to accept EIPs, so that we can be happy to consider future EIPs with less concern, because we have the right testing in place. And then, outside of testing and the whole process, just the case for 4844 itself. If you're not familiar already with 4844, it increases the data for layer two. Layer two is meant to be an extension of Ethereum. You could think of the previous sharding dream of Ethereum as this execution sharding thing, where all the complexity lived on Ethereum itself. Layer two enables this to be more competitive and to be split from Ethereum, where we have the execution layer as layer two, and the layer one just focuses on securing data availability. And this is what this EIP focuses on and achieves, and through this means we can onboard a lot more Ethereum users onto layer two, and projects like Coinbase or other larger Ethereum users. We won't have to look at these "Ethereum killers," in quotation marks, where we can actually host these users at low cost.
Just because we talked about it a bunch before, do the client teams have anything else to add? Numbers, numbers, numbers. Okay, you heard it here. Yeah, I just want to make sure we can get to as many people in the next 20 minutes as possible. Dankrad, were you going to add something? I think I've already said the main things I wanted to say, but yeah. Okay, on self-destruct I think Mario has the pitch. Do you want to do the pitch, Mario? Or do you want to listen to more of the other pitches? We should remove self-destruct. Yes, that's the pitch. We agree. And we need to remove self-destruct for Verkle and history expiry and state expiry and all of these upcoming changes. So it needs to be done. The question is: do we do it now, or do we do it later? And I think it's a really small change, so we should do it now, for legacy and EOF. So, just in case somebody is not really on the same page about why we want to remove self-destruct: essentially every single opcode in the EVM has a cost that is linear, I mean, that tries to approximate the actual resources its execution consumes. And self-destruct is one of those opcodes where, by deleting the contract storage, a single opcode call can result in an arbitrarily large amount of work. And currently, the only reason it works is that self-destruct assumes clients represent the state in a specific way, the Merkle Patricia way. It also assumes that the state does not actually get deleted from disk; just a couple of branches of the Merkle Patricia trie get updated. But the moment you want to do something fancier, like what Verkle is doing or what Geth's new pruning is doing, essentially self-destruct becomes a completely unbounded opcode. And that prevents us from going forward with better implementations. Imagine self-destruct on USDC.
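The unbounded-work argument can be made concrete with a toy model: SELFDESTRUCT has a flat gas cost, but the work it implies is proportional to how many storage slots the contract accumulated, which nobody pays for at destruction time (the slot count and account name below are made up for illustration):

```python
# Toy illustration of why SELFDESTRUCT is hard to price: one flat-cost
# opcode implies deleting every storage slot the account ever wrote.
def selfdestruct(state, account):
    # state: dict mapping (account, slot) -> value
    doomed = [k for k in state if k[0] == account]
    for k in doomed:
        del state[k]           # O(number of slots) work for one opcode
    return len(doomed)

# A large token contract with 100,000 storage slots (hypothetical):
state = {("big_token", slot): 1 for slot in range(100_000)}
print(selfdestruct(state, "big_token"))  # 100000 slots deleted in one call
```

Under a Merkle Patricia trie this deletion is cheap (drop one branch and let the old data linger on disk), which is exactly the representation-specific assumption the speaker says Verkle and newer pruning schemes break.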
And to add to this: if you want to be stateless, it would be an unbounded number of state changes, and that completely kills statelessness. So yeah. I only have the comment that self-destruct has a quirk: you can destroy ether with it. And the question is whether we actually want to make the SENDALL work the same way, or whether we want to fix it and make it more intuitive. So I think it's a choice between more backwards compatibility and something where it's more obvious how it works. The way I implemented it now, it just doesn't destroy the ether; before, it did. So it's defined by the implementation, right? Yeah, the first implementation was. And for everyone in the room: we are not trying to remove self-destruct entirely, but we're changing it so that self-destruct will just send all of the ether that is in the contract, and the contract itself will stay. So it will keep the current behavior. The only thing that is kind of iffy about it is there's a pattern where you self-destruct and re-create the contract, but there has been analysis on that, and it doesn't break too much stuff. And we talked to the people we would break with it, and they seem to be okay with it. How did GasToken take it? Okay. Next to you, Proto. Sorry, I don't know your name. You'll have to take that. That's the mic. Ronan. I basically build on-chain games, and by that I mean applications or games that have a zero backend, where the user, the player, provides their own node through the wallet they choose. And in that context I am building an indexer that runs in the browser. So you can fetch the logs and it all works fine. But some applications or games rely on time information, and most developers assume, rightly, that the timestamp is available, so they don't add the timestamp to the events that they emit.
Unfortunately, the logs don't contain the timestamp information. So in my game, for example, there are 20,000 events, and I can fetch them very quickly, like in 5 seconds all the state is synced. But if I have to add the timestamp, then I need to make 20,000 more requests, and I can't even batch them, because EIP-1193, which is the only interface I have, cannot do that. So the proposal is very simple: just add the block timestamp to the log objects when you query the logs. And actually someone also said we could add the timestamp to the transaction receipt, etc., but basically, adding the timestamp information. So one of my questions here is that, long term, Ethereum intends to remove access to old chain segments, and ideally I would also completely remove access to logs that are older than, I don't know, a month, three months, something fairly high. So essentially, I mean, is that already a consensus? Because I feel we are now talking about another thing. I understand kind of what you mean, but it feels like... So what I was getting at is that it is kind of a consensus in Ethereum that past chain segments need to be pruned, otherwise the network implodes. And from that perspective, the number of logs you would have to access is more limited, so it might not be that big of an issue. I mean, you could always retrieve the timestamps if you have a bounded number of logs you can access. Don't you think that if we go to that stage, the wallet interface will also evolve with a different mechanism, so that the application can remain decentralized? Or are you giving up on complete decentralization from the application's point of view? I don't understand what you mean. Because, like most applications, as a developer we understand that we need to index the data, and that's why we use events. Okay, so I think events are completely being misused: they are used as a database instead of as events.
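As an aside on the 20,000-extra-requests problem described above: even without batching support in the provider, a client-side indexer can at least deduplicate the block lookups, so the extra requests scale with distinct blocks rather than with logs. A hedged sketch, where `get_block` stands in for whatever single-request call (like `eth_getBlockByNumber`) the wallet's EIP-1193 provider exposes:

```python
# Client-side workaround sketch: recover log timestamps by fetching each
# referenced block once, caching by block number. `get_block` is a
# stand-in for the provider call available to the application.
def attach_timestamps(logs, get_block):
    timestamps = {}  # blockNumber -> timestamp cache
    for log in logs:
        n = log["blockNumber"]
        if n not in timestamps:
            # One request per distinct block, not per log.
            timestamps[n] = get_block(n)["timestamp"]
        log["blockTimestamp"] = timestamps[n]
    return logs
```

For a game where thousands of events cluster into far fewer blocks this helps, but it doesn't remove the speaker's core complaint: the round trips exist at all only because the log object omits a field the node already has.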
And in my opinion, Ethereum should use them as events, and everybody else should adapt. But that's my two cents. What do you mean by using them as events? So by event I mean that the app emits something, and anybody interacting with the app can react to it within a specific time frame. But not looking up events that happened ten years ago, because that's not an event; that's kind of a database at that point. Yeah, I mean, I have other comments to make, because I think it's a bigger discussion, a lot bigger than what we have time for now, but a lot of applications rely on this. The reason why we use events as a database is, I mean, the typical example is NFTs. If you want to know the list of the tokens you own: contracts can support that, and a lot do, with an extra call to fetch all of them by providing a starting index and a length. But it adds gas cost to the implementation, and many actually decide not to do that and to use events instead. And I feel it's normal to do that, and I feel we need to have a discussion about how we deal with that, for applications that really want to remain decentralized. Have you looked at using the GraphQL APIs? Because I think you can go to a block from a log, and then you can get the timestamp, and you can do it in one step. GraphQL is not part of EIP-1193, which is the only thing I have access to as an application. There is a standard for the GraphQL, and it is in the execution APIs. Geth and Besu both implement it and can expose it. I think what is important here is that I don't have access to a node. The only interface we have as an application is EIP-1193. So I'm fine to actually have a further EIP to solve this, using GraphQL as the mechanism. Sorry to jump in here, but this is not really core, though it does touch on core. We're going to have a whole session about ERCs on Friday. It's an infrastructure issue.
I agree with you, there's a longer discussion about how applications use this stuff, but I think that's probably a really good one to discuss on Friday. Perfect. Thank you. That will be at 1pm, not sure where, but it's on the schedule somewhere. There's not a lot of stuff on the schedule Friday. Matt, did you have one? I'll try. I'll keep it short. I'll keep it short. So my name is Matt. I am an author of EIP-3074, AUTH and AUTHCALL. Where's the mic? I thought she was going to take it away from me. You're finished now. Yeah, so EIP-3074 adds two new opcodes, AUTH and AUTHCALL. The motivation of the EIP is to improve the user experience of Ethereum. I think if you're using dApps today, you're realizing you're signing tons and tons of things when you're interacting with a single dApp, and the flow that we have is not the best. And with AUTH and AUTHCALL we're providing a very generic framework for dApp developers to define, like, multi-transaction flows in a way that allows users to sign just a single message. And they don't need to use any kind of smart contract wallets. They get these types of benefits for free, without deploying any smart contract wallets. That's one reason. Another reason that I think EIP-3074 is very valuable is it lets all users of EOAs sign a message to create some sort of social recovery mechanism. And if they happen to lose their MetaMask or their Ledger or whatever wallet they're using, they can go and recover it with the people that they signed with. And the third thing that I think is really interesting with 3074, and is a testament to how powerful it is, is a proposal that Alex came up with maybe last year, about replacing the WETH ERC-20 token with a contract that uses EIP-3074 to natively move the Ether balances around whenever you're interacting with the ERC-20 token. So that's the pitch. There are some huge user experience risks with how it is currently done, and the revision took some of the guardrails off.
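The single-signature flow just pitched can be modeled with a toy simulation. Everything here is heavily simplified for illustration: the real AUTH opcode recovers the signer from an ECDSA signature over a commit hash, while this mock just compares names, and the balances are plain integers rather than Ether.

```python
class ToyEVM:
    """Toy model of the AUTH / AUTHCALL mechanism, not the real semantics."""

    def __init__(self):
        self.balances = {"alice": 100, "dex": 0}
        self.authorized = None  # set by AUTH for the rest of the frame

    def auth(self, signer, signed_by):
        # Real AUTH recovers the signer from an ECDSA signature over a
        # commit hash; here we just check the claimed signer matches.
        if signer != signed_by:
            raise ValueError("bad signature")
        self.authorized = signer

    def authcall(self, to, value):
        # AUTHCALL executes with the authorized EOA as caller, so one
        # signature can back several calls in a single transaction.
        if self.authorized is None:
            raise ValueError("AUTH not set")
        self.balances[self.authorized] -= value
        self.balances[to] += value

evm = ToyEVM()
evm.auth("alice", signed_by="alice")  # the user signed once, off-chain
evm.authcall("dex", 30)               # the invoker batches multiple actions...
evm.authcall("dex", 20)               # ...without any further signatures
```

The point of the sketch is the shape of the flow: one signature, then an invoker contract issuing many calls on the user's behalf — which is also exactly where the safety concerns about invoker trust come from.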
So we don't have enough time to go into some of those issues with safety, and those are, I think, my number one concern on that right now. But if we need metatransactions, let's make a metatransaction format. And some of the other ones, you know, account abstraction... I think account abstraction can solve that, and that's something we know we want to do. Yeah, okay, so yeah, Marius wants to chime in, they're on the same team, we have 10 minutes left, sorry, no. Well, I will give a shout-out: there is an account abstraction panel, I think, Matt, you're on it later this week. So if you want to go ahead and do a heated debate about the various flavors of account abstraction, and fake account abstraction in 3074. Well, I have a whole hour to debate it. So that's... okay, cool. Can you raise your hand if you still had an EIP? Alex, you don't have one? Okay, just... Oh no, sorry, I meant Alex B., in front of axic. No? Okay, no, okay, cool. How about... wait, wait, wait. Did you have it? No, no. How about, did you have one? Okay, Dano, do you have a full EIP? I have three of them, but I only want a quick yay and nay. Okay, I also have a quick one for a yay and nay. Okay, Dano first, and then we finish with Matt. So I just want to get a temperature check on the three other EOF ones: EOF functions, static relative jumps, and stack validation. Good idea, bad idea, too complex? That's really all I'm looking for. Okay, so the first one is EOF functions, where we have CALLF and we split it up in different functions and have multiple code segments. Good idea, bad idea, too complex? Okay. Static relative jumps, where it's an immediate operation, you say jump ahead 10. Yes, okay. And the stack validation, which needs the functions, where you can say that this function is going to take, say, five stack items, and that it's not going to overflow, so you could remove the overflow check. My thought on that is it's a bit complex to get into Shanghai, so I want to see if I'm the only one of that opinion.
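To illustrate why static relative jumps enable deploy-time validation, here is a toy validator over a made-up two-byte instruction encoding — the opcode names, encoding, and validator are inventions of this sketch, not the EOF spec. Because the RJUMP target is an immediate operand, it can be checked once when the code is deployed, and no runtime JUMPDEST analysis is needed.

```python
RJUMP, PUSH1, STOP = "RJUMP", "PUSH1", "STOP"

def instruction_starts(code):
    """Return the set of offsets where an instruction begins."""
    starts, i = set(), 0
    while i < len(code):
        starts.add(i)
        op = code[i]
        i += 2 if op in (RJUMP, PUSH1) else 1  # RJUMP/PUSH1 carry one immediate
    return starts

def validate(code):
    """Check, once at deploy time, that every RJUMP immediate lands on an
    instruction boundary -- no runtime jump-target analysis needed."""
    starts = instruction_starts(code)
    i = 0
    while i < len(code):
        op = code[i]
        if op == RJUMP:
            target = i + 2 + code[i + 1]  # offset relative to next instruction
            if target not in starts:
                return False
        i += 2 if op in (RJUMP, PUSH1) else 1
    return True

ok = validate([RJUMP, 2, PUSH1, 7, STOP])   # lands on STOP: valid
bad = validate([RJUMP, 1, PUSH1, 7, STOP])  # lands inside PUSH1's immediate: invalid
```

Dynamic JUMP can't be validated this way, because the target only exists on the stack at runtime — which is the comment made below about needing to remove JUMP and JUMPI to drop the analysis entirely.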
I wasn't saying that the other stuff should go into Shanghai, I think it might be a good idea in the future. Good idea, just Cancun or later. So, cool, so no one actually proposed EOF, so I guess... EOF is approved for inclusion, but I just wanted a temperature check, so let's not discuss it. These aren't in yet. Okay, can I... like, one comment is, like, if you combine functions and relative jumps, you can get rid of all of the JUMPDEST analysis entirely, because they kind of replace that. Unless we get rid of the jump opcodes too. The jump what? JUMP and JUMPI, we would have to get rid of those to get rid of JUMPDEST analysis. Yeah, like, remove all of these. We can do that with these two features, but, like, we didn't champion it, because I think that's not on us to, like, actually say it's great, because we need input from people that say they want to use that. But thanks for mentioning it, and yeah, we'll have an EVM panel on Friday as well. I do have another EIP. I didn't want to, like, talk about EOF because we spent like two hours on it at the protocol workshop. But this is really cool. It's called MCOPY, for memory copying. It's not merged yet because of the EIP process, but I'm going to summarize it. So basically, for copying memory right now, there are two ways. One way is to do it with a loop, and MLOAD and MSTORE, and that was recognized early on, and the identity precompile was introduced, I think in the first few months after the launch of Ethereum. That was used by the Solidity compiler. But then with the Shanghai attacks it was repriced, and the call was becoming too expensive, so nobody used the identity precompile anymore. It is just there. I think Vyper uses it now, but Solidity still uses the loop. So then the MCOPY opcode fixes all of this, and I'm just trying to read the numbers. So yeah, it takes like 800 gas to copy 256 bytes. With the Shanghai cost... with the recent cost it's 160. We got it down towards 100, and with the EIP it would be 25 gas.
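The rough gas arithmetic behind those numbers can be sketched as follows. The identity precompile charges 15 plus 3 per 32-byte word, a warm call after EIP-2929 costs 100, and the MCOPY schedule used here (3 base plus 3 per word) is an assumption of this sketch — treat all constants as illustrative rather than quoted from the final EIP.

```python
def words(n_bytes):
    """Number of 32-byte EVM words needed to cover n_bytes."""
    return (n_bytes + 31) // 32

def gas_identity_precompile(n_bytes, call_cost=100):
    # Warm CALL (EIP-2929) + identity precompile base + per-word cost.
    return call_cost + 15 + 3 * words(n_bytes)

def gas_mcopy(n_bytes):
    # Assumed MCOPY schedule for this sketch: 3 base + 3 per word.
    return 3 + 3 * words(n_bytes)

print(gas_identity_precompile(256))  # 139
print(gas_mcopy(256))                # 27
```

Even ignoring the loop variant's extra pointer arithmetic and jump overhead, a dedicated opcode avoids the call cost entirely, which is where most of the saving comes from.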
We did some analysis; I think like 25% of all the memory copying would be improved by MCOPY. And there's actually one feature in the Solidity compiler which is kind of... I mean, it's not blocked by this, but it's not implemented: slicing of memory arrays. And in a lot of cases people are, like, forcefully using calldata stuff, because that can be sliced in the compiler. So having a cheap memory copy would also improve Solidity as a language. That's it. Yes, the reason I said it's half an EIP is because it's almost certainly not for Shanghai, but I don't think it's been talked about at all, and it's nice to, like, get people's brains brewing on it. So first of all, this one is based on ERC-4337, which is smart contract account abstraction. That's, like, a way of getting account abstraction without requiring a hard fork, to avoid all this, like, EIP process mess. And we found that people quite like this approach, because they can already start using their smart contract wallets, but we found that users actually still complain quite a bit about smart contract wallets, because, like, they already have their money on EOAs, and switching all their balances and all their NFTs and everything is just too much for them, usually. So we were thinking quite a bit about, okay, like, how can we develop account abstraction more? How can we perhaps enshrine it a bit? And so some ideas are just floating around; again, there's no EIP set and there's no, like, specific roadmap set. But an example is making a new transaction type which converts an EOA to a smart contract that you specify in the data field. And this basically should be quite a simple new transaction type; there's not really that much complexity as far as I can tell, but I'd love to hear some comments. A more advanced one, and again just ideation, is perhaps making an EIP which converts all current EOA accounts into a sort of default proxy smart contract wallet which uses the current ECDSA signature scheme that EOAs have already used.
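As a purely hypothetical illustration of the floated "convert an EOA to a smart contract" transaction type — there is no spec, and every field name, type name, and helper below is invented — the state transition could look something like this: the sender's existing account adopts code in place, so balances and NFTs never have to move.

```python
from dataclasses import dataclass

@dataclass
class ConvertToContractTx:
    """Hypothetical transaction type; all fields are illustrative."""
    chain_id: int
    nonce: int
    max_fee_per_gas: int
    code: bytes        # contract code the EOA's account will adopt
    v: int             # ECDSA signature proving the EOA owner consents
    r: int
    s: int

def apply(state, sender, tx):
    """Toy state transition: the sender's account gains code in place,
    keeping its balance, so no asset migration is needed."""
    account = state[sender]
    assert account["code"] == b"", "only codeless EOAs can convert"
    account["code"] = tx.code
    account["nonce"] += 1
    return state

state = {"0xabc": {"balance": 42, "nonce": 0, "code": b""}}
tx = ConvertToContractTx(1, 0, 10**9, b"\x60\x00", 27, 1, 1)
apply(state, "0xabc", tx)
```

Note this is the opt-in, per-account variant; the "convert every EOA at once" variant is the one objected to below as a huge linear state migration.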
And another sort of more advanced one is... so this ERC-4337, it works with a so-called entry point smart contract, through which you route all your user operations to interact with your wallet. And this costs a lot of gas, because you do all the signature verification, all the stuff, on chain, using, well, EVM opcodes. So instead you could make this part of the protocol, so that it could be validated outside the EVM, and that would save a lot of gas. Any last comments? Oh yeah, just real quick, this isn't... it hasn't been suggested for Shanghai, but prior to the merge it had a bit of support: time-aware base fee calculation. It would essentially just make 1559 proof-of-stake friendly. 1559 is aware of blocks, it's not aware of slots. You could have, say, like, an empty block with proof of work, but now you can have missed block proposals, so here we go. I think with the amount of missed slots we see right now, I don't think it makes sense to do it now. It's like, I don't know, we're seeing like 0.01% of missed slots or something like this. It would be a negligible improvement, so, yeah, it's a nice cleanup, but not necessary to have. So with the wallets, I think there were three proposals. I just wanted to mention there was one proposal where you said that we could just auto-convert everything. No, that's not going to happen. Essentially, that's already a huge issue for Verkle trees, where you just want to do an upgrade where the state just gets flipped over, and it's a huge linear migration, and we have absolutely no idea how we're going to do it for Verkle trees, so let's not do it twice. But what if it's not actually touching every account, and rather, if there's no code and there's a message signed from that account, it's treated as a default account, and so it falls back to some default code? But that would actually break the new semantics that we introduced with... right, you know, which, I mean, is that no EOA can also have contract code.
But there's no code in the account, and so it's already empty, and so it would basically be like me sending a transaction, and rather than executing the same way that we do today, it would realize that the recovered address has no code in it, and so then it would just start executing in an EVM frame with some default account code that implements the same concept of the ECDSA account. Okay, I couldn't follow. So you wouldn't set the contract code? You would not set the contract code. It's a fallback. Okay, I think we're going to wrap up. It's past six, so, first of all, thanks everyone for coming. There are more places we can discuss all this this week: so, as mentioned, there's an EVM panel where we can get into EOF, 1153, all that good stuff. There's an account abstraction panel, and we just had some fresh new account abstraction content as well. And then finally there's an ERC, kind of Eth Magicians, session as well during the week. I don't know when they are, sorry, they're all on the agenda. The ERC session is on Friday in workshop room four at 1 p.m. You heard it? Oh, Proto. And so on Friday there's a session about danksharding and proto-danksharding. If you're interested to help us with the EIP, 4844, just please contact us, we are hosting co-work sessions. Cool, yeah, thank you so much everyone for coming.