Yeah, I would like to talk about a proposal that is actually at least one year old. The first time I heard about it was at Devcon 4, and it's about replacing, well, not replacing actually, including more precompiles written in Wasm. So what is a precompile? Well, it's some kind of specific contract that is present in every client. If you run your regular contract and you want to perform a task that is both repetitive and needs to run faster than the usual execution of an interpreter, you jump from your contract into this specific contract with native code, and then you return and continue as if you had never left the EVM environment. There are four main goals to this proposal. Three of them are Eth1 related, and the last one is Eth2 related. The first one is to speed up feature adoption. When people want to add more features, you run into the problem of every open-source project: everybody wants to offload maintenance of their code onto the community. It happens in the Linux kernel, it happens everywhere. So you want to do that, but of course there's a lot of pushback, because the client maintainers are the ones who will have to do the work if something goes wrong. They are the ones staying up late to fix the problems. So there's some pushback, and that's pretty healthy actually. What you want to do is work around this phenomenon by making sure that when you create a new feature, you provide one template that you just drop into your client, so you don't have to rewrite it in every single language. That's what the "write once, run everywhere" joke was about; you've heard that one before, I'm sure. It doesn't work as well as in the marketing campaign, but it's still pretty useful if you can get closer to it. The goal is to reduce the work for client maintainers, because if there's a bug, it's in that one program: you can fix it once and for all.
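The jump-into-native-code idea described above can be sketched as follows. This is a toy illustration, not any client's actual code; the `PRECOMPILES` table and `call` function are invented for the example, although address 2 really is the SHA-256 precompile in Ethereum.

```python
# Toy sketch of precompile dispatch: addresses below some threshold map to
# native functions instead of EVM bytecode. Structure is illustrative only.
from hashlib import sha256

# Native implementations registered at reserved addresses.
# (Address 2 is indeed SHA-256 in Ethereum; the dispatch logic is simplified.)
PRECOMPILES = {
    2: lambda data: sha256(data).digest(),
}

def call(address: int, data: bytes) -> bytes:
    if address in PRECOMPILES:
        # Jump into native code, then return as if we never left the EVM.
        return PRECOMPILES[address](data)
    # Otherwise, fall back to interpreting the contract's bytecode.
    raise NotImplementedError("regular EVM execution not modeled here")
```

The point is that the calling contract cannot tell the difference: it issues a call to a fixed address and gets a result back, whether the work was done by interpreted bytecode or by native code.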
So everybody's participating; it's not like you have to fix it in every client separately. Hopefully it's going to help with that. State growth is the biggest problem in Ethereum at the moment. In the last test I made, I found that the state was 15 gigabytes, and that's not including all the overhead to access the data; it's just the pure state. You have big winners, like CryptoKitties, that are huge. If one contract is over 800 megabytes, you can't store that on a mobile phone, and that means half the people on this planet are already cut off from running a node. That's a problem if you want this community and this ecosystem to be accessible to everybody, and you want everybody to be able to validate. So the end goal of this approach is to help move as much state as possible out of the chain itself: keep just enough that you preserve the guarantees of the blockchain, the safety aspects, without the big problem of having to sync a huge amount of data. Then there's the question of gas cost. If you have a very complex operation... actually, I'm suddenly not really happy that I chose SSTORE as the example, because SSTORE is not the biggest problem, but: you have one instruction in the EVM that looks simple when you compile the program, but behind it there is a lot of execution. The gas cost that has been chosen for SSTORE is somewhat arbitrary. The idea is that if you manage to break it down into instructions that are lower level, a closer match to what is actually executed, you can meter this new piece of code, and hopefully the cost will be more accurate. And the last goal is to make Eth1 look a bit more like Eth2. Eth2 is built around execution environments, and it's stateless: you only store the root of what's called an execution environment on the chain, and the rest is all done off-chain.
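The metering idea, charging for the lower-level steps a coarse opcode actually performs rather than one arbitrary flat fee, could be sketched like this. All step names and costs here are invented for the illustration; they are not real Ethereum gas numbers.

```python
# Illustrative only: metering by the low-level steps behind one coarse
# opcode. These costs are made up for the example, not real gas values.
LOW_LEVEL_COSTS = {
    "hash_node": 30,   # hashing a trie node
    "db_read": 50,     # reading a node from the database
    "db_write": 100,   # writing a node back
}

def meter(trace):
    """Charge gas per low-level step actually executed."""
    return sum(LOW_LEVEL_COSTS[op] for op in trace)

# A shallow store that touches few nodes costs less than a deep one:
shallow = ["db_read", "hash_node", "db_write"]
deep = ["db_read"] * 8 + ["hash_node"] * 8 + ["db_write"] * 8
assert meter(shallow) < meter(deep)
```

A flat per-opcode price has to charge both cases the same; metering the underlying steps lets the cost track the work actually done.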
There's a plan to take Eth1 and bring it into one of the shards in Eth2, and the less data you have at the point when you fork, the better: you want to manage as little data as possible, so this transition would be much easier if the size of the state were reduced. The requirement to perform this switch is to have a Wasm interpreter in each client. That's actually not that hard; Wasm has been pretty trendy recently, so everybody has written an interpreter for their own language. There are actually multiple in Go, there's at least one in Rust, and there's one in C++ of course. So you could just include one in your client, and then you're done. Almost. There are of course a lot of details, the devil in the details, so there are a couple of challenges, or at least questions, that I want to address. One of them is: is it going to be slow if you use Wasm for precompiles? To be clear, we don't want to replace the current precompiles, which are written in the native language of each client, with Wasm; that wouldn't make any sense. And there's nothing preventing you from rewriting the code: once the precompile has been accepted, you can perfectly well rewrite it in your own client's language. The advantage is that you can include the Wasm version immediately, so it will run, and then take your time to rewrite it and swap it out. You also get a reference to compare against, so you can test your implementation against the Wasm precompile and see whether everything you did is correct. As it turns out, interpreter performance is not so bad; there was a talk about this yesterday by the Ewasm team, so if you're interested, you should check the video. And there are plenty of tricks that we can still use to improve performance.
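The compare-against-the-reference workflow mentioned above might look like this in outline. Both functions here are stand-ins: a real setup would run the canonical Wasm precompile in an interpreter on one side and the client's native rewrite on the other, and feed both the same random inputs.

```python
# Sketch of differential testing a native rewrite against the Wasm
# reference. Both 'implementations' below are placeholder stand-ins.
import os

def wasm_reference(data: bytes) -> bytes:
    # Stand-in for running the canonical Wasm precompile in an interpreter.
    return bytes(reversed(data))

def native_rewrite(data: bytes) -> bytes:
    # Stand-in for the client's faster native reimplementation.
    return data[::-1]

def differential_test(rounds: int = 100) -> bool:
    """Feed both versions identical random inputs; any mismatch is a bug."""
    for _ in range(rounds):
        blob = os.urandom(32)
        if wasm_reference(blob) != native_rewrite(blob):
            return False
    return True
```

Because the Wasm version is the consensus-defining artifact, any divergence found this way points at a bug in the rewrite rather than ambiguity in a prose specification.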
You can compile the Wasm, although that's actually not going to provide that much of an improvement after all. You can also use what are called host functions: you have your Wasm interpreter, but you call into a native implementation. One of the things slowing down execution is how U256 values are implemented, so you could provide a set of host functions for them that are implemented natively. And there's a proposal by Paul from the Ewasm team to provide some kind of assembly annotation: you give the compiler hints about how it should compile your code so that it produces an output that makes sense in this context. The other risk is the precompile explosion. As I was saying at the beginning, you have a lot of people who just want to hand their code over to the community to have the community maintain it. It happens everywhere; it happens in the Linux kernel. Thankfully there's a lot of healthy pushback from the client maintainers, because we want to make sure that each of these things is actually useful. So I have proposed a couple of acceptance criteria that are really just common sense. If you want a precompile to be added to the list, you have to provide tests, obviously. You have to demonstrate a willingness to keep maintaining the precompile once it's integrated, because, once again, we don't want to be the only ones staying up late to fix problems. The last two criteria are a bit more imprecise. The focus should be on state management: we don't want a precompile for everything, so priority goes to precompiles that help take as much data off-chain as possible. And of course it's not just about getting your own code in cheaply. You have to understand that when a precompile runs, the gas cost is usually lower than if you were actually running the EVM version of it.
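As a rough sketch of the host-function idea: the interpreter exposes natively implemented 256-bit arithmetic that the Wasm module imports, instead of the module emulating U256 out of 32- and 64-bit Wasm instructions. The registration mechanism and names below are invented for the example.

```python
# Hedged sketch: natively implemented U256 arithmetic exposed to a Wasm
# module as imported host functions. API names are illustrative only.
U256_MASK = (1 << 256) - 1

HOST_FUNCTIONS = {
    "u256_add": lambda a, b: (a + b) & U256_MASK,  # wraps mod 2**256
    "u256_mul": lambda a, b: (a * b) & U256_MASK,
}

def invoke_host(name: str, *args: int) -> int:
    """What the interpreter does when the module calls an imported function."""
    return HOST_FUNCTIONS[name](*args)

# Overflow semantics match the EVM: 2**256 - 1 plus 1 wraps to zero.
assert invoke_host("u256_add", U256_MASK, 1) == 0
```

One native call replaces the long sequence of limb-by-limb operations an interpreter would otherwise execute, which is where the speedup comes from.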
So effectively the network is subsidizing the precompile. It's not about letting everybody run a cheaper version of their own code: there has to be a proven case that the precompile serves more than one purpose, more than one application. Okay, I'm running a bit late, so I'm going to skip over this one. But basically: why not EVM? Well, the tooling, and Wasm is closer to the metal. That's the short answer. So I wanted to cover a couple of examples that are in the works: how would you implement such a use case, and what parts would you need? In Eth2 you have what's called a relayer. Instead of sending your transaction directly to a validator, the equivalent of a miner, you send it to some kind of full node, and the full node gathers all the transactions, packs them into, let's say, a mega-transaction, or at least summarizes them, and then sends that to the validator, or the miner in that diagram, and it gets included in the block. So what you would do is create a contract whose state is only three things. First, the root of the account tree. Second, the total token supply; that one is not strictly necessary, but it's better, because when you go to Etherscan it tells you the total supply of a token, and you don't want to have to unpack a tree, or necessarily run a full node, for each token, so just having access to the supply is useful. And third, a nonce, to make sure that you can't replay transactions and things like that. And you use the precompile to check the tree. So let me show you. Right, I have a first slide to compare the difference, and I realize there's a big mistake in that slide: here it should be 64, not 32. Normally, if you write your ERC-20 token, you've got a map between addresses, which are 32 bytes here, and values, which are also U256.
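The three-field on-chain footprint just described could be sketched like this. Field names are illustrative, and a real design might handle nonces per account rather than as a single counter.

```python
# Sketch of the stateless token's on-chain state: a constant 96 bytes,
# regardless of how many holders the token has. Names are illustrative.
from dataclasses import dataclass

@dataclass
class StatelessTokenState:
    account_root: bytes   # 32 bytes: root of the off-chain account tree
    total_supply: int     # stored as a 32-byte U256 on chain
    nonce: int            # stored as a 32-byte U256; prevents replay

    def byte_size(self) -> int:
        # Three 32-byte slots, independent of the number of holders.
        return 32 * 3

state = StatelessTokenState(account_root=b"\x00" * 32,
                            total_supply=10**6, nonce=0)
assert state.byte_size() == 96
```

Compare this with a conventional ERC-20, whose storage grows by one key/value pair (64 bytes on the slide) for every holder.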
And I also forgot the nonce here. If you compare that to the stateless version, you just have the tree root, the total supply, and the nonce, so that's 96 bytes against 64 bytes times the number of entries. Then, for the transaction format, you give the proofs of the tree root before and after execution, you give the addresses of the sender and the receiver, you give how much token you want to transfer, and of course you sign the whole thing. The overall algorithm in the contract is: you unpack the transaction (I hope you can read it at the back); you check the signature of the transaction; you validate that the root you were given is the correct one; you validate that the new root is also correct; you check the balance; and you update the new root in the contract state. The three blocks in green can be performed by precompiles; the signature check is definitely one that is already present. If you factor this out, you can support a whole host of ERC-20 tokens, so there's clearly room here for something repetitive. The advantages are that you need less storage space and you don't need to rewrite the ERC-20 code every time. There's also a small optimization you can have: if there's a difference between what you send and what you receive, that's taken as a reward for the relayer, and this is how you would incentivize relayers to do this work for you. The problem, of course, is that contracts currently hold a lot of state, and it needs to be clarified how you extract that state and, more importantly, how you extract the funds. If you look at CryptoKitties, you could always burn your kitties somehow and migrate to a different model, but the money that is stored in a contract, that is a problem that is still open and needs to be solved. I don't have a better name for the next idea so far, so I call it Eth-ception.
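A minimal sketch of the contract algorithm above, with the tree commitment reduced to a toy hash of the whole account map. A real contract would verify Merkle proofs via precompiles rather than see the full tree, and the signature check is elided here; everything is illustrative pseudologic.

```python
# Toy version of the stateless-token transfer logic described above.
import hashlib

def tree_root(balances: dict) -> bytes:
    """Toy commitment to the account tree (a real design uses Merkle proofs)."""
    encoded = repr(sorted(balances.items())).encode()
    return hashlib.sha256(encoded).digest()

def process_transfer(state_root: bytes, tx: tuple, pre_tree: dict) -> bytes:
    """tx = (sender, receiver, amount). Signature check elided; it would be
    one of the precompile calls mentioned in the talk."""
    # 1. Validate that the claimed pre-state root matches on-chain state.
    assert tree_root(pre_tree) == state_root, "stale or wrong pre-state"
    sender, receiver, amount = tx
    # 2. Check the sender's balance.
    assert pre_tree.get(sender, 0) >= amount, "insufficient balance"
    # 3. Apply the transfer and compute the new root, which the contract
    #    stores in place of the old one.
    post = dict(pre_tree)
    post[sender] -= amount
    post[receiver] = post.get(receiver, 0) + amount
    return tree_root(post)

tree = {"alice": 100, "bob": 5}
new_root = process_transfer(tree_root(tree), ("alice", "bob", 40), tree)
assert new_root == tree_root({"alice": 60, "bob": 45})
```

The signature check and the two root validations are exactly the green blocks on the slide: repetitive, token-independent work that a shared precompile can absorb.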
You can enhance the model I presented before with the possibility of adding storage and contracts in your off-chain tree. The thing is, it's going to be a bit more expensive and complex to do the validation, but it also gives you the ability to change the storage model. Right now the state is stored in a big tree, a Merkle Patricia tree. What if you want to use linear storage instead? Maybe it makes sense, maybe it's going to make things faster. So that's a nice way to upgrade the storage model without changing much on the main chain. But, once again, validation is going to be a bit more costly and complicated, so it's really only interesting if execution rarely happens. That slide was an illustration of what I was talking about: you have your off-chain storage, and you keep only the parts of the tree you're really interested in, so you can prove that you knew the pre-root. Then you have your own storage, where you keep just the parts that you need. And if you want, you can do the same thing but change the storage model, so you just get a simple list of key-value pairs. Out of this, I had a conversation with Yuichi Hinari, who is unfortunately not in attendance, but kudos to him: he came up with an idea to rewrite Raiden in a way where you just create reimbursement contracts. And because they are contracts, you get state channel support, so it's not just tokens but also more complex execution, out of the box. When you want to open a channel, you create those two contracts in a tree that is kept off-chain, and all you need to do is sign a proof that the other party's update to the tree is correct. In the end, when you want to close the channel, you just send the proof with the final update, and the execution happens only once. So you basically have Raiden without actually using much storage.
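The "prove that you knew the pre-root" step relies on membership proofs against a root. Here is a toy binary Merkle proof checker, just to show the mechanism; it is not the Merkle Patricia trie Ethereum actually uses.

```python
# Toy binary Merkle membership proof: verify one leaf against a root
# without holding the full tree. Illustrative, not Ethereum's trie.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_proof(leaf: bytes, proof, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs, walking
    from the leaf up to the root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Build a tiny two-leaf tree and prove membership of the right leaf.
left, right = h(b"balance:alice=60"), h(b"balance:bob=45")
root = h(left + right)
assert verify_proof(b"balance:bob=45", [(left, True)], root)
```

The proof size grows with the tree depth, not the tree size, which is what makes it feasible to keep only the 32-byte root on chain and the rest of the state off-chain.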
I'm going to finish with the mandatory testnet announcement, of course: I would like to try to run a testnet before the end of the year with an initial list of precompiles. By the start of next year, I would like to have a fully specified stateless version of ERC-20. And hopefully by March next year there will be some proper discussions about this on the All Core Devs calls; that would be nice. So that was pretty much it, in a nutshell. If you're interested in this topic, if you'd like to help, if you want to discuss it, or if you just want to chat, contact me; I'm available here, and you can find my handle on GitHub. Thank you for your time.