Hello, hello. Welcome to day two of the 1.x meetings. What we're going to do now is that we have about four different presentations. First will be Fredrik, then Casey, then Remco. After that I will present, very shortly, the things that I've changed since yesterday, and then we're going to have a breakout session. Further on we'll have individual time and more presentations.

Good morning. So we had some good discussion on chain pruning in a relatively small group yesterday. I can't say that we got to any sort of implementation details or figured out exactly what to do or what an EIP should look like, but we had a lot of discussion around what we need more feedback on and what we need to figure out before we can write an EIP, and a lot of this comes down to community feedback and actually figuring out what users want and need.

So I think the first point that we need to establish is: what do people actually want? When we're talking about a pruned node, we can talk about it in different contexts, but essentially imagine a node that just has a full copy of the state and the headers and nothing else. Who wants this? What is this for? You can imagine a use case like mine: someone who wants to use dapps, and use dapps regularly. They don't necessarily want to use a light client and pay the network cost of going out over the network for every request to the dapp. So they want the full copy of the state, but they don't have 150 gigs free on their laptop. It's something in between a light client and a full node. I can imagine that user, and I would be one of them myself, but how many of those users exist in the ecosystem? It's really hard to tell. We've already pushed people to accept centralization and just use Infura, everything is fine. Can we really take that step back and bring those users back to caring about data validity and things like that? That's something I don't have the answer to, and I think we need more people saying that they actually want this before we just dive head first into doing stuff.

The second question is: why run this over a light client? Do people care about those network costs? Maybe instead of focusing on building a pruned node, we should focus on light client incentivization, actually making the light client experience a first-class experience that works really well. If we solve the incentivization problem, that's also useful beyond this, something that can further both the current ecosystem and the future one. So that's another question: where is the effort best spent, given that we have limited resources? And the final question, which is almost a question for Hudson, who's not here right now, is: how do we find out the answers to these questions? I don't really know. We need to engage the community to find out what they actually want, what they actually need, what they would use.

But if we assume that this is something desired, how do we build an MVP to prove these assumptions? I think the first thing, which is really easily implemented like I said yesterday, something we could have next week if we went for it, is a feature flag on the CLI to just say "prune everything beyond N blocks" — something like the sketch below. And I think there's a discussion here around whether this behavior is default versus non-default.
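A minimal sketch of what that opt-in flag could look like in a Go client. Everything here — the flag names, the threshold, the pruning hook — is hypothetical, not an actual Geth or Parity interface:

```go
package main

import (
	"flag"
	"log"
)

// Hypothetical CLI wiring for an opt-in prune mode; illustrative only.
var (
	pruneMode   = flag.Bool("prune", false, "enable pruned mode (non-default)")
	pruneBlocks = flag.Uint64("prune.keep", 1024, "number of recent blocks to keep")
)

func main() {
	flag.Parse()
	if *pruneMode {
		log.Printf("prune mode on: dropping bodies and receipts older than %d blocks", *pruneBlocks)
		// A pruning loop would delete block bodies and receipts below
		// head minus N, while retaining all headers and the full state.
	}
}
```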
In Peter's proposal, I think he has written up most of his assumptions, and how this should work, with the expectation that it's the default behavior. And I think going straight to default behavior is really dangerous and something we shouldn't do as an MVP. If we just ship a feature flag that says "run this in pruned mode", and have it non-default, then we can gauge how many people are actually using it. Is it something that's solving a real problem? And we can talk about making it default behavior if we can see that it's necessary and we have figured out solutions to the availability problems.

With that comes the question: do we actually need to ensure availability? A lot of Peter's post goes into how we ensure availability. Bitcoin has this mode as an optional thing that you can run, and they don't seem to have availability problems. I think it's relatively safe to assume that we won't have availability problems if this is just an optional, non-default feature that we ship. There's also much less of a coordination burden between clients if it's non-default behavior and a, quote unquote, insignificant number of users are using it. But if a significant part of the network starts using it, then coordination becomes more important and we actually need to figure out how clients sync together and things like that. So my proposal here would be that we actually ship a pruned-mode feature flag and just see if this is something that people want and use.

The last thing we discussed was what the best way is to ensure availability into the future. Peter's top suggestion was to put blocks on IPFS, which comes with a couple of additional problems: we're not actually storing IPFS hashes in the header, so to be able to look up blocks, we'd have to either introduce another hash or have some sort of gateway service that converts one hash to the other, and that can be somewhat complicated to figure out. We can add another hash to the header, either in protocol or out of protocol depending on implementation; it's not that big of a deal, but it's a little bit dirty. And it requires cross-client coordination: if we go with IPFS, we need to make sure that all clients have IPFS capability to be able to read from IPFS, et cetera. That adds another level of overhead. So something we talked a little bit about was what level of availability is required, what a high probability of availability is, and what incentive structures we can build around having availability. But I didn't hear any really great suggestions on actual incentive structures, so if anyone has ideas, I'm still open to that.

And finally, something we talked quite a lot about, especially related to the discussion of IPFS and where we store things: there is the original proposal that Robert from Parity made a couple of years ago, that we just shard up the history by identity key. You can do that in an arbitrary way, but we currently don't have any discovery method for finding nodes that store a given section of blocks. So we're dependent on discovery v5, the development of which is not that active or fast. We need v5 to be in place and deployed widely across nodes before we can use a mechanism that is inside the Ethereum network to distribute these blocks. But I think everyone in the circle agreed that this would be the best scenario, because we keep everything in the Ethereum network, where we don't have external dependencies on things like IPFS, and it paves an easier path for proper incentivization in the future.
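As a sketch of that sharding-by-identity-key idea: a node's public identity key deterministically picks a slice of history to store, so a peer can tell from the key alone which blocks to ask for. The hashing scheme and the even split are my assumptions for illustration, not what the original proposal specified:

```go
package shard

import (
	"crypto/sha256"
	"encoding/binary"
)

// ShardFor maps a node's public identity key to one of nShards history
// shards. SHA-256 over the key, modulo the shard count, is an
// illustrative choice, not a specified protocol.
func ShardFor(identityKey []byte, nShards uint64) uint64 {
	h := sha256.Sum256(identityKey)
	return binary.BigEndian.Uint64(h[:8]) % nShards
}

// BlockRange returns the block-number interval a shard covers, assuming
// history up to headBlock is split evenly across shards.
func BlockRange(shard, nShards, headBlock uint64) (from, to uint64) {
	size := headBlock / nShards
	return shard * size, (shard+1)*size - 1
}
```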
So yeah, there's an open question there of how we get discovery v5 shipped as fast as possible. But I don't think we need that to start experimenting. I think we can build our MVP, have a feature flag that enables a pruned mode, and basically say: if a lot of people want this, and a lot of people use this, then we can look into how to ship discovery v5 faster, or whatever else is necessary to get a higher probability of availability on the network. So that's roughly what we discussed yesterday, and I don't really think that we can have much more productive discussion with just the people here today. My hope is that everyone goes away and thinks about this a little, and especially that we try to reach out to the community and see what people want. If this is indeed something that is desired and people are asking for, then we can pretty easily build a prototype to serve that need. That's about it.

Yeah, so discovery v5 — I don't know exactly everything that's proposed in it. We're currently on v4, version four of the devp2p discovery protocol, and this is the next version. It includes one important thing called node records, which is a fixed-length string with arbitrary content that becomes available in the DHT. So once you've downloaded the DHT, you know which nodes have which section of the blocks; you don't actually need to go around the network pinging everyone to see who has what.

So the question is, what would an incentive structure look like? I don't have any great suggestions. Basically, the core premise is that you get paid for storing a section of blocks, or even for storing all of them: if you're a full node and you can prove that you have all of the blocks available, you get paid somehow. And this goes back to proof of storage, proof of retrievability, and proof of availability, which is a lot of the stuff that Filecoin is working on, and we could probably borrow some stuff from them.

Yeah, I agree. In Peter's proposal there were some comments, some from Vitalik, asking why wouldn't we use this, and Peter's response was basically: well, they're not production ready. And I think the argument is that IPFS is something that exists now and we could put it in next week if we wanted to. There are great bindings for Go, and we already have a bunch of stuff in Rust for the Parity client, so we could get it done really quickly. Making Swarm production ready is a longer-term project that we don't really have any estimates for, in terms of how much effort it would actually take. But I would argue that we don't need to solve this problem right now: we can still introduce a pruned mode, and then we start working on that problem and solve it properly. Yeah. No — programming-wise, we could do it quickly, but socially, I'm not sure.

Yeah, it was considered as well. It just has some additional problems, but we could come up with a standard for it relatively easily. Currently, though, Parity and Geth have different serializations. Well, I mean, there's always RLP, so we could always do that.
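Only a sketch of that "there's always RLP" point: hash the canonical RLP encoding of a block and you get a lookup key any client can derive, without agreeing on a new serialization standard. This uses go-ethereum's rlp and crypto packages, and it is an illustration of the idea, not an agreed scheme:

```go
package history

import (
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/rlp"
)

// ContentID derives a client-agnostic lookup key for an archived block
// by hashing its canonical RLP encoding, so Geth and Parity don't need
// to agree on anything beyond RLP itself.
func ContentID(block interface{}) ([]byte, error) {
	enc, err := rlp.EncodeToBytes(block)
	if err != nil {
		return nil, err
	}
	return crypto.Keccak256(enc), nil
}
```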
So the question is, can you incentivize both disk and bandwidth usage? And I think you can. No — Swarm and Filecoin and a couple of others are working on incentivized storage, and that's a hard enough problem. I haven't actually seen anyone even trying to approach incentivizing bandwidth; it's a lot harder. Yeah, maybe. So in a light client incentivization scheme, you would basically charge per request and account for bandwidth in that request. Yeah. Okay.

So the idea — can you hear me? Okay — is road mapping in reverse. We know what we want to achieve. We know what the final product is, which is compiler engines for both of the major clients, Geth and Parity, and preferably the engine for Go would be written in pure Go, since the Go team likes their code base to be all in pure Go, not pulling in dependencies written in C++ or Java or various other languages. That would be ideal. So that's the ideal situation.

What we have now is interpreters — it's not very hard to write an interpreter for every client. One member of the team, Paul, wrote an interpreter in Python; for Go, we're using an interpreter called wagon. Interpreters are easy, but for compiler engine prototypes, the serious ones come from the browsers. In Chrome there's a decent engine written in C++. There are also good efforts for engines in Rust, because Mozilla is trying to basically rewrite Firefox in Rust, so for Parity, integrating a WebAssembly engine in Rust — there's a lot of progress towards that. Yeah, I was joking this morning that if Google decided to rewrite Chrome in Go, then it would probably solve all of our problems, because we would have a decent WebAssembly engine written in pure Go. But they don't seem to be doing that.

So without having a good compiler engine in Go, one workaround we could try is compiling the precompiles using an ahead-of-time compiler, whichever one works best, and then delivering the binaries, the .exe's, to the clients, so that Geth would import an executable. They might not like this — well, we already know they're going to hate it. But this is one approach, short of having a full compiler engine written in Go. Maybe, if this is viable, the Geth team could bite the bullet and we'd be able to have WebAssembly precompiles before a WebAssembly compiler engine is available.

Short of that, we can use an interpreter; we can just start out with an interpreter. This was a proposal from Alexey: rather than trying to jump straight to having awesome compiler engines, if we just start with integrating an interpreter engine, then other teams will be incentivized, because they'll know, okay, WebAssembly is definitely in Ethereum. So if they work on a compiler engine for Ethereum, they know it will actually be used and adopted, on the main net or in 2.0, whichever. Without even an interpreter-based engine inside Ethereum clients, it's a big risk to start working on a compiler engine that may or may not ever get into Ethereum. So the downside is this: using a WebAssembly interpreter for precompiles would only work for a few precompiles. Some precompiles are prohibitively expensive, prohibitively slow, when run inside an interpreter. The SNARK pairing precompiles, for example, are way too slow in an interpreter.
Hash functions might be fast enough — hopefully they'll be fast enough when run inside an interpreter, because it's kind of dependent on the input size. So for small input sizes, the precompiles might be usable inside an interpreter engine.

Another workaround, even less ambitious, is to simply use WebAssembly as a blueprint for precompiles. We'd just be analyzing the WebAssembly blob to generate a gas rule, and that way the gas rule is simple enough to be implemented natively. This is basically how the existing precompiles are done. This doesn't help introduce WebAssembly into the clients very much at all; it makes it maybe slightly easier to add new precompiles. But still, one of our estimates is that if we only do this, then, adding precompiles the old way, we can maybe do two new precompiles in 2019, when there's probably demand for five or seven new precompiles that people want. Doing it this way is a lot of work for each one.

So yeah, that's it. These are three approaches, and we're kind of working on all three at the same time, trying to prove which ones are viable and hopefully get to the final goal. Well, we can probably — I mean, using an interpreter and a blueprint — we could specify the changes for a bunch of precompiles, but we're not confident that we could test them all thoroughly and that clients could implement them by, say, the end of 2019, with only the interpreter as the engine. I mean, we can probably specify this by May, but it's not going to be usable for all the precompiles that people want. So, be patient. Yeah, we'll see if client maintainers agree that it's a realistic timeline. Yeah, we're pretty close with the concepts as it is.

Yesterday we talked a bit about the gas costs and how difficult it is to estimate gas. All right, let's try that. So my question: yesterday we talked about how difficult it is to estimate gas, and we identified a couple of scenarios. One is that it's constant gas. Another is that you can cap it at an upper level. And the final one is that it's an undecidable problem, depending on the code. So my question is really: are there any precompiles that we can start with which are easy to do, according to this increasing ladder of difficulty? You guys want to jump in here?

Yeah. Things that are constant runtime: for example, if you know your elliptic curve, and multiplying two points or whatever you're doing is constant runtime, then that's the gas rule — you charge constant gas. Some have structure, like hashing: they work in blocks, so your input data is of arbitrary length, but they hash it in chunks — digest, digest, digest, and then finalize. So that has a simple structure. So yes, the precompiles we're looking at have structure that we can analyze. Arbitrary code: hopeless — as you mentioned, the halting problem. But yes, for what we're doing, we think we can have reasonable gas rules for the precompiles that people are interested in.
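That digest/digest/finalize structure suggests a linear gas rule: a flat base charge plus a per-chunk charge. A sketch in Go — the constants and the 64-byte chunk size are placeholders, not calibrated numbers:

```go
package gas

// Gas rule for a block-structured hash precompile, mirroring the
// init/update/finalize shape described above. Constants are examples.
const (
	hashBaseGas     = 60 // flat charge per call
	hashPerChunkGas = 12 // charge per input chunk processed
	chunkSize       = 64 // bytes hashed per internal digest step
)

// HashGas charges for the number of chunks the hash actually processes.
func HashGas(inputLen uint64) uint64 {
	chunks := (inputLen + chunkSize - 1) / chunkSize // ceiling division
	return hashBaseGas + hashPerChunkGas*chunks
}
```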
One more point. I just wanted to say: Casey said that the interpreters might be too slow for certain things like pairings. I think that might not be the case. I think we need more benchmarks for that kind of stuff.

Yeah, just to repeat: the upper-bound gas rules don't help with that — that's unrelated to precompiles that are simple enough to run inside the interpreter. We could maybe do two new precompiles that are too complex to run fast inside the interpreter, and for those we want to use the WebAssembly as a blueprint to generate the gas rules. But doing more than two seems unrealistic, with all the testing and implementation that clients would require to add those precompiles.

Okay. So, my previous question was about — I'm just looking at the objectives, because I'm determined to keep people on objectives today. We established, sort of vaguely, the initial set of changes for May 2019. The second objective is to establish a framework for designing, evaluating, and comparing change proposals. So the question to you and your team is: if somebody wants to propose something for your eWASM introduction, what are those criteria going to be? Maybe you could write it down, something like this. What are the things they need to consider, so that they don't throw really unthought-through proposals at you? What are the steps a proposal needs to go through? Do you actually want people to come up with alternative things? That's the framework I'm talking about: what are the questions people need to answer, what are the things they need to consider, before they come to you and say, hey, I've got an idea, I've got an alternative proposal to what you're doing?

I think the biggest one that people take for granted is that client teams like to keep their code bases pure. So people ask, well, why doesn't every client just use this C++ engine or something.

So — you might remember yesterday I put up four questions for you. If we were to keep those questions, then another consideration you would add to them is the client code bases, and some of the teams being opinionated about what should go in and what shouldn't. What other things would you say are important? Um, off the top of my head, nothing really comes to mind. Yeah. So if you have any ideas through today or tomorrow, add them, because I think I'm going to start with the questions that I asked yesterday, add what you said today, and then we can add some more considerations that people have to think about if they want to participate and give alternative proposals. Okay, cool.

I just wanted to add some comments on specifically this, and on some of what you said about clients and code bases as well. Something we have to keep in mind with precompiles is that there are some precompiles that will never be anything but x86 assembly — anything else is just too slow. It doesn't matter if it's wasm compiled to assembly; that's going to be too slow. So there's a certain class of precompiles that are assembly-only, that are hyper-specific, optimized assembly code.
Then there's a second class, which I think this fits really well, where it needs to be fast but not extremely fast — where it can be compiled WebAssembly. And then there's a third class where it just needs to be faster than the EVM. And there, a really good interpreter — interpreted WebAssembly — might actually be faster than the EVM, because we have 32-bit math instead of 256-bit math. So if you're doing some sort of relatively simple crypto, then WebAssembly, even interpreted, might actually be faster.

Pairing is the first class. It's assembly. Yeah. So you think that, whatever we do here, the pairing has to be in assembly to be efficient enough? Yeah, I mean, for the ones that we already have, yes. But when we're adding new ones, it will definitely be faster than an EVM version.

So what about — if we have a very good compiler, maybe an ahead-of-time compiler or something, which can compile, let's say, pairing code written in WebAssembly into machine code — do you think it's going to be fast enough or not? Or does it have to be handcrafted assembly? Depends on what you mean by fast enough, but I think it's fast enough. Did you want to say something about it? The WebAssembly version that we benchmarked comes in at maybe twice the native one, which is like 15 seconds. But the gas costs for that pairing are still calibrated to, you know, the 50 to 100 milliseconds we were trying to reduce them to before — but without native Rust assembly, what would the implementation be?

Just a quick question: does the pairing require a carryless multiply? And does WebAssembly actually have that instruction in the first place? Carryless multiply is something you need for — yeah, exactly. Exactly.

So my question here is: do we need pairings to be super fast, or is there a use for pairings which are not as fast? Because maybe we don't need to have them 100% in assembly. Can they still be useful enough? They're really useful as they are right now — that's why people are pushing to get the Parity implementation 5x faster, because the gas cost of pairings right now is — no, no, this is a different thing. There's the gas cost and there's the actual performance. Yeah, but the gas cost currently mirrors the actual performance, so to reduce the gas cost, we actually need to improve the performance. Okay, but I would still like to separate these things, because maybe we can increase the gas limit or something to make those things useful. Because what it sounds like you're saying is that for some of the precompiles, the whole approach of trying to do them in eWASM is flawed. Before we wave the white flag on this, I want to ask: even if it's slower than handcrafted assembly — by what factor, like 10 or so — could it still be useful for the real stuff or not? There are applications where you would need to do pairings infrequently.
But I think the issue is that if you price the gas costs relative to the worst performance, you're discouraging developers from using it. So it's probably not the best approach to take in the short term, just because it's not going to encourage use.

Actually, just following up on that — maybe I'm misunderstanding: if all the clients already have an optimized pairing implementation, what is hard about calling that from the wasm interpreter? Not much. It's doable. It's just that the whole point of the wasm thing is to remove the precompiles, at least in this scenario — so you would gain nothing.

But I think we will always have the super-optimized pairing stuff, and I don't think it will discourage use — even currently, its use is limited because it's too expensive, or too slow, and I think that will remain the case for those. But there's a whole sea of other precompiles that people want to add: different hash functions, a ton of different stuff that doesn't have to be hyper-optimized crypto code. So there are definitely other types of precompiles that don't need to be hand-coded assembly.

Yeah, so just to clarify: one of the reasons why wasm was included, why we're trying to get it in earlier than Ethereum 2.0, is that we see it as a meta-feature. Instead of client developers working on specific precompiles, which is really a lot of work, we basically deliver one engine which can implement any precompile. It's an optimization of our time: instead of trying to optimize things which might be used by a few contracts, we solve the whole class of problems — probably not in the most optimal way, but we're going for breadth. Look at the one pairing function we have now, the BN curve: by the time it was implemented and adopted, Zcash was already switching to the new curve.

Anybody else have a question for now? I think Fredrik made the best point: there's some class of problems that are reasonable for interpreters, there's some class of problems that are not reasonable for interpreters, and there's some class that we can't even use with Ethereum now, because we would need FPGAs or ASICs for them. So there are classes of problems. The interpreters give us access to some sorts of problems, like hashing — BLAKE and certain things — but perhaps not other things. But this is a process: as we go, as we improve, we shift our gas prices from interpreters to compilers as our compiler infrastructure improves. We have to take a first step. This isn't going to just magically appear — audited compilers, everything — it isn't going to just appear. We have to take a first step, and that's the reason we want to start with interpreters: to tell people, hey, look, we have a need for compilers; we need to audit them; we want them to perform; we want guarantees from them. Without doing this interpreter step, maybe there's no incentive to do the compiler. So I think that's the benefit of doing it. Yeah, I completely agree with that. And it's just a matter of time then. Yeah.
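To make Fredrik's earlier point about word sizes concrete: the EVM does 256-bit arithmetic, while a wasm interpreter works on native 32/64-bit words. A throwaway Go micro-benchmark of the gap — indicative only, since it models 256-bit math with big.Int and ignores interpreter dispatch overhead entirely:

```go
package evmmath

import (
	"math/big"
	"testing"
)

// 256-bit add, roughly what each EVM ADD has to do internally.
func BenchmarkAdd256(b *testing.B) {
	x := new(big.Int).Lsh(big.NewInt(1), 255)
	y := big.NewInt(12345)
	z := new(big.Int)
	for i := 0; i < b.N; i++ {
		z.Add(x, y)
	}
}

// Native 64-bit add, the kind of word wasm operates on directly.
func BenchmarkAdd64(b *testing.B) {
	var z uint64
	for i := 0; i < b.N; i++ {
		z += 12345
	}
	_ = z
}
```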
Why is this not working? Found it. So I just want to have a brief meta-discussion. Me, you, and Greg are here from the perspective of dapp developers, which means we're not as much in the know about the real problems that the core developers are having. But I think it's good to just step back a little from the specific implementations and look at the higher-level problem that we're trying to solve here, because I don't think there is a lot of clarity right now about what specifically the problem is. We have a clear goal: we want to keep Ethereum 1.x running. But one of the things I learned yesterday is that, for sure, Ethereum performance is really, really complex. For example, something like storage gas has little to do with actual on-disk storage; it's more complex things that make operations suddenly become more expensive. So gas is a very crude signal, and any discrepancy between the gas cost and the actual resource consumption of the system — which could, in theory, grow without bounds over time — will inevitably become a huge performance problem in the system. So we need to make sure that whatever the actual cost of something is and the gas cost of something is, they stay within some finite bound of each other. Otherwise, this will inevitably lead to problems.

So now that we've sort of defined the problem space, the other thing we need to do is define the solution space. What are our options? What are we willing to do? What are we not willing to do? Where are the trade-offs? How much pain are we willing to suffer in order to solve certain things? How urgent are things? One thing I realized is that, at least in some respects, nodes are not fully optimized yet. In terms of the peer-to-peer network, there are probably a lot of optimizations that can still happen to reduce bandwidth consumption. So it might not be necessary to break the protocol in order to reduce bandwidth consumption, because there are lower-impact changes we can make in the wire format first. And we need to be a little careful here when we start breaking the protocol for things that are not heavily optimized in the nodes yet, because we might end up making changes to the protocol that we have to carry forward into the future, and that might turn out not to be necessary as we develop the nodes further. This is something I'm a little worried about. Fortunately, it seems that nodes in many critical areas are already heavily optimized, and where they aren't, we should be able to estimate how much of an improvement we can expect from future development, and how large the impact of certain protocol changes would be.

Another goal we have is to remain implementation agnostic: we kind of want to give all the nodes the freedom to implement things however they see fit. But this is directly at odds with having accurate gas costs and solving concrete performance problems, because those are always dictated by a particular style of solution. So these two points in the solution space are actually a trade-off, where we need to collectively agree on where we want to be between one and the other.

Another question is: what do we want to break in the protocol? The protocol has a lot of things it has historically guaranteed by virtue of the yellow paper, and it is obvious now that we're going to have a series of hard forks that are going to break some of the things that were specified in the yellow paper. And it's not entirely clear where we draw the lines here, what the boundaries of this are.
For example, it seems that we are completely fine with destroying GasToken: the storage proposal and some of the other changes are going to utterly destroy the token, and I don't think anyone has a serious problem with this, because the assumption is that GasToken is based on something you can't rely on. However, we were kind of shocked when the assumption that you cannot do an SSTORE for 2300 gas started breaking, and it caused a big incident. So this was a trade-off between core developer resources and urgency.

So this sort of defines the solution space, and again, I'd like to give the dapp developer's perspective here. The main thing is: how do I, as a dapp developer, know which things I can reasonably rely upon to be there in the future, and which things I should consider as "yes, this is what the protocol does now, but this is likely to break soon, because it was not meant to last forever"? We specified the current behavior really nicely in the yellow paper, but what is not clear in the yellow paper is how this will evolve going forward, and it would be nice to add that.

Now, big shops — let's say Maker, 0x, you name it — honestly, break whatever you need to break. We can migrate. We have designed our systems to be upgradeable. We have developer resources; we have the skills in house to deeply understand the changes; and if you just give us at least a path to a migration, we will be able to find it, implement it, and take it. So don't worry too much about us. What you should be worried about, I presume, is the medium-scale developers and the individuals, because they might not have the resources to implement migration paths, or they might have locked themselves into a position where a migration path is really hard. Let's say, for example, that I'm an individual who thought it was fun to become an absolute 0x hodler, and I locked all my 0x tokens into a contract that locks them for three years, and now storage rent comes and this contract gets slowly drained. What happens to my tokens? Is there a migration path here? That's the sort of thing that I would assume is a bit more challenging.

So yeah, my key takeaways here are: we need to define what our performance targets for the system are, in order to define the problem that we're trying to solve, and we should be documenting any invariants that the system currently gives and to what extent developers should be depending on them. Thank you.

Anybody have questions for Remco? Cool, thank you very much. So, you probably saw this yesterday, but I'm just going to highlight what I've changed since yesterday in my slides, so it shouldn't take a long time — based on the feedback that I received, including this one as well. So basically, hello. You probably remember I showed this yesterday. My premise was that we have four main performance degradations that would be caused by the growing state size, numbered one, two, three, four. For example, if a snapshot sync takes more than an hour, then by the time the sync is complete, the peers would have pruned the state that the new peer wants to sync — and that clearly is getting worse with increasing state size.

So what we want to do is run emulations to determine what that function is. We think this function has three arguments — state size, bandwidth, and pruning threshold — and the output is the success rate, and we could investigate the dependency there: what happens when your state size starts growing. The next thing we could do is for degradations one, two, and three: there we have an idea of what the function is, but we don't know the coefficients. For example, you can probably intuitively tell that simply reading the state is cheaper than sealing a block, where you modify the state — but how much cheaper is it? So again, using some emulations, we can put some really rough coefficients on these functions. And after that, once we have very rough functions with relative coefficients for each of these performance degradations, we use simulation to try to predict the events we're worried about: when are we going to be hit by these problems? The problems could be new Geth nodes mostly unable to do a snapshot sync — or new Parity nodes, which have a different syncing mechanism, so they will probably experience this problem at a different time and a different state size. Another thing to look for is: when is block processing going to take 10% of the inter-block time? When is it going to take half of the inter-block time? Because all these things will start affecting more and more facets of the performance. So that's kind of the plan.
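A toy version of the fourth degradation — the snapshot-sync race — as that three-argument function. The linear time model and the units are assumptions made for the emulation, not measured client behavior:

```go
package statesim

// SyncSucceeds models the race sketched above: a fresh node must
// download the whole state before its peers prune past the root it is
// syncing against. Success is approximated as "download time fits
// inside the pruning window".
func SyncSucceeds(stateGB, bandwidthMBps, pruneWindowSecs float64) bool {
	downloadSecs := stateGB * 1024 / bandwidthMBps
	return downloadSecs < pruneWindowSecs
}

// SuccessRate sweeps a range of state sizes with bandwidth and pruning
// threshold held fixed, showing how the rate falls off as state grows.
func SuccessRate(sizesGB []float64, bandwidthMBps, pruneWindowSecs float64) float64 {
	ok := 0
	for _, size := range sizesGB {
		if SyncSucceeds(size, bandwidthMBps, pruneWindowSecs) {
			ok++
		}
	}
	return float64(ok) / float64(len(sizesGB))
}
```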
The next thing I'm going to show you is the additions that I've made to the state rent proposal since yesterday. Okay, so just scroll to the pieces that have changed. Yes — so you remember yesterday I was trying to explain the lockups with this analogy. I think Andrea came up with a different analogy, which I didn't put in the pictures, but yesterday I used an analogy where the contract is like a glass separated into sections. Here we've got six sections: this contract has six storage items, and two of them currently have lockups on them. The idea is that after the introduction of lockups, all the new contracts will always be "full". So in this example, you create the contract, it's got no storage, and then you have an SSTORE which sets some values, and it keeps growing. Then you see that at some point, when you simply change a value, the storage size doesn't change and no lockup is necessary — you can see that tx.origin doesn't need to pay anything. But when tx.origin subsequently frees an element of the storage, the excess lockup gets returned to tx.origin.

In the case where we have a pre-existing contract — again, this picture demonstrates it — we start making changes, which could be adding a new item, which needs to have a lockup with it; but even if we just modify an item, as in the second transition, we still have to put up the lockup, although the storage size doesn't change. Then we do another modification, and eventually we nullify — so in the last transition we reduce the storage size, but we don't get anything back.
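To pin down the glass analogy, here is a toy sketch of the two counters involved. The names, the price constant, and the exact transition rules are my illustration of the slides, not the proposal's specified semantics:

```go
package rent

// A contract is "full" when every storage item carries a lockup; new
// contracts are always full, pre-existing ones fill up as items are touched.
type Contract struct {
	StorageSize uint64 // number of storage items
	Lockups     uint64 // number of items with a lockup posted
}

// Example price only; the talk derives 0.02 ether per item later on.
const lockupPerItemWei = 20_000_000_000_000_000 // 0.02 ether in wei

func (c *Contract) Full() bool { return c.StorageSize == c.Lockups }

// AddItem: a new storage item always posts a lockup, paid by tx.origin.
// (No balance checks in this sketch.)
func (c *Contract) AddItem(txOriginWei *uint64) {
	*txOriginWei -= lockupPerItemWei
	c.StorageSize++
	c.Lockups++
}

// ModifyItem: free on a full contract; on a pre-existing, not-yet-full
// contract the toucher still posts a lockup (the migration-path case),
// even though the storage size is unchanged.
func (c *Contract) ModifyItem(txOriginWei *uint64) {
	if !c.Full() {
		*txOriginWei -= lockupPerItemWei
		c.Lockups++
	}
}

// FreeItem: deleting a locked item returns the excess lockup to
// tx.origin; deleting a never-locked item on a pre-existing contract
// returns nothing, as in the last transition on the slide.
func (c *Contract) FreeItem(txOriginWei *uint64, locked bool) {
	c.StorageSize--
	if locked {
		c.Lockups--
		*txOriginWei += lockupPerItemWei
	}
}
```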
Ah, a good question — oh yeah, sure. Fredrik, can you give him the microphone? I wonder: why are you deducting balance from tx.origin instead of using some opcode for that? You mean a new opcode? Yeah, a new opcode which would increase the lockups — like pay rent — and if the lockups are not enough, the SSTORE would fail. The reason is that the whole reason the lockups were introduced is to provide the existing contracts with a migration path. And of course, if you need to put a new opcode in, it means you have to rewrite the contract. So if we want to keep the existing contracts alive for a while, while they're migrating, we have no choice but to modify the semantics of the current operations.

Actually, something came up this morning which I didn't put into the presentation yet. We had a little chat, and I want to explain a bit more why we need this migration path. Shortly, the explanation is this: if there are multiple contracts which have a synergetic relationship — let's say three of them using each other — then these three contracts will have to upgrade stepwise. The first contract rewrites its code, but while it is migrating its clients, another contract which uses it has to use the old version. Then they say, okay, we're done migrating; now contract number two can switch to the new version and start migrating itself, and so forth. So this could be a whole chain of migrations, and that's why we need this mechanism, to keep them alive while they're migrating, because it cannot happen in one big bang. Cool, thanks. Yeah, let's discuss later, but the explanation may not be enough for those kinds of situations.

Okay, so what else have I changed since last time? Okay, so here is another discussion we had: how big should the lockup price be? At the moment, one of the methods we're thinking about is to pick a target storage size — we could discover, using the simulations I just mentioned, at what sort of storage size everything is going to stop working — and some sort of target lockup, how much ether we want to be locked up when we completely reach that target storage size, and we just divide one by the other. So if we, for example, take 500 million items, which is about three times more than we have at the moment, and we say, okay, in order to completely fill that up we will require 10 million ether to be locked up, then we arrive at a lockup price of 0.02 ether per item — which is quite expensive, I would say.

So what else did I add? I fixed this diagram from yesterday. In this diagram yesterday I had the potential modification to rent balance and balance, but now I've cleaned it up, so it should be clear that during the eviction check, if the eviction check is not successful, then nothing gets modified. And another thing: here I clarified that the pay rent opcode actually has two arguments, the target contract and the amount, which means that you can call it from a third-party contract to top up somebody else's rent balance.
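A sketch of that two-argument pay-rent opcode — the names and the uint64 amounts are illustrative, not the proposal's specification:

```go
package rent

import "errors"

// Account holds the two balances the diagrams distinguish: the normal
// ether balance and the separate rent balance that eviction checks read.
type Account struct {
	BalanceWei     uint64
	RentBalanceWei uint64
}

// PayRent takes a target and an amount. Because the caller only needs a
// reference to the target, a third-party contract can top up somebody
// else's rent balance, which is what keeps the migration chain alive.
func PayRent(caller, target *Account, amountWei uint64) error {
	if caller.BalanceWei < amountWei {
		return errors.New("insufficient balance to pay rent")
	}
	caller.BalanceWei -= amountWei
	target.RentBalanceWei += amountWei
	return nil
}
```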
And lastly, we had a discussion yesterday: what if we get the constants wrong? In this proposal there are four different constants that have to be introduced — account rent, code rent, and storage rent, for the different aspects of the rent, as well as the lockup price. If we realize that we have underpriced or overpriced rent, then we can probably just change it via a hard fork, and at the moment I don't see any major problems with that. But the lockup price is a bit different, because we currently have one simple counter per contract that counts how many lockups we have. So if we decide to change the lockup price via a hard fork, what we would have to do is introduce a second counter into each contract, called lockups two. In this case, the condition — remember, there was a condition that determines whether the contract is "full", which means it doesn't have to pay storage rent — would change from simply "storage size equals lockups" to something a bit more complicated but still manageable, which includes two prices, the old one and the new one, and basically says that we track both of these counters. And if we need to do a third hard fork, then of course we can make the formula a bit more complicated again. So that's the trade-off: obviously, introducing a variable lockup price would eliminate this problem, but the variable lockup introduces even more complexity into the mechanism.

Okay, so that's it — those are the updates from yesterday. Any more presentations for now, or should we just do a breakout? Okay, cool, breakout. So we can stop the live stream for now, and we'll do the breakout.