...questions we're going to try to go through, but if any of you have a question you can just ask it, or if you have a topic you're passionate about you can also just walk up here, kick me out, sit down and discuss the topic. If you don't even want to speak up, you can also submit a question on the Slido over there and we'll try to take that in. I have to apologize in advance: somehow I've been scheduled for two different panels that overlap each other, so I have to leave at 12:30, and whoever is loudest has to take over for me. Even though I said that we shouldn't be special, I think you should introduce yourselves in one sentence; whoever comes up here should do the same. So, Nick, maybe you want to start.

Hi everyone, I'm Nick Johnson, formerly a Geth core developer, and now I work on ENS, the Ethereum Name Service.
Hi, I'm Jacques and I work on Vyper.
Hi, I'm Paweł, I work mostly on the EVM.
I'm Alex, I'm working on Solidity and Ewasm.
Hi, I'm Casey, I'm also working on Ewasm.

Alright, let's jump in to some of these questions. Maybe start with a big one, which is the pricing of computational and arithmetic opcodes. I think the discussion started today with the evmone and optimizations talk; I'm not sure if any of you saw it, but there was also a paper out showing that some instructions may be mispriced, and in the Istanbul hard fork you may have seen EIP-1884, which also proposes to reprice a couple of instructions. So what do you think about instruction pricing? Is everything correct?

I mean, it's kind of impossible for it to be correct across all machines. The best we can do is try to target an average machine, but with different resources, different speed SSDs, different RAM architectures and so on, it's never going to be completely correct. We just have to try to arrive at a pricing scheme that isn't so far off base that it leaves us subject to DoS attacks, and that's why we have to do things like EIP-1884. The disruption to existing smart contracts is a problem, but it's an unavoidable one, because the alternative is that we just sit with what we've got until somebody figures out a contract that's cheap to pay for but costly to execute, and uses it to DoS the network, knowing we can't change it.

Just to add to that: I think gas repricing is tough, but it has to happen, and it's also tied to where we want to go in the future of the project, for example deprecating precompiles and the adjustments we'd have to make for that. What I also want to highlight: we often say we want near-native performance, or whatever it's supposed to be called, but we are also on a blockchain, so we should keep in mind that it's never going to be truly native performance. That's why we will have certain functions as precompiles, even though ultimately we want to be able to do crypto in the EVM.
So I think there are actually two problems here. Most of the repricing that happened, and the papers, are about accessing data external to the EVM. It's not really about the EVM at all; it's about fetching something from the outside, say from disk, and that's what we mostly struggle with. On the other hand, the computational opcodes are actually highly priced compared to the time it takes to execute them, so we have a big safety margin there. I would even claim that if we priced all the computational opcodes at the same amount, it wouldn't matter, because the price is still so high compared to the other costs that it's not exploitable. I mean, the EXP opcode was exploited, but that's more of an algorithm than a single operation. The worst cases are probably the quadratic opcodes that do 256-bit multiplication and division, but I still think the margin is big enough that it's not a big deal from the security point of view. They could be priced lower if that helps, for example, to move more towards stateless contracts, so that you can compute more instead of storing more.

So you're saying the computational ones are overpriced today. Initially I thought you were happy with that, but now you say you would be happy to lower them to allow more computation?

Yes, personally I would like to lower them. The prices are not currently accurate relative to each other, in terms of the time the opcodes take to execute, but somehow that's not exploitable from the security point of view. If you want fairer prices, I think they should definitely be lowered by a big margin.

In the case of state-modifying opcodes, the reason for their cost is not just execution time, it's also state growth concerns. If we reduce the cost, we make it proportionally cheaper to store more data, so I think we need to balance that as well.

We're talking about reducing the cost of just the computational ones right now. I thought you were talking about reducing the cost of...
Not exactly; storage is really a separate question. But the two topics do connect: if you're moving towards stateless contracts, then you wouldn't want the computational opcodes to stay expensive, otherwise trading storage for computation doesn't pay off. I don't really know what the limit is, and of course it's going to be different for each client: the lower we make the cost of computational opcodes, the more the dispatch overhead matters for naive interpreters and so on, and the higher the bar for writing clients.

The main constraint is the price of call data, so luckily it's great progress that the EIP for reducing the cost of call data is going into Istanbul. If it wasn't for that, I'd be pretty pessimistic, but that was great progress, and it's about halfway to where we want to go. If we do that again in the next fork, once the price proposed for Istanbul has shown the network stays stable, we can reduce it further, and then we'll be well on the way. The other difficult part is going to be versioning. Versioning will be necessary to prevent breaking existing contracts, but if we have versioning, then we can propose new costs, these new costs will actually be cheaper than the old costs, and the existing contracts that are bloating the state will eventually be phased out, because people will be incentivized to move to the cheaper, stateless contracts.

Yeah, I think versioning is a great idea, also because it kind of solves the problem of: I have this contract and I need to migrate, but how am I going to do that? Some contracts are deliberately not upgradeable, and in that case I really want versioning to account for them.

So if versioning gets done, is it a good idea or is it not?
Versioning would also help with EIP-663, with the new SWAPs and DUPs.

Maybe before we jump on: are you dapp developers, are there any dapp developers here? Are you writing contracts, and are you happy with the prices? What are you not happy with? The gas price, as in: do you pay too much for transactions, or do you have to wait a long time? I guess it's a double question, right. Are you storing a lot of data, is that the main challenge, or just trying to compute something?

Just trying to compute things, yes. I recently deployed contracts where the calls verify a signature, and they cost a lot of gas, so I wasn't happy with that. And for deployment, the gas prices are very volatile: sometimes two gwei is fine, sometimes you pay 50, and you can't predict beforehand what it will be. That's very frustrating.

That's where EIP-1559 comes in; it's the gas market, right? Yeah, that's what I maintain. We can improve the efficiency of the gas market, but it's always going to be highly volatile, because when you're below 100% capacity it gets as cheap as the miners are going to accept, and once it hits 100% it shoots up to a price that's high enough to make some people reconsider and not submit transactions.

Do you think gas futures have some future here, as a way to hedge this market?

The problem is that because mining is permissionless, you can't really sell a future and then be guaranteed to be able to redeem it. The miner who is asked to redeem it might not be the miner that issued it, and they have no reason to accept that promise. Nobody's come up with a futures model for that that's actually effective.

There's GasToken, but that's exploiting an issue with the refund mechanics, where you can basically force the miners to do twice as much work as you're paying for, and it's not really scalable. I kind of wonder if we should introduce an opcode that does nothing but consume gas and then later return it to you, because it would be awful, but at least it wouldn't be wasting storage the way GasToken does.

I wish I'd worn my GasToken t-shirt for illustration today. The funny thing is, it was warned about in 2015 in the Least Authority report, the security analysis before Ethereum launched. The way they worded it was miner-versus-miner storage bombs, and it's not exactly what GasToken is, but it has the same effect on storage.

If I could ask a question: wouldn't it make sense to look at how refunds actually affect what they're designed for? Do they do more damage than good? What's the data on that?

Well, GasToken usage we could measure, but we don't really know the actual use of refunds. Broadly the intention is to incentivise deletion of data, but, and I don't have hard data, I think it's a very weak incentive; it may be almost completely ineffective.

Somewhat related: we say that if we have a stateless mode, then sure, on that layer we can maybe price gas differently, but within the execution environment it's going to have the same storage problem. It's a pricing problem. Are you mixing it up with eth2?
Well, I mentioned stateless contracts, meaning you can design your contract in a way that trades storage for computation, right.

Right. A good example is state channels: they just store a single hash for each channel on chain, and any time you want to make a change to the channel, you have to submit the data underlying the hash, and they save a great deal of gas that way. The DNSSEC oracle does the same thing: it stores the hash of the DNS records, because there's no reason to store the whole thing, and I think that cut our gas consumption by more than half on that contract.

One note on that: you could also use incentives to do the calculations. If you have, say, a two-party system that does trading, and you have a formula that derives the midpoint, both parties have an incentive to calculate the closest value, so you could cache the value in storage and have them submit to the contract to get the actual value. There are a few interesting stateless things you could do.

So Nick, in the DNSSEC oracle, you said you submit to the contract the data needed to arrive at the hash. How big is the data you're submitting?

On the order of one entry, basically. I'd say something like 512 bytes would be a sensible maximum size for a DNS entry. You submit a record that contains a public key, the contract verifies it against the key it was signed with, and then stores the hash of it. When you submit the next one, signed by that key, you submit it together with the previous record, it verifies the hash, and so on, so you sort of chain it. It's not a lot of data, and it's a lot cheaper to send it twice via call data than it is to send it once via call data and store it forever.

There's a threshold: if you do have a lot of data you need to pass in, and you need to do a lot of computation on it to arrive at the state you store, there's a trade-off point where it is cheaper to store the stuff. In this case, obviously, when you're passing in the redundant data, the only computation you have to do is a simple keccak over it to verify that it's the same as what was passed in and verified last time. We're not repeating a lot of computation, because the real verification is only done once, and therefore it works out well.

So I guess, too bad for the panel that your use case works so well. The issue is when it requires Merkle proofs, because then all that Merkle data is what makes the call data expensive.

Interesting. You said currently passing in just the DNS data is about half the cost of storing it?

Yes, and that's about to be four times cheaper after Istanbul. I suspect most of the remaining cost is the computation cost.
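To make the hash-chain pattern Nick describes concrete, here is a hedged sketch in Python standing in for the contract logic. The record layout, names, and the signature check elided below are illustrative assumptions, and Python's sha3_256 stands in for the EVM's keccak256; the idea matches what was said: the contract keeps only the hash of the latest record, and each update resubmits the previous record as call data so it can be cheaply re-verified instead of stored.

```python
from hashlib import sha3_256  # stand-in for the EVM's keccak256

class HashChainOracle:
    """Stateless-style oracle: store one 32-byte hash, not the records."""

    def __init__(self, genesis_record: bytes):
        self.head = sha3_256(genesis_record).digest()

    def update(self, prev_record: bytes, new_record: bytes) -> None:
        # The caller resubmits the previous record via call data; we only
        # re-hash it (cheap) instead of having stored it (expensive).
        if sha3_256(prev_record).digest() != self.head:
            raise ValueError("previous record does not match stored hash")
        # A real oracle would verify new_record's signature against the
        # key contained in prev_record here; elided in this sketch.
        self.head = sha3_256(new_record).digest()

oracle = HashChainOracle(b"root key record")
oracle.update(b"root key record", b"next record signed by root key")
```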
Do we still need the code deposit limit, I think it's EIP-170 or so? Because...

Do you mean the contract size limit?

Yes, exactly. I saw some discussion on the Ethereum Magicians forum that there is some debate, and I'm running into this limit more and more. Is it really needed?

So the problem is that when you start executing a smart contract, you have to load the whole thing off disk and also do some basic jump-destination analysis over the whole thing. If the size of that is unbounded, then somebody can submit a very large contract that only executes, say, two opcodes before exiting, and the VM has to do a great deal of work for very little gas. That opens up a DoS vector, so some sort of limit is necessary; maybe it could be increased.

Isn't this limited by the gas price anyway?

No. You pay a fixed amount of gas for a CALL opcode, the same amount regardless of how large the contract is, but the amount of work we have to do at run time varies depending on the size of the contract being called.

So you'd need to change the cost of CALL to depend on the size of the contract, or the VM implementations would need to be restructured to do all of the analysis at deploy time instead of call time and store it on disk, so you don't redo it every time.

Yes. And I kind of abuse that limit as a VM implementer, because it tells me that when I build my software, the code size will not be greater than that, and I can make assumptions based on it. As Nick said, there's some analysis you have to do when you load the code and are about to execute it. There are ways to work around that: some of the analysis can be done lazily, but that's much more complicated than what we can do now, which is just scan the code once; and the analysis can also be cached. I'm actually hugely exploiting the limit, doing much more work in this analysis, because I know the code size won't be huge when I get it. Deployment is limited by the gas price, you pay per byte to deploy, so you cannot deploy arbitrarily big contracts, but for me that's not a real guarantee, because the block gas limit changes later on. When I make design decisions implementing an EVM, that gives me no strict bound; I cannot guess what a reasonable size is, so the explicit limit helps. But if it's a problem for developers, I guess there might be a way to figure out a solution. It's not easy; there are trade-offs to consider.

Is your software limited to, say, a 64 KB code size?

The current limit is 24 KB, something like that. But for example: if you sum up the base cost of all the code, I know it will not exceed a 32-bit value, because the maximum opcode cost is CREATE at 32,000 gas, and multiplied by the maximum code size that's still within the 32-bit limit. So I know I can store that information in 32 bits, and there are many things like this. I can also generate the worst cases for this analysis and check how much time it takes, as a security precaution: even with the worst possible ordering of instructions, the one where the analysis takes the longest, it stays within reasonable limits in terms of time spent.
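A hedged sketch of the jump-destination analysis Nick mentions, the pass a VM must run over the full bytecode before the first opcode executes. The opcode values are the real ones (0x5B is JUMPDEST, 0x60 through 0x7F are PUSH1 through PUSH32); the function shape is illustrative.

```python
JUMPDEST = 0x5B
PUSH1, PUSH32 = 0x60, 0x7F

def analyze_jumpdests(code: bytes) -> set[int]:
    """Return the set of valid jump destinations in `code`.

    This is O(len(code)) work that must happen before the first opcode
    runs, which is why an unbounded code size would be a DoS vector.
    """
    valid = set()
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == JUMPDEST:
            valid.add(pc)
        if PUSH1 <= op <= PUSH32:
            pc += op - PUSH1 + 1  # skip the push's immediate bytes
        pc += 1
    return valid

# Example: PUSH1 0x04, JUMP, STOP, JUMPDEST -> only offset 4 is valid.
assert analyze_jumpdests(bytes([0x60, 0x04, 0x56, 0x00, 0x5B])) == {4}
```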
To deal with the code size limit, is the recommendation to deploy libraries and use DELEGATECALL? That's actually one of my questions later.

So you have at least two limits in your code: one is this assumption for the gas counter, which takes you to 32 bits, and then maybe you could also use a 16-bit value for the code offsets. Do other implementations do the same? You may be familiar with the C++ implementation.

There were definitely optimizations a while ago making some of these things, such as the gas counter, 32 or 64 bits in the implementation. That comes down more to the total gas limit than it does to code size, but there are definitely optimizations of that sort, and I think mostly they could support a much larger code size before those assumptions started to be a problem.

There's nothing magic about 24K, but I think we need that order of magnitude, and I think we need to be quite cautious if we talk about expanding it. I also feel that contracts too large to fit into 24K are kind of a code smell. I'm not saying there's no use for a contract larger than that, because that's definitely not true, but it's often the case that it represents trying to make a contract do too much, or that you should be modularizing it more. There are definitely contracts that run into this for legitimate reasons, but your first reaction when you run into it should be to do a review of your code and ask: am I trying to cram way too much stuff into the EVM or on chain? Is there stuff I can take off chain, stuff I can log instead of computing and storing, things like that, or can I modularize it better? In the case of the DNSSEC contracts, for instance, there's a bunch of DNSSEC algorithms, and each of those is a separate contract that it calls out to, which makes it more flexible and also ensures you're not trying to cram it all into a single contract.

Is the limit imposed on the init bytecode when I upload it, or is it imposed on the code deposit, the deployed bytecode?

On the latter. Which also means that the gas limit isn't as much of a limit as you might think, because the deployed code doesn't all have to come from call data: you can run init code and generate some of it.

But the block gas limit is the limiting factor for the code deposit anyway. When the limit was imposed, the block gas limit was around 4 million, and that was the benchmark; now it's at 8 or 10 million, so at least double, while the amount of gas required per byte of deployed contract hasn't changed. We'd still be asking clients to do more work for the same amount of gas if we increased the code size a lot.

Does modularising make it more expensive to run? Does the EVM do inlining on the back end? What are the cost considerations?

It will often make it more expensive to run, but not necessarily. In some cases, even without splitting the contract into several different ones, modularisation can actually make it cheaper, because there are smaller dispatch tables and so forth.

Are there tools that help with this kind of factoring?

Not that I'm aware of. But the dispatch table itself was optimized; it's a binary search now.

So it's O(log n) instead of linear, just from having several contracts to call into? Yes.
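To illustrate that dispatch-table point, here is a sketch in Python, with real ERC-20 selectors but an invented table, of why a sorted, binary-searched selector table turns dispatch from O(n) into O(log n) comparisons:

```python
from bisect import bisect_left

# Hypothetical selector table, sorted by 4-byte selector value.
SELECTORS = sorted([(0x095EA7B3, "approve"), (0x70A08231, "balanceOf"),
                    (0xA9059CBB, "transfer")])

def dispatch_linear(selector: int) -> str:
    # What a naive dispatcher does: compare against every selector in turn,
    # so a contract with n external functions pays O(n) work per call.
    for value, name in SELECTORS:
        if value == selector:
            return name
    raise ValueError("no matching function")

def dispatch_binary(selector: int) -> str:
    # The optimized form mentioned on the panel: a binary search over the
    # sorted table, O(log n) comparisons per call.
    keys = [value for value, _ in SELECTORS]
    i = bisect_left(keys, selector)
    if i < len(keys) and keys[i] == selector:
        return SELECTORS[i][1]
    raise ValueError("no matching function")

assert dispatch_linear(0xA9059CBB) == dispatch_binary(0xA9059CBB) == "transfer"
```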
So I also have my own theory about the idea that we were supposed to have a bunch of contracts that we can reuse repeatedly, kind of like a DLL or a shared library. Can I go into that? Initially, if you look at the way the call types were split up, there was CALLCODE for calling into a library, and then DELEGATECALL, which turned out to be a better version of it. I still like that idea when I think about how I use contracts, and what I've been battling with is the fact that there's a static cost to making a call: it's the same amount of gas every time, it's quite high, and it doesn't really reflect what happens when you're actually running. What I would like to see is something more like what your operating system does: it pays the cost of loading a shared library from disk once, and once it's in memory you can just call into it again, and you get that amortization. So within a transaction, the first call is priced at something, and then it scales down to something cheaper as you repeat it.

Well, that's the CALL equivalent of net gas metering for SSTORE, and I think it makes sense for the same reasons. The other option is that we could introduce a system that allows people to propose well-used contracts as, effectively, built-ins. You could say: this registry is now a built-in and costs 50 gas to call instead of 700, because we expect it to be called a lot and held in memory by all clients. My pet example is of course SafeMath, which everybody uses, and if it was cheap enough you could effectively make it a precompile.

Are you suggesting moving these into special contracts, or are you suggesting some kind of advanced caching algorithm for built-ins?

The way I'm proposing it, an EIP would say that the contract at this address is now considered a special contract with special gas rules, and individual implementations can choose how to handle that: precompile it, keep it loaded, it's deliberately left open, but you know it ahead of time. It's basically the approach of precompiles now, except with actual bytecode rather than native code.

Yeah, and you can start off just by lowering the gas price, and clients can see how it is used, so we can take a very safe approach. But to speak to the size of the problem: people don't want to pay that 700 gas repeatedly in a bulk fashion, and if we have a cost that decreases as the contract gets called repeatedly, they will only pay the 700 once and then it gets stepped down, and that solves the problem. Today the main place DELEGATECALL gets used is upgradeable proxies, where the gas cost matters less, and not calls out to libraries, which is what we actually wanted it for.

So, Martin, do you want to introduce yourself very quickly?
Yeah, I'm Martin Holst Swende. I work on the Geth client and on security at the Ethereum Foundation, for the Ethereum infrastructure and the EVM, and I was at another panel.

We were just discussing call pricing, and pricing in general, and where we ended up, I think, was Nick proposing that some regularly used contracts and libraries would benefit from a lower cost.

Right. So I have a problem with a general idea that comes up sometimes: clients have caches, so we should make the things that are in the caches cheaper. My problem with that is that caches today allow a degree of freedom. We can have whatever cache eviction policy we want, least-frequently-used or least-recently-used; if you have a big machine, you will have a large cache, and if you're running on a Raspberry Pi, you have a very small cache and it will be slower. But if we say the cache should contain this, and this should be cheaper, then all of a sudden we encode the cache and its eviction policy into the consensus engine, and we encode the cache size into the consensus engine, and we need to make sure that if something is put into this cache and then reverted, we clean it up again so it's not sitting around; we need a journal covering this cache. Basically it's no longer a cache. We've added a new consensus structure with very strict rules on what needs to be in it, what must not be, and when things are removed. The whole cache becomes just another consensus structure, which doesn't help the clients but makes them more complicated. So if we want to make something cheaper, it shouldn't be something like "anything executed in the last 10,000 blocks"; we need to be very clear about what should be cheaper, and why, and what kind of data structure we need to maintain.

I agree, and I'm actually against caches in general in the EVM, because a cache is very easily attacked by an attacker who knows the eviction policy, or the likely eviction policy. We saw this in the Shanghai attacks and we've seen it elsewhere: all they need to do is make sure that every item they fetch is out of the cache, so caches work when you don't really need them and fail when you need them most. You always have to assume the worst case, that everything is out of the cache, and therefore you can't actually price more optimistically; the best you can do is that when times are good your sync is slightly faster, or maybe a lot faster when you're catching up.

What I was proposing is one of two things. One is effectively an extension of net gas metering to apply to call operations, so if you call the same thing repeatedly in a single transaction, it costs less the second and subsequent times. The other option is a way for EIPs to say: the contract with bytecode X at address Y is considered a public good and is used regularly, because it's, say, the SafeMath code or something, and calling it will only cost 20 gas, because we expect every client to load it into memory on start-up. It would be a very limited set of small contracts that work, basically, as a standard library.

On the first idea, I think it would make sense but probably have limited usability. The second idea, yeah, that would probably be really cool and useful.

It might be more useful than you think, because it would enable a library with a series of utility functions that you call repeatedly from other contracts, whereas currently such calls cost you 700 gas each, so you could defer a lot of your implementation out to a library at low cost.
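A minimal sketch of the first option, net gas metering for calls: full CALL price the first time an address is touched in a transaction, a discount on repeats. The 40-gas warm cost and the class shape are assumptions for illustration, and, as Martin's cache argument implies, a consensus version would also have to journal the warm set across reverts.

```python
COLD_CALL_COST = 700   # today's static CALL cost
WARM_CALL_COST = 40    # hypothetical discounted cost for repeated calls

class CallMeter:
    """Per-transaction metering: addresses already called become 'warm'."""

    def __init__(self):
        self.warm = set()  # would need journaling to survive reverts

    def call_cost(self, address: str) -> int:
        if address in self.warm:
            return WARM_CALL_COST
        self.warm.add(address)
        return COLD_CALL_COST

meter = CallMeter()
# Pay full freight once, then the discount on every repeat.
assert [meter.call_cost("0xSafeMathLib") for _ in range(3)] == [700, 40, 40]
```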
implementation up to a library of that gas so the first one kind of is like net gas filtering for S-Tor and actually there are two VIPs in that direction at least right not for the S-Tor but for the calls themselves there are already two out there and the first one is actually by Jack and me regarding calling self this was an issue in a library and the other one is well actually this one was discussed and all called probably over a year ago and there the exact same idea came out that maybe it should be a generic net gas filtering for calls within an execution but it hasn't progressed anywhere and aren't you guys a bit tiny bit afraid that it might fall into the same issue as the net gas filtering for S-Tor? Yes I think Martin made a very good point earlier today about how the gas student is a broken way to achieve or a goal and that maybe we should be looking at ways to replace that so we're not so fragile to deal with those issues every time we change the gas process I don't know if you ought to go into detail on that I think we are just going on old directions to me, I wish you there just give that 4 and 1 second but to tie back to that if you get a cheaper call the next time you call something that you've already called that's like a more generic version of an EVE that Alex had for a cheaper call to self and I took that EVE and I did an analysis I don't know if we can get it on the screen there but the analysis looked at if you had a somewhat malicious swap which repeatedly calls itself as many times as it can first so that means it calls itself recursively to a limit where it can't do that any longer and when it runs out of gas there it will do it again so it will be like a call tree down a call tree down what am I looking for github.comholerman slash and then go that EVE or maybe this one where I'm not sure go EVE remove all the slashes anyway that happens the last one this is one of the analysis this is Seaton search for more buy five more down buy five press down better have github learning music is alright so yeah the contract will basically there is one contract which calls another contract the first contract just calls the second one all the time and what the second one does is it calls itself in the case of reduced costs to something that has already been called this will fall in the same general framework and this is how it basically looks so first you have a person call tree to a certain depth and this exceeds around 10 million gas and in total there are 56 iterations where it goes to a depth of 344 and the total of 13,000 costs can be made on 10 million gas and if the X6E was implemented where the reduced will be to 14,000 there will be more than 405 iterations and the maximum depth to 503 and what you can see was that on gas without any state or anything only the pure gas run time the execution time jumped up to 1.18 seconds I'm not sure if it must be 4,000 but before it was 4,000 and then it became 171,000 costs in total so that's pretty steep difference but I mean you could adjust the gas price right so so the number 40 is definitely so 64 64 that's what stops the recursion from reaching 1,024 but when that recursion then unfolds all those 164s are returned and we can go one level down and do the recursion again as will be a shorter recursion because what less to start with so there's no way to use an explicit call to do the recursion again as will be shorter recursion so there's no way to use an explicit call to does well up here but just to be clear the proposed 14 or 
But just to be clear, the proposed 40, or whatever the gas cost ends up as, was just a starting number which we should benchmark.

Still, the general point holds: if you reprice calls that you make more often, you're going to get more calls. But there's an inherent overhead even when you call something that's already been called; the question is how bad it is.

It's fundamental, right. But it's super expensive compared to a JUMP, and then you get into the incentives: whether people actually write modular contracts, or one huge contract where all the internal calls are just jumps. You were discussing before that the repricing of CALL would affect that. You should be writing modular contracts, which are easier to understand and probably safer in general, but if you're extreme about saving gas, you would just write one huge contract, do internal calls to the functions, and jump between them. So the pricing sets the incentive between modular contracts and a single contract.

But that's inherent; JUMP is inherently cheaper for the EVM to execute, right?

Yeah, of course. I'm just speaking from the perspective of the incentive for coders to write modular contracts.

Let me backtrack a bit. You weren't here at the beginning, and I know some people still had questions from the previous part, but we started off with one single question: why is the code size limit at 24K, and could we extend it? That led to a lot of different topics, and we discussed quite a few of them. One suggestion was that it makes sense to break up contracts to make them more modular, but that runs into the cost of CALL and all these overheads. We also explained that the code size limit is there because of analysis costs, as a DoS factor. And I know there was a question which was never answered: can we move the analysis to deploy time? Do we want to cover that briefly?

It's certainly possible to do the analysis at deploy time, and in that case the deployer is already paying gas proportional to the size of the contract, so in some sense they're paying for it; at least there's no discrepancy there. But it would be a change to all the EVM implementations, to be able to serialize that analysis and then efficiently load it up again at execution time.

Even if we do that, we would incentivize people to create bigger contracts, at least if we raise the limit.

I don't know about incentivize; you'd make it practical, or possible. There's still definitely some O(n) work when loading a contract, but it's a lot lower when you don't have to step over all the bytecode.

But let's assume we make this easier and raise the code size limit without doing anything about the cost of CALL. Then we'd just be making monolithic contracts more attractive. The alternative is to make modular contracts more attractive, which is what we've been discussing, or, for instance, make the cost of CALL depend on the size of the contract being called.

Would it be possible to have, in my contract, the ability to indicate that a call could be inlined? So I can still write a modular library, but I'm not paying the price
of a full call, because at load time there's a hint there saying: this is going to be called frequently, so eventually you're going to load this external contract anyway.

Exactly. I want to write a modular library, but I don't want to pay this cost, so inline this call.

That seems more like a compiler optimization.

The difference is that when I deploy my contract, the code is not inlined into the contract. It's an indication to the EVM at load time, so I don't pay the cost of loading that contract on every call.

Then the address has to be static. And you're making the EVM pay the cost, so why wouldn't you be charged for it?

I'd pay some one-time overhead cost to indicate the inlining, but I'm not paying that cost every time I make the call.

I think that works out to net gas metering again, and that's probably the smaller change.

If I can back up just one second, I'm curious: you showed times there of about 1.2 seconds for a block full of these recursive calls. What's our target? What's the worst possible case at the moment for a bad contract to execute? Is 1.1 really good or bad?

That's all very dependent on the hardware you're running on, and in this case I just ran it off my laptop. I expect to be able to run through 10 million gas in 200 or 300 milliseconds, so in that context, which I should have explained, of course, about one second is not good.

But is that the worst case? On that particular hardware, can you write a contract that takes longer to execute?

That's a very difficult question to answer, because it depends on the state size. I mean, you can't say no, you can't.

What I'm saying is, if there are worse ones, then we shouldn't really be blocking on this one, because an attacker could just do the other thing, which would be worse for us.

Yeah, so if we have other worst cases, fixing something is fine as long as it's not worse than those. That's the point, more or less: if there's something you can already do today that is actually worse than this one, if you can already do more damage, then why is this one a problem?

At the very least, and I'm not saying I agree with this, it would be introducing a new DoS vector. If that's the case, and we already have worse ones, we should be getting rid of those, but we wouldn't be making things worse.

But once we get rid of those, then suddenly we fall back to this one. So I'm just asking how far away this is from a reasonable position.

Well, I think we can go in that general direction, but we can't pretend that a CALL is equivalent to a JUMP, because it isn't, and I don't think it's similar to SSTORE either, as someone said.

Is this with the proposed gas cost of 40, is that right?
So it would be, what, more like 200? Yeah, that's configurable.

And then I think you actually have two different things here; there's the precompile angle too, but I won't go into the precompiles. Anyway, this EIP is only for a call to self, which was motivated by Vyper. Initially, for every single function defined in Vyper, it would do an external call, to make those functions fully isolated and pure. This changed, I believe last year or this year, and Vyper is also using jumps for internal calls now, because obviously calls are not cheap and, well, we want people to use the language. So the motivation of this proposal was to reduce the cost of calls to self, so that Vyper could keep using this safer method of pure functions, and Solidity could start using it too.

Exactly. I'm not a big fan of that, because it requires you to serialize all your call arguments every time and deserialize them on the other side, so even if calls to self were cheap, you'd probably be spending a lot of gas on parsing and generating call data. And you would still have to navigate the whole 63/64ths rule, which would cause issues: every time you make a call, you can only send along a maximum of 63/64ths of the gas that you have, which makes it practically impossible to reach a call recursion deeper than a couple hundred, something like that.

344, yeah.

So a contract world where everything is based on calls instead of jumps can be problematic.

On the note of serialization: we do have ABI encoding across different contracts, but if you're calling yourself, you may not need to use it; you can use your own format, which may match your memory layout. You do still need to flatten things a bit, though. If you've got a data structure that has pointers out to other things, you can't just send your memory over verbatim.

May I ask a question about execution environments in Ethereum 2? Would that be on topic?

So, we have a ton of questions on Slido now. Do we still want to explore this net gas metering for calls a bit more, or should we jump into some questions?
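A small sketch of the serialization overhead just discussed: external-call arguments get flattened into call data as a 4-byte function selector plus one 32-byte word per argument, which internal jumps avoid entirely. The selector below is the real transfer(address,uint256) one; the argument values are toys.

```python
def abi_encode_call(selector: bytes, *args: int) -> bytes:
    assert len(selector) == 4
    # Each argument is left-padded to a full 32-byte word.
    return selector + b"".join(a.to_bytes(32, "big") for a in args)

data = abi_encode_call(bytes.fromhex("a9059cbb"),  # transfer(address,uint256)
                       0xDEAD_BEEF,                # toy address value
                       1000)
assert len(data) == 4 + 2 * 32  # 68 bytes just to pass two words

# Every one of those bytes is memory the callee must copy and decode,
# which is the serialize/deserialize cost mentioned above; an internal
# jump can instead leave arguments on the stack or in place in memory.
```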
I just have a question regarding that, which relates to what Martin was saying about the 63/64ths. This is a fairly special case; could we consider making a new opcode for it, like a CALLSELF opcode?

Yeah, I was just thinking that the 63/64ths rule, I guess, needn't apply there. Given that the code is known to be the same, it becomes more trivial than other kinds of calls. What's becoming apparent is that the existing set of opcodes for call types was not enough.

Okay, but that's focusing just on the call to self, and if we start introducing an opcode for that, we've really made it a very specific case. Earlier on we also said this could be generalized beyond call-to-self, to discounting calls to contracts which were already called in the given execution, and that's the one which would be useful if you want more modular contracts with libraries. But I think this analysis probably applies to that as well, right?

So do you see any issues with the more general case?

The question is that you need a good analysis of how much you can force the implementation to hold: how many contracts can you call, how many times, and how much does the client have to remember? Would that be a problem? Because the implementation can't just rely on caching; the aforementioned attack will simply overrun your caches. Intuitively it seems to me that the worst case would be similar to what I was describing.

An interesting case there: a contract uses a bunch of libraries, you keep calling them, you track them as already loaded, and now you call out to a brand new contract and want to track what happens there too. Can you still apply the old discounts, or does the warm set need to be relative to the current address, so you potentially pay more? Should that be the case, or should you be able to reuse them as well, because they've already been loaded?

I really like the idea, but I'm really worried that it's going to fall into the same problems as net gas metering.

You're talking specifically about the reentrancy issue?

Net gas metering has the reentrancy issue, but apart from reentrancy it was just quite complex. There were so many versions of net gas metering, and it takes forever; even in the last hard fork, coming up in December with net gas metering in it, it seems hard to get people to write down a clear specification and test cases for it.

I wrote the very first version of net gas metering, and in retrospect I wish I'd been less focused on trying to get it to exactly reflect what the actual costs are, and had instead proposed a simpler scheme that captures some of the efficiencies, rather than a more complex one that tries to reflect all of them. Maybe that's the approach that needs to be taken here. I'm not actually up to date on the latest variant of the proposal; I think the latest one has an extra rule just to get around the stipend issue.

I have a question, for Martin I guess. I think there was an article or paper some time ago showing that a really high percentage of deployed contracts are actually identical, and I guess most of these are probably SafeMath and similar contracts. What would this whole thing
that we're talking about, the repricing of calls, do for the clients? It seems aimed at dapp developers, not really at making things easier for the clients. What would be the difference for the client? Right now you have all these identical contracts, and of course I don't assume that the client stores all of them separately, but if people start writing more modular contracts and just delegating calls, then you're going to have fewer copies deployed. What would look different in the client in that case, or would it be basically the same?

Contract code is already stored in a key-value store, a content-addressed store, so it's only stored once; I don't think it would change much there. If I remember right, the most deployed contract on Ethereum, the one with the most copies, is GasToken, followed by a couple of proxy contracts, mostly ones exchanges deploy to receive deposits, followed I think by one other, and then there's a deed contract. And of course we shouldn't be complaining about these ones in particular, because they actually pay no less than the full cost: they pay every time as if the contract stored was a brand new one, while not actually using any more storage.

So the clients currently store each contract only once, but that's not something we should rely on or assume, because it might change over time.

No, I was thinking primarily: right now they're deduplicated, and if they were un-deduplicated, that makes it possible to have another representation which is more amenable to things like leaf syncing, or accessing everything pertaining to a specific account in one disk read. There can be representations that are better in that way. Presently you can't change it, because the sync protocol requires fetching contracts by hash.

The current fast-sync protocol, yes.

Just as a data point: by my math, currently storing all the contracts, deduplicated, accounts for some amount of megabytes; compared to the 250 for state, that's not enormous, right? And it's actually impossible to delete things from a deduplicated store unless we start doing reference counting, which arguably should have been built in from the start. How do the clients think about that?
I don't think it affects the end result. It probably means reference counting, or something more complex, I guess. But since most clients are deduplicating now, contract storage is effectively a lot cheaper than regular storage, and I don't think it's illegitimate to want to store rarely-changing data there. If people start doing that, though, we are going to run into issues: we'd need to start garbage-collecting contracts, and we'd need reference counting and so on.

On garbage-collecting contracts: with the repricing coming up for SLOAD, there's this other discovery that if you have a lot of linear data, it is cheaper to store it as a contract's code and use EXTCODECOPY, or even to call it and have it return some of its own bytecode. That's for the case where it's just pure data and you EXTCODECOPY it.

Much like GasToken, this is something for someone to exploit, or use. Unlike GasToken, I don't think it's bad; I think it just reveals an area where nobody's bothered to optimize, because previously it hasn't been enough of an issue to spend time on. I think the cost of contract code storage roughly reflects the actual cost in the current clients: it's cheaper because it genuinely consumes fewer resources than this massive hash table that regular storage is.

But this also opens up the question: why is the storage value, not the key, the value, limited to 32 bytes? You could have more linear data there.

Storage should absolutely have been a page-table-type setup with large pages, so it would cost a lot to load the first value in a given page and very little to load subsequent ones. The trade-off point on modern machines is about 128K: that's where you're spending half your time waiting for a fetch to return and half the time receiving the data, and that would be a much more sensible page size. You could probably retrofit it without changing the opcodes: you just make SLOAD a lot more expensive for the first fetch on a page and cheaper for the others. But we've got hundreds of thousands of contracts deployed that rely on being able to use contract storage as a massive hash table, and they would all become nine, ten times more expensive.

Can you explain a bit more what you mean by the page table method? How would that work at the EVM level?

Effectively, instead of treating contract storage as a big 32-byte-to-32-byte map, you would treat it as a series of pages. When you do a fetch, you treat the last few bits of the key as the position within the page, and the first bits as which page it is. The first time you fetch a given page, you charge a lot of gas for it, maybe two or three or five times what SLOAD currently costs, but then on subsequent fetches, when that page is already in memory, you charge only something comparable to a memory-load cost, plus potentially something for storing it back. The opcodes could stay the same, still just SLOAD and SSTORE; it's just treating the key semantically differently.
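A hedged sketch of the page-table pricing Nick outlines; the page size, the cold and warm costs, and the key split are illustrative numbers, not a proposal.

```python
PAGE_BITS = 12                 # e.g. 4096 storage slots per page
COLD_PAGE_COST = 4000          # first touch of a page: pay for the fetch
WARM_SLOT_COST = 100           # any slot on an already-loaded page

class PagedStorageMeter:
    def __init__(self):
        self.loaded_pages = set()

    def sload_cost(self, key: int) -> int:
        page = key >> PAGE_BITS    # high bits pick the page...
        if page in self.loaded_pages:
            return WARM_SLOT_COST  # ...low bits index within it, already hot
        self.loaded_pages.add(page)
        return COLD_PAGE_COST

meter = PagedStorageMeter()
# Sequential slots land on the same page: one cold fetch, then cheap reads.
assert [meter.sload_cost(k) for k in (0, 1, 2)] == [4000, 100, 100]
# A distant key hits a different page and pays the cold cost again.
assert meter.sload_cost(1 << 20) == 4000
```

This is why Nick notes that languages would have to keep related data in contiguous key ranges: the discount only exists if accesses cluster on pages, which a hash-table layout deliberately destroys.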
So it would still load 32 bytes at a time?

It would still return you 32 bytes. It would load 128K and then give you the 32 bytes you asked for, and as long as you stay within the same page, you would only be charged heavily for the first fetch.

But that would radically change how languages need to store data: instead of using the entire storage as one big hash table, you'd use things like red-black trees and so on, and store things in contiguous ranges.

A similar idea was explored to some extent by Alexey of Turbo-Geth, at least at the idea level. We discussed something similar, though definitely not the same: instead of SLOAD with pages, it would be memory-mapping the entire storage. I think that would have a similar effect; I don't know whether it would be a better or worse API, but it's a sensible approach. There's one trade-off here, though: it pushes all the code for doing hash tables into the language.

Well, it definitely shouldn't be hash tables, because hash tables are amortized: every time you need to expand the table, you have to do O(n) work, which means that every 10,000 inserts or so, one transaction suddenly needs more gas than the block limit allows. If you're going to do this, you have to use something with worst-case O(log n) operations, like a red-black tree. And it doesn't even need to be done in the language; you could do it in a library or a transpiler-type thing. This is what standard libraries do everywhere else, the C++ standard library, the Java standard library, so it's certainly possible.

Coming from a developer experience point of view: a lot of contracts really rely on a mapping; that's the core data structure in use.

They can still have that; it's just that the language has to do more work behind the scenes.

Yeah, I don't think it's an issue that the language would have to do that. It would just be an initial investment: all of these languages would have to write something, perhaps people would complain, but somebody would optimize it. Even then, as you mentioned, either it would be in the bytecode, or you would have to generalize it into some kind of library or transpiler, and you end up with the same issues we discussed before. It seems that with all of these changes, you want to change one thing and suddenly there's a bunch of other parameters you have to think about. Gas prices, sure, but while we're at it, I'll just say that memory should also have been a page table.

Exactly.

As it is, memory starts at the beginning and you pay to expand it linearly; we could write much more natural compilers if we could have a heap at one end and a stack at the other, and so on.

Yeah, this came up because we were discussing it a few weeks ago: Daniel asked why memory is not paged, and wondered whether this had been discussed before. Maybe to give some background: this was a discussion on the Solidity dev channel, which is open to anyone to join. It's mostly about the compiler, but these kinds of interesting questions come up. It originated from memory management, memory management in the contracts as written by the compiler: you can only ever expand the memory, you cannot do anything else, and in any given function you may want to use some temporary memory. The issue was that we wanted to reduce memory usage, because memory is expensive, and you'd like to throw away the temporary memory, but it's
kind of a challenge to do in a generic way. You have your starting memory, already used to a certain extent; you get into a function and want to use some temporary memory; but at some point you may also want to store data which isn't temporary, and it becomes a challenge. If you have page tables, then you can do that.

It occurs to me just this very moment that we could actually change that without big impacts on existing contracts: if, with page tables, we charged the same gas cost for the total amount of memory you're using, rather than for the highest byte touched, then existing contracts would continue to operate the way they do, but new contracts could take advantage of the fact that there's memory way up high they could write to.

The proposal Daniel came up with during this discussion, and we're talking about memory page tables now, which you had an opinion on, was to use some high bits of the memory offset to indicate the page. What's your opinion on that?

About the high bits? Well, just on having page tables in memory; you had concerns about the pricing, right? Should I pull up the discussion?

Actually, I don't remember that, but in general: I think there are a lot of features that would be nice to have, but I'm not sure it's practical to actually introduce them, unless you find a way to do it that will not affect existing contracts.

I think I have a way to do that.

Yeah, I understand. But on the high bits: I don't like using high bits within the 256-bit values, because when you actually implement this, right now you can just truncate an offset to 32 bits for memory access, and then I would have to actually consider the upper part of the value. Maybe it's not such a big deal. What I usually do when there's a proposal is try to make a prototype implementation, and then I can actually comment on how it would affect my implementation. But my first choice would be, even if we do some bit masking to differentiate addresses or anything else from each other, to use bits within the 64-bit range.

At least it's an interesting one; sorry, I just haven't considered it before.

Maybe we should cover some of the questions from Slido, because there are a lot of them there.

We're still talking about memory, and all these issues are interrelated, right? It all started from contract size.

I think this next question isn't related to that anymore; that first one came from the earlier discussion.
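For reference, the memory pricing being contrasted with pages here is the Yellow Paper's formula: cost grows linearly plus quadratically in the highest word ever touched, and only ever ratchets up, which is why temporary scratch memory can't be handed back within a call. A small worked sketch:

```python
def memory_cost(words: int) -> int:
    # Yellow Paper memory cost: 3 gas per 32-byte word plus a quadratic term.
    return 3 * words + words ** 2 // 512

def expansion_cost(old_words: int, new_words: int) -> int:
    # You pay the difference whenever an access raises the high-water mark;
    # the mark never goes back down within the call.
    return max(0, memory_cost(new_words) - memory_cost(old_words))

print(expansion_cost(0, 32))      # first 1 KB (32 words) is cheap: 98 gas
print(expansion_cost(0, 32_768))  # expanding to 1 MB costs about 2.2M gas
```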
So, one of the questions here, which still relates to the earlier discussion on stateless contracts: why are zero bytes in the transaction data cheaper than non-zero bytes?

They're not, as of Istanbul; they're priced equally, aren't they?

No. Non-zero bytes went down to 16 from 68, but that's only for transaction data.

What other types of data are we talking about?

That's the only one; it's not general memory, just transactions. Some context around this: it has historically been the case that for the data that goes into a transaction, a zero byte costs 4 gas and a non-zero byte costs 68. For calls that happen within the EVM, from one contract to another, there is no such distinction; what you pay for there is memory expansion, if any memory expansion happens, so you can pass along a megabyte of data without it actually costing anything in gas, if you have already expanded the memory to that megabyte. With Istanbul we're lowering this cost of 68, which concerns only the outermost transaction, to 16. The initial reasoning for differentiating zero and non-zero bytes is that zeros are more compressible; as far as I understand that was the reasoning, and it was assumed a compression algorithm of some sort would be adopted, but it wasn't really.

Snappy was, right? It was introduced for the chain data maybe one and a half years ago or something.

I don't think a zero byte is more compressible than any other single value; runs of the same byte are what's compressible. I guess the assumption was that zeros are what actually occur, otherwise there would have to be a more intricate scheme.

Also, when you have padded data, it's a bunch of zeros, and you probably have that a lot.

Yeah, but I think ABI encoding didn't exist yet when that decision was made.
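A worked example of the transaction-data pricing just described (4 gas per zero byte; 68 per non-zero byte before Istanbul, 16 after EIP-2028), using an ABI-style payload whose padding is mostly zeros:

```python
def calldata_gas(data: bytes, nonzero_cost: int) -> int:
    return sum(4 if b == 0 else nonzero_cost for b in data)

# A 4-byte selector plus a small value padded to 32 bytes: 6 non-zero
# bytes, 30 zero bytes, so the zero-byte discount dominates.
payload = bytes.fromhex("a9059cbb") + (1000).to_bytes(32, "big")
print(calldata_gas(payload, 68))  # pre-Istanbul:  528 gas
print(calldata_gas(payload, 16))  # post-Istanbul: 216 gas
```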
But then there's also the social aspect of repricing: we actually do screw things up for people when we do it, so it needs to be done carefully. Oh, to be clear, I didn't mean we should actually use that to reprice at every hard fork, just that we'd have these checks; I'm asking the question from the slide. I'm also of this opinion: unless we need it, I don't think it's worth doing, and for the computational opcodes we have a big margin of safety at this point.

So on one side, coming up to Istanbul, probably mostly triggered by your repricing EIP, a lot of people were concerned that it's going to break their contracts or make them more expensive to run. There seems to be a really big clash between the people who write the clients and try to ensure that the prices reflect what is actually happening, and the people who use these opcodes. How can this ever be resolved or get any closer? Should people just accept it? Well, I don't think my versioning proposal would have solved anything in the EIP-1884 case, because you can't really opt in to the more expensive opcodes. True, but at least you protect the older contracts this way, and if you deploy after the new rules then you're responsible for whatever you're deploying. But if you change something that affects behavior, it does get complicated too, when a new-version contract is calling a contract that was deployed under the old version; there's no easy answer there. And it's difficult: if you have an old-style deployer contract which can deploy any contract you throw at it, then in the new world, will it deploy with the old rules or the new ones? That's a good point. It also depends: why would anybody opt in to higher gas prices when they've already got the lower ones? Probably for really cheap computation: you assume that a new version you can opt into would have some things that are cheaper and some that are more expensive, and people would opt in because some are cheaper, right? And the things that are cheaper can accomplish the same thing as the things that are more expensive in the old version; for example, instead of bringing data in from SLOAD you can read it from calldata. But I think they meant that it's not opt-in, it's mandatory, it just doesn't apply to existing contracts.

Yeah, so there were a couple of different variants of these versioning proposals. One of them would let you select the version you're deploying, and another said you could only deploy the new version, not the old one. The open questions were: within an already-deployed contract, if you want to create another contract, can you define the version there, or should it take the version of the contract you're in? And then, what happens when you call contracts of different versions; what rules apply there? I think that was figured out; I mean, we selected one of these variants, we just decided not to actually enable it. Though I don't think it was fully resolved. What was probably resolved is this: the current set of deployed contracts is version 1; if you deploy something new, it's only going to be version 2; and if you CREATE something from a contract, it gets the version you are in. But the question of how you deal with calling another version wasn't decided at all.
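A tiny sketch of those resolution rules as just summarized — all names hypothetical, and heavily hedged, since the cross-version call rule was, as said, never settled:

```go
// Hedged sketch of the version-resolution rules summarized above:
// pre-fork contracts are version 1, new deployments can only be
// version 2, and CREATE inherits the creator's version.
package versioning

type Version int

const (
	V1 Version = 1 // everything deployed before the fork
	V2 Version = 2 // the only version deployable after the fork
)

// versionForDeploy: top-level deployments always get the new rules.
func versionForDeploy() Version { return V2 }

// versionForCreate: a contract that CREATEs a child passes on its own version.
func versionForCreate(creator Version) Version { return creator }

// versionForCall: which rules run when caller invokes callee. The panel
// notes this was never decided; executing the callee under its own
// version, as below, is the variant described as closest to shipping.
func versionForCall(caller, callee Version) Version { return callee }
```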
In that implementation, when you call an old-version contract you just execute it under the old rules; that makes sense, and it was close to being implemented in the end. I'm not a big fan of versioning in general. What scares me about versioning is that we say we have versioning, then there's more versioning; this one is kind of unique for now, but if there were a new version at every hard fork, then all of a sudden we'd have five versions that can run concurrently, and that's just way too complex. Better: one new version, and that's the one we continually update, so nobody can assume it's constant. If you're on the new version, you know: get rid of the 2300 gas stipend and all that kind of stuff. If it's all in the one new version, then we only ever have two versions. So you're saying whatever is there today would be locked in in the state, and from then on everybody agrees that gas prices are going to change, so it doesn't matter. But then they could end up basically writing a contract under the current new rules and then having to reprogram it, because they opted in to the new new rules. Yeah; ideally, I don't know whether people can actually write things so that gas cost changes won't break them. There was this other aspect to it, that there might be two different kinds of gas changes. One kind is where you want to make some features better and compensate against some other things; the other kind is fixing something against a DoS vector, something pressing, which you cannot just do in version 2 and leave version 1 alone, because version 1 can already be exploited. And people said that those kinds of changes you would still apply to the old version. That's kind of confusing.

I have a question. For each opcode: is it possible that there are opcodes whose gas prices can change by orders of magnitude in the future? Because, in regards to this proposal where a smart contract developer could opt in to a gas system where prices do change: I think it would be a code smell if the code you write today is already close to hitting the block gas limit in any single transaction, and your transactions should be much smaller than that in general. So if the changes are marginal, I think that would be fine for most developers. But is it possible that certain opcodes could have their gas prices changed by orders of magnitude, up or down? I think it's unlikely. Actually, it has happened: SLOAD was 50 and then raised to 200, which is not quite an order of magnitude; CALL was 40 and was raised to 700; and SLOAD is now going up to 800 from 200, which is still an order of magnitude from the first value. And if you saw the benchmarks this morning, you would see there's room for order-of-magnitude price changes. Can I add a bit more? I just did some research recently, looking at execution times from block 0 to 8 million for the full set of instructions, and you can see that all the size-dependent instructions have been getting slower, so I'd expect that their gas prices will also increase over time. But we can also go down; lowering a gas cost is actually the safer direction. Because if assumptions were made and everything else gets cheaper while a single opcode stays where it is, lowering it might be the better way.
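For reference, the repricings just cited, written out as constants (EIP-150 is the 2016 Tangerine Whistle repricing; EIP-1884 is the Istanbul one under discussion):

```go
// The repricings cited above. Values per the discussion.
package gasschedule

const (
	SloadFrontier = 50  // original cost
	SloadEIP150   = 200 // 2016 repricing (Tangerine Whistle)
	SloadEIP1884  = 800 // Istanbul repricing: 16x the original
	CallFrontier  = 40
	CallEIP150    = 700 // ~17x the original
)
```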
Did that ever happen before? Yes, that was me doing it; we have done it, and it translated into exactly this type of issue: certain bugs suddenly became much cheaper to trigger. I'm mostly talking about the computational opcodes here, not state access. If you want to lower the computational opcodes, there are two options, because we don't actually have a lot of room: they're priced at two or three, values like that. Option one: multiply everything by 10. But then I think there would be a lot of issues for existing contracts, with all this gas being passed around to different calls and so on. Option two: we lower them to something fractional, and I guess that's the way to implement it, in case there is ever a need for fractional gas. There was a discussion about that; the problem is that a lot of the arithmetic opcodes are priced at one or two or three, or is it five; anyway, some are priced at one, and you can't really go below one. But you can try. I mean, I would actually consider pricing everything at one; it's much easier than going fractional. But then someone is going to write a contract where the thing priced at one actually takes more computation than that. You only pay one for the computation, sure. So you think there's no way that, if everything is priced at one, someone is going to find some division or something that's exploitable? It's quite expensive. Not if you're sure it actually costs something like 0.01 even in the worst case; if you make sure all the arithmetic is genuinely safe at one, it would save us a lot of work. That's true; I'm just not sure the impact is big enough to justify it, unless it gets substantially cheaper.

Maybe a quick question to you all, because we've gone quite deep into this: do you have any questions, or should we call it a day? I have a question; it's more about storage layout. What I'm trying to do is inspect the storage of a contract easily, and right now I think you need to go to the initial deployment, look at all the transactions, get all the traces of the contract, and only then can you do any computation. Is there any work towards inspecting this kind of thing easily? There is, actually; I think it's not a working group yet, but there have been a bunch of proposals, at least on the Solidity side, that the compiler would output a map of the storage layout and you would use that in debugging tools. It has been proposed, nothing has been implemented yet, but it might happen. There are often debug methods that dump the storage of a contract address, but I can't remember if there are RPC calls for that. Yes, there is, I think, and it's probably only on the Geth client: debug_storageRangeAt allows you to iterate over the storage of a contract. I suspect you won't get it from Infura, though. Probably not; I don't think they expose it; you cannot get traces from Infura. It's not traces, though, it's just the storage slots. But it's used by Mythril, which uses Infura somehow. It was implemented in Geth for the use of Remix; I'm not sure about Mythril. You pass it the address and the slot that you want, and it gives it back to you. I think the problem is that you first need to figure out which slots you want. Or do you want to dump out everything?
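To make the "figure out which slots" problem concrete: Solidity's documented layout puts a mapping entry m[k], for a mapping declared at slot p, at keccak256(pad32(k) ++ pad32(p)), and the chain only stores the hash, which is why the pre-image dump comes up below. A minimal sketch:

```go
// Deriving the storage slot of a Solidity mapping entry, per the
// compiler's documented layout. The EVM uses legacy Keccak-256.
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// mappingSlot returns the storage slot of m[key] for a mapping at slot p.
func mappingSlot(key, p [32]byte) [32]byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(key[:])
	h.Write(p[:])
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var key, p [32]byte
	key[31] = 0x01 // e.g. balances[address(0x...01)], left-padded
	p[31] = 0x00   // mapping declared at slot 0
	slot := mappingSlot(key, p)
	// Query this slot with eth_getStorageAt, or iterate everything with
	// Geth's debug_storageRangeAt and match against computed slots.
	fmt.Printf("%x\n", slot)
}
```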
I want to know the layout of any contract, well, any Solidity contract, and visualize it; the input is the contract address, and I want to get... But you want the values that are there at the moment? Yes, what is in the contract. debug_storageRangeAt is the RPC call for that. The whole problem, though, is that you want to know which indices: for, say, a balances mapping, you would need to know all the locations that the Solidity mapping entries end up hashing to. So you need both: the static storage layout, which Remix has, and then what's actually stored; one to know which slot you're interested in, the other to know what's there. But even if you have the method by which Solidity derives the key, you also need access to history, to know which slots have been written. No, you iterate the storage trie, and I think that's what debug_storageRangeAt does. But you can't recover the original key that way, because you hashed the key of the mapping. There's a pre-image dump as well: Geth saves the pre-images of the keys. I remember all this because it was something that was bugging me, and now it's a feature; I helped implement that feature. With the storage mapping, Remix has this: it maps the storage keys to the Solidity variable names. So, if I understand it, you actually need three things: this storage map, what do you call it, right, the storage layout; then debug_storageRangeAt; and also the pre-images of the hashed keys. If you combine all of these, after a year you might get somewhere. I mean, you can get the storage out, but you won't know the keys. You could mine them; you could figure out which ones you're interested in, or go through the traces. I think Remix automatically does that, but just for one single transaction, the current one; if you want the whole history, the whole storage layout, I don't think you can do it, probably because there's no way to get the list of every transaction that ever touched the contract. What do you mean, you can't know the keys? If you only look at the storage trie and iterate it, what you get is not the key but the hash of the key.

Another question: I'm using ABIEncoderV2 for my contract, and it's still experimental, right? What problems does it have, and when can we expect it to be stable? Do you want to take that one? Yeah, so the main reason ABIEncoderV2 hasn't been properly released as non-experimental is that we're still fuzzing it a lot, and the fuzzer wasn't fully prepared to cover all the complicated cases for ABIEncoderV2, because there are a lot of odd scenarios. That's the main reason, basically. For the next release, 0.6.0, it's still going to be experimental, but at least it's not going to issue the warning anymore; and maybe in 0.7.0 it stops being experimental and becomes the default, hopefully. There was just a talk about that fuzzing effort. Maybe at 1.0? At 1.0!

If there are any other questions, I would suggest... I can't cover all the submitted questions, but are you interested in those, or do you want a coffee break and then a talk after? Since Nick Johnson is in here, I feel like this is a safe place to talk about the gas stipend. That's actually a very good idea. I would really say coffee... I mean, is it a good idea to, by default, have contracts be forced to execute unknown code when all they want to do is a simple value transfer?
From my perspective, I truly, horribly hate it, because whenever I implement CALL in an EVM I do it wrong, and I've done it like ten times already; every time I do it, all the checks and all the conditions have to be precise, in a strict order, and the stipend cuts across all of these. So yeah, from inside the EVM I would be super happy if we could remove it. Where is that strict order defined; is it in the spec? I don't know, maybe the yellow paper, but I never read it. It doesn't specify when the stipend is added relative to when you actually check the gas; there are a lot of conditions in CALL, and I think there were cases where you could evaluate them in a different order and the test suite would still give the same successful result. I think it's pinned down, at least by the test suite, because the tests were generated from one of the implementations, and some of the others actually had to adjust to match. Maybe that's the case; I'm not sure.

Anyway, some context around this. There's one group of people who think there's a problem in that you cannot send money to a contract and prevent it from executing code: "I want to send it zero gas and some cash." Then there's another group of people who think that any time you receive Ether it's really awesome that you can execute code, and the callee should always be given full gas to do whatever it wants; right now it's given 2300 if transfer is used. And there's a kind of middle ground, where people say the callee should be allowed to execute code, but should not be able to make state modifications, and that's roughly where this 2300 comes from: the current gas stipend was sufficient to do a log operation or two, and maybe an SLOAD, but not an actual state modification. So right now this 2300 is a kind of ugly hack to allow a bit of execution, but not too much, and no state modification. From there the discussion goes different ways: one group maybe wants a special call that sends money and executes nothing, a "static transfer"; the middle group might want a static call with value that permits logs, so the receiver can consume an unlimited amount of gas but really only do arithmetic and log operations, not modify state.

I think before we discuss which one has issues or is good: what is the reason people want code executed when they receive Ether? There are two reasons, and the stipend is there to guarantee it's possible, right? One of the main reasons is that people want to reject incoming transfers if the contract is not supposed to hold money, because if you don't do that, and it receives money that you have no way to retrieve, the money is just lost. The second thing people want is to emit a log, to be able to catch the fact that the contract received funds. Though it's still possible to force money on people using SELFDESTRUCT, but in that case it's very much explicit, right?
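A simplified sketch of where the stipend enters a CALL implementation, assuming the usual EIP-150 63/64 retention rule; real implementations interleave many more checks, in a strict order, which is exactly the complaint above:

```go
// Where the 2300 stipend enters CALL gas accounting. Simplified: the
// caller's 9000 value-transfer surcharge and other checks are omitted.
package evm

const callStipend = 2300 // added to the callee's gas when value > 0

// calleeGas computes the gas forwarded to the callee of a CALL.
func calleeGas(requested, available, value uint64) uint64 {
	// EIP-150: the caller always retains at least 1/64 of its gas.
	maxForward := available - available/64
	gas := requested
	if gas > maxForward {
		gas = maxForward
	}
	if value > 0 {
		// The stipend is a free top-up for the callee; it is not
		// deducted from the caller, which is part of what makes the
		// bookkeeping easy to get wrong.
		gas += callStipend
	}
	return gas
}
```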
So you wanted to protect against that, but you still have the option of forcing Ether in through that method; so then it's clearly up to the calling contract to choose whether the contract it's calling should have the ability to execute code or not. And then it seems like this gas stipend doesn't really serve a purpose, because it doesn't enforce the invariant that the contract you're calling into can always execute code; there is this case where it cannot. Yeah, that's the case you have to work around. That's true, but nobody said the EVM is consistent. It is a safety feature, though, and you could probably analyze the chain history and see how many times it has kept people from burning their Ether by attempting to send it to a contract that wasn't supposed to receive it. I wonder how many people, how much Ether, it has saved; probably quite a lot. I'm wondering whether saving people from doing stupid things is something the tooling can manage, since people will always write stupid EVM code. Well, it actually allows compilers to do exactly that, to reject transfers; otherwise they couldn't prevent it in some of the cases that are actually preventable now. There's the payable keyword: is it rejecting incoming transfers by default? Yes, by default; if you don't mark the fallback function payable, plain Ether transfers are rejected, in any environment today.

I've heard, in the past, some people propose — I don't think this would happen at this point — that native Ether be re-implemented as an ERC-20, so that we wouldn't have this one asset that's a special case. From an EVM developer's perspective, is there any reason that Ether has to be a special native thing, or is it theoretically possible that it could be an ERC-20 contract without a crazy amount of change to Ethereum?
I don't know if I can argue with that; we could also just implement the ERC-20 on top, in the EVM. Yeah, wrapped Ether. I don't think that's a real workaround, to be honest, but as for the original question: I would say there are two very different points in this discussion. One is the ERC-20 side. What people do today is wrapped Ether, and you can actually write your contract in a way that it only uses the ERC-20 interface and rejects all raw Ether transfers; then it's a UX issue that you require your users to use wrapped Ether. On top of that it has an extra cost, and why would you want to pay that cost if Ether is a built-in token? For that, there was one proposal that wrapped Ether could be a kind of standard system contract, so that it would be practically free to deal with, and users wouldn't need to manually transfer their Ether into this wrapped Ether contract; through this extra contract you could handle your Ether as if it were an ERC-20, and it wouldn't affect anything else. I think that might be a good workaround. The other end is the beginning of your question: why is Ether this native thing; can't you make it more flexible and use other things? There was a proposal a long while ago for account abstraction; it was just the first step in a process toward getting rid of Ether as this native thing, so that in theory you could use other things to pay for execution. But that is a really long process, and it stopped at the first step and never got in. So, to be honest, I don't think it will happen. A lot of us focus on what we're building at the base layer, but it's actually more of a UX issue: if you don't want your users to deal with wrapped Ether or whatever, you can build tools around it so they don't have to, and with things like meta-transactions and all sorts of tricks you can get quite far even now.

Martin, what do you think about this idea, this special contract which gives an ERC-20 kind of interface, but where you wouldn't need to transfer your money into it, because it could just handle it? I don't know; I'd need to think about it some more to say anything. I don't like this idea of having special addresses; from a technical point of view you would need a map of these maintained, and they would differ across hard forks, right, like the precompiles. At least the precompiles have a more or less known address range. But I understand; or would you deploy it regularly and just mark that one as subsidized somehow, or is that too technical a point to go into? This one would probably have to be a precompile, because it has to have control over the users' balances. Oh, okay. And speaking of more than precompiles: there was going to be a BLOCKHASH contract, right, but it never happened. I want to bring back a very, very old question: why precompiles and not opcodes? Do you want to finish this one first? No, I was done, we don't have to.

Why is it that, inside the EVM, I can't look back more than 256 block hashes? Because clients wouldn't be able to execute otherwise; I think that's the main reason. It's a pretty arbitrary number, though. Yeah; light clients, as far as I know, download old headers but don't verify every one, maybe one in every 1024 or something.
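For concreteness, the visibility rule being discussed — BLOCKHASH only answers for the 256 most recent blocks and yields zero otherwise — looks roughly like this, with getHash standing in for the client's header lookup:

```go
// The BLOCKHASH accessibility rule: only the previous 256 blocks are
// visible, and never the current or a future block.
package evm

type Hash [32]byte

func blockhash(current, requested uint64, getHash func(uint64) Hash) Hash {
	if requested >= current || current-requested > 256 {
		return Hash{} // out of range: the opcode returns zero
	}
	return getHash(requested)
}
```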
Just an arbitrary number; yeah, it could be a lot better. So, your question on why opcodes and why precompiles: I think even the initial set of precompiles, like identity and SHA-256, along with Keccak, started out as opcode candidates, and then all of those except Keccak were moved out to this new concept of precompiles; Keccak, I believe, was left as an opcode because it would be used more often, but maybe I'm wrong there. Aside from the address-based issues: how many precompiles do we have by now, around nine? The benefit of having them as opcodes is that we wouldn't have to write the additional dispatch code. For me, exhausting the opcode space is the real concern; that might actually be a big deal, because having to extend it would be a pain. Yeah, that's one of the serious problems with it. But it also raises the question of why we have precompiles at all, of where we draw that line. On this point I'm not very concerned; so far I believe that's not the issue at all, we still have opcode space to use, and there's a way to work around it if we run out, a bit of an ugly one. From the API side, I mean the VM-implementation side, it's actually easy for me to implement precompiles in the VM, because I don't have to do it at all: it's just another call, so the client has to handle it somehow. I just inform the client that there's a call to handle, and the client figures out that it's actually a precompile; so the invocation of the precompile is on the client side, not the VM side, at least in the API I'm using. Otherwise I would actually need to have a Keccak implementation shipped inside the EVM. Yeah, but that boundary is kind of arbitrary. It is, and the advantage of not having to do it again is that you can ship a single EVM and integrate it into multiple clients; with EVMC it's just a single module, and we know it works. I actually prefer smaller modules. Yeah, it's a fixed boundary; we can't cross it that much. It was just a question; I wasn't 100% sure how we decide whether something should be a precompile call or an opcode. About the opcode space: there's this interesting old EIP from Gavin where, instead of having, you know, 16 different types of calls, you have one, and what kind of call you want becomes a parameter. That would reduce the opcode-space usage, but it runs into the same problem of figuring out what kind of call it could be. So yeah, I'm kind of okay with adding more call opcodes; the alternative is a weird way of passing parameters.

While we're on evmone, there was a question whether there are any other optimizations planned apart from what you explained this morning. Yeah, a number of them, but they're kind of regular engineering jobs, so I don't know if I can even list them. Preallocating the stack, that's the obvious one; I think those are the two things worth mentioning, and the rest is playing with the code: micro-benchmark something, change it, and see if it's faster or not. You can actually read the changelog, because I try to list them as they appear, so I can't mention anything more right now.
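Going back to the client-side precompile handling described a moment ago, here's a rough sketch of that split, with illustrative names rather than the actual EVMC API: the VM reports an outgoing call, and the host intercepts the precompile address range natively:

```go
// Client-side precompile dispatch: the VM treats a call to a precompile
// address like any other call; the host intercepts it.
package host

type precompile func(input []byte) []byte

// Addresses 0x01..0x09 are native contracts (ecrecover, SHA-256,
// RIPEMD-160, identity, and the newer ones); only identity shown here.
var precompiles = map[byte]precompile{
	0x04: func(input []byte) []byte { // identity: copy input to output
		out := make([]byte, len(input))
		copy(out, input)
		return out
	},
}

// handleCall is what the host does when the VM reports an outgoing call.
func handleCall(addr [20]byte, input []byte, runEVM func() []byte) []byte {
	if isPrecompileAddr(addr) {
		if p, ok := precompiles[addr[19]]; ok {
			return p(input) // native implementation, no EVM involved
		}
	}
	return runEVM() // ordinary contract: execute its bytecode
}

// isPrecompileAddr: all leading bytes zero, small non-zero last byte.
func isPrecompileAddr(addr [20]byte) bool {
	for _, b := range addr[:19] {
		if b != 0 {
			return false
		}
	}
	return addr[19] != 0
}
```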
So, Martin, since you're here — I'm not sure if you've seen the table in that evmone talk — what is go-ethereum using from those techniques? As far as I understand, the main cool thing is that he does a bit of look-ahead, walking the paths and pre-calculating the gas; that's pretty cool, yeah: you can collapse the gas accounting for a whole block of instructions. You can also pre-calculate the stack requirements; I thought that was the most important part, but actually it's not. And I think there's good news on the integer side: we actually changed the 256-bit integer implementation in Aleth from Boost to intx, and it's like three times faster now. That's not a controversial change, right; it's not so much an optimization as straightforward work, and similar work could happen in Geth. Martin, are you writing a new bignum library? I did, yes. It's definitely faster than the big.Int library, but in practice it doesn't make a difference. It didn't make a difference? I mean, even if we make the arithmetic ten times faster, if the contribution of the arithmetic opcodes to the actual execution time is only a small portion, it doesn't matter that that part is ten times faster. But you were benchmarking it against full mainnet blocks, right? If you ran a benchmark of just the computational opcodes, you would see a big speedup. Yes.

And if you applied those changes, and hypothetically we could reprice, as we've discussed a few times already, could we get rid of some of the precompiles that way? Do we even want to get rid of the precompiles? That's the thing. Well, I guess we can't actually get rid of the existing ones, but we could stop adding new ones. I thought not having precompiles was quite a big thing; we've discussed it at length, though not in this discussion. Ideally you want people to be able to write a new crypto function and not have to rely on the client devs to integrate it for you. They're against the Ethereum... against what? Precompiles: they go against the Ethereum ethos. Yes; each one is a bailout for somebody's hash function. It's unfortunately like that, but then I also sometimes feel like we're building almost a CPU, and CPUs have special-purpose instructions like that too, so you can make the comparison there as well; so I have a split view on it. The main problem, or one of the problems, with precompiles is that people just want to have these features, and now they have to wait; they cannot do many of these things on their own today. They have to wait for somebody to propose it, be lucky enough to have it accepted, and then get it into a hard fork at some point, while being unsure whether it stays in until two weeks before the hard fork happens.

Casey, you made some benchmarks, I think for some existing precompiles and maybe some proposed ones. Are there any which could just be done in plain EVM1, without having them as precompiles? Well, yeah, I would definitely like to do them in EVM1 if that worked. The thing is, we have to be careful about how we're metering those new opcodes, and we don't have the studies done to show that we can safely meter them and still get the full speedup, so it's going to depend on that. But if we're optimistic, then yes, I think the benchmarks show it would be sufficient to replace the precompiles.
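Returning to the look-ahead technique mentioned above (as in evmone's analysis pass), a stripped-down sketch: split the bytecode into basic blocks, pre-sum the static gas, and charge it once per block. The opcode table here is a tiny illustrative subset, not the full gas schedule:

```go
// Basic-block gas pre-computation, evmone-style, heavily simplified:
// JUMPDEST begins a new block (jumps and a few other opcodes would also
// end one in a full implementation).
package analysis

const (
	opADD      = 0x01
	opJUMPDEST = 0x5b
	opPUSH1    = 0x60
)

// staticCost is a stand-in for the full gas table; unknown opcodes read 0.
var staticCost = map[byte]uint64{opADD: 3, opPUSH1: 3, opJUMPDEST: 1}

type block struct {
	start int    // offset of the block's first instruction
	gas   uint64 // summed static cost, charged once on entry
}

func analyze(code []byte) []block {
	blocks := []block{{start: 0}}
	cur := &blocks[len(blocks)-1]
	for pc := 0; pc < len(code); pc++ {
		op := code[pc]
		if op == opJUMPDEST && pc != 0 {
			blocks = append(blocks, block{start: pc})
			cur = &blocks[len(blocks)-1]
		}
		cur.gas += staticCost[op]
		if op >= 0x60 && op <= 0x7f { // PUSH1..PUSH32 carry immediates
			pc += int(op-0x60) + 1
		}
	}
	return blocks
}
```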
I don't know if I'm stating it correctly, but what you meant to ask was: did you check where the gas went — whether it went to the computational opcodes or into the memory copying and the handling of the data? No, we haven't done any profiling like that; we just measured the runtime on EVM1 and said, okay, this is way less than 100 milliseconds, so it should be fine. But that assumes we can price opcodes optimistically, and there are worst-case runtimes somewhere in EVM1 where we would have to be more conservative with how we price them.

We're at the time limit, actually; do you have a short question? Yeah, a very short one, just to stay with the benchmarking issue: how do you control for all the other things that affect performance, things like machine power profiles, Windows performance settings, CPU core parking, other programs running at the same time, and everything else related to dynamic power? I can explain how I do it, but it's not actually a short answer. Well, I'm always running on the same machine, and I simply restart the machine so that only the one benchmarking process is running. You can also isolate some cores; at least in Linux you can set aside some cores, run the process only on those, and ask the kernel to move all other tasks to the remaining ones. There's LLVM documentation, a page that describes many of these tricks. You can disable turbo frequencies; sometimes, on a laptop, that pins you to the base frequency and makes things like four times slower, but that's still fine, because we're mostly interested in differences. And there are many other things. But I also check the variance, the standard deviation, which the benchmark tool reports, and even running with a browser open it stays maybe around 1%; with all these tricks — and some of them don't actually matter, in my experience — it goes down further. I have trouble interpreting the standard deviation myself, but it's on the order of one in a thousand relative to the values, so I think it's pretty stable. Benchmarking is difficult; there's a lot going on. I think this is on the boundary of micro-benchmarking, because memory use and memory allocations come into play, so there are other effects. But in general, if you have EVM code that runs long enough, I think it's a good measurement. And that's the easy case; the hard case is when you start dealing with IO operations. Yeah, I'm not benchmarking that at all. So, thank you all for the discussion; let's finish this session.
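A minimal sketch of the measurement side of the setup described above, assuming Go's testing harness; the OS-level tricks (core isolation with Linux isolcpus/taskset, disabling turbo via intel_pstate) happen outside the code, and interpret() is a hypothetical stand-in for the EVM under test:

```go
// Micro-benchmark skeleton: keep the measured region tight, report
// allocations (they matter at this scale), and compare runs by their
// standard deviation as discussed above.
package evm_test

import "testing"

func interpret(code []byte) { /* hypothetical: run the bytecode */ }

func BenchmarkInterpret(b *testing.B) {
	code := make([]byte, 1024) // long enough to dominate setup noise
	b.ReportAllocs()           // allocations are part of the story here
	b.ResetTimer()             // exclude setup from the measurement
	for i := 0; i < b.N; i++ {
		interpret(code)
	}
}
```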