Hello Christian. Hi, hello. Hello, this is Dimitri, can you hear me? Yes, hi Dimitri, how are you doing? Fine, I just finished my first dapp development. Oh, nice, what did you make? A contract for making polls and voting on them. Oh, nice. Very nice. Good. Just so we understand: this call is being recorded and then you post it afterwards on Reddit, but it's not live, or is it also live? It is not live. I record this, then I cut the recording, take notes, and post the recording on YouTube and the notes on GitHub simultaneously. Oh, so you have to re-upload it to YouTube? I thought it went up automatically. No, I'm using Hangouts, not — what's the other one? There's another Google one that's like Hangouts but called Hangouts Live; that's the one where you can broadcast to people. This is the one where I just record myself. Okay, I never understood the difference. I get confused too. Yeah, there needs to be a guide. Say that one more time — do you have to edit the calls later? Not generally. The only reason would be if a Skype message pops up and I have to edit the video, or, like last time, someone's audio was messing up and made the volume spike an incredible amount, so I muted that part so it wouldn't hurt everyone's ears. Is there a reason we use normal Hangouts and not Hangouts Live, just so I understand? Sure. The reason for normal Hangouts is that when I start a Hangout from the Ethereum Foundation ethereum.org account, it allows up to 25 people, and that's the only free solution for video conferencing I've found. I don't think you can do that on Hangouts Live, but I'll look into it, because it would be interesting if we could hold these live with the community. Okay, just one is enough. Yeah, I think that'd be cool. Hey, Casey, can you hear us? Yep, I hear you. Awesome.
Okay, I think we have pretty much everyone who's going to be here, so let's get started. Let me pull up the notes. Cool. Agenda item number one is the resolution from last week, when we were talking about static call, pure call, the revert opcode, dynamic return data, and all that. The interested parties have been in a Skype conversation for the last three weeks, and it looks like — I'm going to say this and have Christian, Alex (axic), and a few other people correct me — I think we're going with return data copy/size and static call, but not pure call. Is that correct? Yeah, that's how I understand it too. I'm not sure the exact details of all this were decided yet. Yeah, and I think we also have — oh, revert, yep, that's right, revert is going in. This is more to decide which EIPs are going to be said to be going into Metropolis, and which EIPs can be put into the status of superseded or withdrawn. Casey and I will both be changing those statuses later. Okay, are there any other comments on that right now, or is that one of those things that can be fleshed out a little more in the EIPs? I think a lot of that's already there. A couple of things people had questions about — I think Nick brought up something with call data and Metropolis, but I don't know if that's related to it. So I don't fully understand where the acceptance comes from. I've seen a message from Vitalik on the Skype channel that he wants this, but does that mean it has full developer acceptance from the other teams? Yeah, on the Skype chat I basically asked for a summary — the message you're referring to is the one Vitalik sent within the last ten hours — and the reason Vitalik was giving his opinion was that he wasn't going to be here this morning; he's doing a talk or something, and neither is Jeff.
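For context on the opcodes being discussed: here is a toy sketch, purely illustrative and not any client's actual code, of the behavioural difference the REVERT opcode (EIP 140) introduces compared with an old-style throw. The function name and tuple shape are my own invention; the point is that a throw consumes all remaining gas, while REVERT rolls back state changes but refunds unused gas and can hand a reason buffer back to the caller.

```python
# Toy model (not a real EVM) contrasting REVERT (EIP 140) with an
# old-style invalid-opcode throw. Names are illustrative only.

def execute_call(gas_limit, gas_used_before_halt, halt_with_revert, reason=b""):
    """Return (success, gas_remaining_for_caller, return_data)."""
    if halt_with_revert:
        # REVERT: state is rolled back, but unused gas is refunded
        # and a reason buffer can be returned to the caller.
        return (False, gas_limit - gas_used_before_halt, reason)
    # Throw: state is rolled back AND all gas is consumed.
    return (False, 0, b"")

ok, gas_left, data = execute_call(100_000, 30_000, True, b"bad input")
# With REVERT the caller gets 70_000 gas back plus the reason bytes;
# with a throw it would get nothing back.
```

The practical upshot, as I understand the motivation for the EIP, is that contracts can signal "this call failed, here is why" without punishing the caller with total gas loss.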
What I got from that was that he was just giving a summary. It's not a for-sure thing, but everybody who had opinions on this last time has been talking it out over the past three weeks, and I saw Vitalik, Nick, Jeff, Martin, and Alex Beregszaszi agree to it. It's totally still up for debate if anyone has other opinions. Okay, thank you. That's a good question, though — how things get accepted — because someone asked that recently. My view, because it's still kind of organic, is that if the people who care about a change come to a decision on it, and they're stakeholders in either the change itself or in the expertise required to decide on it, then whenever they come to agreement, that's when it happens. If there's a stalemate, there can be signaling methods, like a carbon vote or other mechanisms we're developing, that can indicate a way of breaking it if it's a big community decision. So it's case by case, but this is such a super nerdy, low-level thing that I figured these six people probably know what they're talking about. All right, cool, so that's pretty much settled. So I still have a question, or something we'd like to get opinions about. At some point it was suggested to clear the return data on the first memory resize. Is there anyone who could give a reason for that, or are there opinions on whether this should be done? Who raised this idea? Gavin raised it. Was it in the Skype channel? It's a comment on the pull request, I think. I can't answer right now, but I can do some investigation and try to respond in the comments; it's hard for me to tell at the moment whether that has any benefits. I think Nick's joining and can maybe shed some light on that, but let's do this.
Let's go to item number two and then come back to item number one, because item number two is short and just an update; then we'll return to item one with some more detail on Christian's question. So, Peter, if you could run through a quick update on Clique and Rinkeby. Sure thing. Last time we agreed that Clique seems to be a good enough approach for a simple proof-of-authority chain that we can implement cross-client. Since then — one of the problems with deploying Rinkeby, deploying it basically as a public testnet, is that it's kind of a huge hassle. The stats page, bootnodes, whatever — those are kind of easy to do. But one thing we've relied on the community for until now is to provide, for example, a good faucet, and without that a proof-of-authority testnet is worth zero, unless somebody just sits at their computer manually sending out ether. So since then we've been working on a small tool to help deploy these private networks, including Rinkeby, and we also implemented a faucet based on a light client — a GitHub-authenticated faucet, so anyone with a GitHub account can request funds. With that, I think the Clique and Rinkeby work from the Go Ethereum team's perspective is more or less ready. I've mostly been writing up some tests, and I found some corner cases.
I added documentation to the EIP. Just for reference, we're planning to release the next version of Go Ethereum next week, and we figured that saying "hey, this version contains Rinkeby and this is the new testnet" might be pushing it a bit, since the whole thing never went through a proper field test. So we're aiming to release kind of an Olympic version of Rinkeby: we provide a guide on how anyone can connect to it, and they can play with the faucet, play with the signers, play with whatever, and if things go according to plan, then we can say, okay, this remains Rinkeby; whereas if something blows up, at least we'll have a more or less built-in disclaimer that we knew, since we have the only implementation, there might be some unforeseen issues. It would be really nice if sooner or later we could also add some other implementations to validate that our code is actually correct, but that's the status. Great, thanks Peter. Are there any other Ethereum clients on the call who have an update on implementing Clique, or plans to implement it? So cpp-ethereum is currently focusing on Metropolis. Yeah, I think there's not an expectation for cpp-ethereum to do it, because you're doing Metropolis. Also — does cpp-ethereum already work with other, non-proof-of-work instances of Ethereum? We have a kind of pluggable consensus mechanism, but it's in use for testing Solidity, not in a networked way. Okay, sounds good. So I guess it's not hard to implement; it's just priorities. No problem.
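For readers unfamiliar with Clique, the core of the proof-of-authority scheme being deployed on Rinkeby can be pictured roughly as round-robin signing by a set of authorised accounts. The sketch below is my own simplification, not the EIP's actual specification — the real proposal also covers signer voting, epoch checkpoints, in-turn versus out-of-turn block difficulty, and signing delays, none of which are modelled here.

```python
# Rough sketch of proof-of-authority signer rotation in the spirit of
# Clique. Real Clique adds voting, epochs, and wiggle delays.

def in_turn_signer(block_number, signers):
    """The signer expected to seal this block, round-robin."""
    return signers[block_number % len(signers)]

def may_sign(signer, recent_signers, signers):
    """A signer may only seal one of any len(signers)//2 + 1
    consecutive blocks, so a single authority can't spam the chain."""
    limit = len(signers) // 2 + 1
    recent = recent_signers[-(limit - 1):] if limit > 1 else []
    return signer in signers and signer not in recent

signers = ["A", "B", "C"]
# Block 7 is in-turn for signers[7 % 3], i.e. "B".
```

The recent-signer restriction is what makes a faucet necessary in the first place: test ether only enters circulation through accounts the authorities fund, so without a faucet the network is unusable for outsiders.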
Okay, awesome. Peter, the next step for the whole Rinkeby and Clique EIP is to move it from an issue to a PR. Now that the edge cases have been identified and it's already pretty much written, it's just literally moving it from the issues section to the PR section and having some of the editors check it over one more time before we approve it, since there hasn't really been anyone from the community saying it shouldn't go through, and the stakeholders have said it's good to go. Okay, great — Nick's joined. Welcome, Nick. Christian, if you don't mind repeating your question, because I think Nick had mentioned it in the Skype chat. So, Gavin suggested clearing the return data buffer when memory is resized. What is the exact reason for that, and do we really need it? I think the idea is to minimise the maximum memory consumption; otherwise, if you do a call which allocates a return data buffer and then do something that expands memory, the total memory consumption can be higher than it would have been under the current regime. This would also allow an implementation to have contiguous memory for the entire call stack, rather than having to allocate a separate chunk for each contract. Personally, I think it would be fine as long as it's defined to be erased after expansion, so you can copy return data into newly expanded memory. Yeah, if it's possible I would really like to avoid that, because it complicates quite a few things. Which things does it complicate? In an optimiser, for example, you have to take care not to change the order of anything that enlarges memory relative to anything that reads from the return data. Or it basically forces you to copy the data out as the first thing you do after the call. Personally, I think it's okay to do that.
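To make the concern about this rule concrete, here is a toy model — purely illustrative, not any client's code or the EIP's normative wording — of the behaviour being debated for EIP 211: if the return data buffer is cleared on the first memory expansion after a call, then a compiler's only safe pattern is to copy the return data out immediately after the call, before touching memory at all.

```python
# Toy model of the debated EIP 211 rule: the return data buffer is
# cleared whenever memory grows. Illustrative only.

class ToyFrame:
    def __init__(self):
        self.memory = bytearray()
        self.returndata = b""

    def call(self, result_bytes):
        self.returndata = bytes(result_bytes)  # filled by the callee

    def mstore(self, offset, value32):
        end = offset + 32
        if end > len(self.memory):            # memory expansion...
            self.memory.extend(b"\x00" * (end - len(self.memory)))
            self.returndata = b""             # ...erases the buffer
        self.memory[offset:end] = value32

    def returndatacopy(self):
        return self.returndata

f = ToyFrame()
f.call(b"\x2a" * 32)
f.mstore(64, b"\x00" * 32)   # expanding memory first loses the data
# f.returndatacopy() is now empty: the ordering constraint Christian
# describes for the optimiser.
```

This is exactly why reordering passes would have to treat any memory-enlarging instruction as a barrier relative to return-data reads.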
I think that's a good pattern, because you already effectively have a barrier on where you can reorganise things — you can't reorganise past another call. It also complicates static analysis, because determining whether memory actually increases is quite a complicated condition. Isn't it only ever expanded by MSTORE and a couple of copy operations? Anything that accesses memory might expand it, but you basically have to keep track of the current size of memory, and you might not always know that. You're talking from the perspective of a compiler, right? No, a static analyser. Okay. Oh, real quick — which EIP is this? Because Arkady just said he added comments to the PR, and I can't recall the number offhand. Okay, it looks like Martin posted it in the chat here. And I think Arkady's joining too, so he might be able to give some perspective from the Parity side on what the idea was. Oh, okay, it looks like it's 211. Yeah, it looks like basically what Nick said — I'm just reading through it right now — it increases the peak allocation beyond what can be done today. So — sorry, I'm really tired — is it actually 1400 UTC, or after that time? Okay, perfect. Because someone was asking whether it had already started, and I think Martin Becze is joining as well. Google tells me it's 1324 UTC right now. Oh, it's 1324? Google says so. Yep, sorry — UTC it's 1324; London time it's 1424. Have I been putting the wrong time zone down? Let me look. Yeah, we're in GMT+1 at the moment. Okay, that could be causing confusion; I'll have to change that. Sorry, everybody. Yeah, I'm looking at my notes now and it says 1400 UTC, and that's 9 a.m. my time, not 8 a.m. Ah, okay, so this call started an hour early. That would explain why people weren't showing up. Well, anyway, mostly everyone is here now. So, until Arkady gets in here, let me go back to the agenda.
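The peak-allocation point raised against EIP 211 can be shown with back-of-envelope arithmetic. The numbers below are made up for illustration; the point is that if the return data buffer persists across a memory expansion, an implementation must hold both allocations simultaneously, whereas copying the data out first (or clearing the buffer on expansion) caps the peak lower.

```python
# Back-of-envelope peak-allocation comparison for EIP 211.
# Sizes are hypothetical, chosen only to illustrate the ordering effect.

def peak_allocation(snapshots):
    """snapshots: list of (frame_memory_bytes, returndata_bytes)."""
    return max(mem + ret for mem, ret in snapshots)

RET = 1024   # hypothetical return data buffer size
MEM = 4096   # hypothetical later memory expansion

# Copy return data out right after the call, then expand memory
# (the buffer can be dropped once copied):
copy_first = [(0, RET), (RET, RET), (MEM, 0)]

# Expand memory first while the buffer is still live:
expand_first = [(0, RET), (MEM, RET), (MEM, 0)]

# copy_first peaks at max(2*RET, MEM); expand_first at MEM + RET.
```

With these example sizes the copy-first ordering peaks at 4096 bytes while the expand-first ordering peaks at 5120, which is the extra consumption Gavin's clear-on-resize suggestion is trying to avoid.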
Yeah, schedule calls by block number. Yeah, we could do that. Item number three is EIP 186. I haven't really looked into this, but it seemed to be getting some community hype: reduce the issuance of ether before proof of stake. Does anyone have any comments on this? There's a carbon vote for it going on. I don't really understand where this came from. I saw Vlad's Medium post about it and read it, and my take is that he posted it as just an idea, and then everyone started taking it really seriously. My opinion is that unless we have a very serious reason that the system is broken and we should reduce the issuance, we should not be playing with it — unless we also have a very clear algorithm for when to reduce it or when to increase it. Otherwise we risk turning a very uncontroversial update into a controversial hard fork, just because we're playing with what I think is economic policy, and I don't think we should be making economic policy on a whim. So that's my opinion. Does anyone here have any support for this, or was anyone thinking of taking it seriously? What's the feeling of the room? I don't particularly support it, but I'm curious how large the community support for it actually is. Yeah, I'm pulling up CarbonVote to see — not that CarbonVote in this case is a huge indicator. The carbon vote is 99% for it, with about a million ether behind it, but honestly I'm not sure how many people have taken that vote seriously. I know I didn't vote; I wasn't actually thinking people were taking it seriously. I think CarbonVote measures what some people actually understand, and it first appeared during the DAO vote, so that's sort of why people take it somewhat seriously.
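For a sense of scale on what an issuance cut of this sort means: a rough calculation, using the then-current 5 ETH block reward and a hypothetical cut to 4 ETH, a flat 14-second average block time, and ignoring uncle rewards entirely — so these are illustrative orders of magnitude, not precise figures.

```python
# Rough annual-issuance arithmetic for an EIP 186-style reward cut.
# Assumes a flat 14s block time and ignores uncle rewards.

SECONDS_PER_YEAR = 365 * 24 * 3600
BLOCK_TIME = 14  # seconds, assumed average

blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME   # ~2.25 million blocks
issuance_at_5 = 5 * blocks_per_year                # ETH minted per year
issuance_at_4 = 4 * blocks_per_year
# A one-ETH cut removes roughly 2.25 million ETH of issuance per year.
```

Note that the Ice Age changes this picture anyway: as block times lengthen, fewer blocks are mined per year, which is why the discussion below treats postponing the Ice Age and reducing the reward as two sides of the same issuance question.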
Yeah, I agree, Alex. I think the idea behind it was that as proof of stake gets closer, there are going to be reduced incentives for miners, so there's going to be slower responsiveness in the system if nothing is done to change the issuance rate — purely to help the miners keep an incentive to continue mining, which I don't see as a problem right this second, but it'd be a good idea to at least have a plan if it becomes one. Wait, reducing the miner reward? Sorry, not reducing the reward — reducing incentives for miners, "so as to facilitate the adaptation to the POS hard fork", it says. Let's see. So — forget everything I just said — I don't understand what 186 is about. My understanding was that the idea was to cut off the Ice Age, but reduce issuance roughly in proportion to what would have happened if the Ice Age were still in effect, because that's all the miners have been promised, effectively. And by "promised", do you mean within roadmaps people have put out, or something codified? Sorry, I don't understand the question. You said "compared to what the miners have been promised" — do you mean promised in the traditional sense, someone making them a promise, or promised as in that's how the system is set up and expected to run? In code. So if the fork doesn't happen, miner income will reduce anyway because of the Ice Age, and the suggestion is that if we're going to fork to postpone the Ice Age, we should include some reduction in the miner reward, which they would have had anyway. Oh, I see. Okay, any other opinions in the room? To be clear, that's not necessarily my opinion; I just think that's the idea behind it. Oh yeah. I generally think it's not a bad idea, as long as the reward is equal to or greater than what they would have got under the Ice Age, because at that point there's no
— they're not likely to oppose the fork, because they'll be worse off if they do. Well, if the switch from proof of work happens one year after Metropolis — say it happens in 2018, or a year and a half after — does it make such a big difference to them? I feel it wouldn't make such a big difference to the amount the miners would get, so that's why I'm saying we might be making this hard fork controversial for nothing. Yeah, and it might not even be a question of whether it should go in, but of whether it should go into Metropolis or be thought about for a different hard fork. Yeah. Am I wrong that no one in this room seems to have strong opinions for or against it? It feels like something that came from Vlad, with the community now sort of asking around, and it's not something anyone here really wants — or am I wrong? I would say the people very interested in it are from the side of the Ethereum developer community that deals with a lot of the economic policy and thought — the researchers — and a lot of the researchers are in Malta right now, so they're not on the call. Vitalik has chimed in a few times on this; I think his opinion was that this probably doesn't need to happen right now, but — don't quote me on this, I need to look it up again — last I read, it was something like: if there's enough support it's worth considering, but otherwise he didn't really see the need for it at the moment. So yeah, this isn't going to be decided on this call, but what I wanted to get out of it was any initial thoughts, so thanks Alex and Nick and everyone. The other thing is: if something like this were to go in, from a technical perspective would it be a really simple change? When I say simple, I don't mean simple to decide, but simple to change in code? I
think that changing the code is super simple, but I kind of agree with Alex that even though the sum of all incomes would be greater, it will be hard to defend that on social media and in places like that, because what people will complain about is the reward changing from 5 to 4, and that would probably be the news headline. So it depends whether you care about that, but from my perspective that's how it will go, and I agree it would turn a less controversial hard fork into a very controversial one. The other argument is that if you're going to postpone the Ice Age, that basically means we're giving people more money by postponing it. Yeah, I understand that, and we all know how it works, but that would be much harder to explain. So, in the grand scheme of things: is it better to implement this based on some possibilities we're not yet sure of, and then face the backlash of a more controversial hard fork and all the complications that come from putting controversial things into a hard fork — maybe even setting a precedent, since people would look at it as us changing these core economic parameters on a whim? Yeah, I agree, but moving the Ice Age is already changing core economics on a whim. That's a little different: the balance there is that if it's not changed, block times would eventually get to the point where they'd interfere with normal people's use of Ethereum, whereas something like this isn't going to change how transactions go through. The Ice Age was never meant to be taken into production, in the sense that the Ice Age has always been a deterrent designed to force a fork; nobody ever seriously intended the system to keep working during the Ice Age. The whole point of the Ice Age was always: by this time we will need to do a fork, otherwise the system will stop. We are
not seriously thinking that Ethereum is supposed to have a ten-minute block time in the future — that would be ridiculous. The Ice Age has always meant "we need to do a fork by this time", and the idea was that proof of stake would be ready by then; since it's not, the economic policy of the Ice Age shouldn't really count. My real fear is that we're taking something that isn't even something the community wants, and we're creating a discussion and giving it official status simply by asking people, "so, do you want this?" It creates this fake community support where I don't really see any — but maybe I'm wrong. I think it's a good thing that we talk about this; it shows we actually do listen to the community, because this obviously doesn't come from us, it comes from the community, so it's worth talking about. It's pretty obvious that the people on this call haven't been following it as closely as people in the community have. So I agree, Martin, that it's good to talk about. And Alex, I'd say that on this one in particular, the EIP has been there since December or earlier — well before the price increase, and well before there would, in my opinion, have been an incentive for something like this to come from a miner or someone similar. The Ice Age discussion was brought forward sooner in part because of the price increase, or as a side effect of it. So I think the intentions behind it are good, and it's good to talk about; and if it comes down to the clients not wanting to implement it, or us broadly deciding not to, that's fine. If there's a huge community push, we can come back and address it. I'll bring this up next meeting if anyone else has comments on it, particularly people who've commented on it before, like Yoichi or Vitalik. Hudson, are we changing the block number for the Ice Age in Metropolis? Does someone know the answer to that? I forgot whether
we had talked about that. So the idea was to add a special rule to the difficulty calculation so that, for example, blocks 4 million to whatever-million don't increase the difficulty — it's kind of a pause — so we would already change it in Metropolis. Well, the alternative is to roll out a second hard fork immediately around Metropolis, and I'm not sure it's a good idea to do two forks one after the other just for the Ice Age. Well, there's really no point: if the Ice Age hard fork is really just adding an extra rule to the difficulty calculation, I don't see why it would warrant its own fork and couldn't just be added to Metropolis. Yeah — at least from my notes from last meeting, or two meetings ago, I thought we said we were having just one hard fork for Metropolis, so this block number change would be included in it. Reducing the issuance is linked to the Ice Age: if you reduce the difficulty of the Ice Age, you effectively give miners more money, so it would be the same discussion as reducing the issuance. Oh yeah, no, I agree with you on that; I was responding to Peter's point about two hard forks. I think pausing the Ice Age as part of the single fork has been discussed before, so there shouldn't be complications, unless I'm wrong. But on your point, Pavel — yes, that would have to be in the Metropolis fork for it to take effect. And after the Ice Age is paused, it's just a temporary pause, right? The Ice Age is set to restart at some point after the fork — am I correct that we're just delaying the Ice Age? Yes, the Ice Age is supposed to be moved into the future, not removed entirely. Yeah, and my understanding was that as we get closer and we get updates from the research community about where they are on proof of stake — the last headline I saw was Vitalik at some meeting saying they're like 75% done,
but that was just a headline I saw, so no idea — then we'll know what it should be paused to, to give us some breathing room. One last thing about this: the first paragraph of the EIP throws around terms like "price supportive" and "increase investments", so I think it's very obvious from the EIP itself, and from the fact that we're discussing it now after a price increase, that this is more about price than technical reasons, and therefore we shouldn't even be considering it. That's my opinion. Oh — are you reading the original EIP or the modified EIP? The original one. I'm not sure you're seeing the one I linked. On the one I linked, they basically modified it with changes based on feedback, but if you scroll down, the original one from December says "original EIP proposal below", and it might have the same wording. Okay, so I'm reading the part before that — that's the modified one. The abstract says "the reduction of the issuance is very likely to be price supportive". Oh — oh, I see what you're reading. Okay, yeah, there definitely are a lot of non-technical words in there making more of an economic argument than a technical one. Yeah, so that's another reason not to support it. Well, to be fair, when you enter an EIP as an issue it doesn't need to be technical; it can just be the spirit of what change you want to happen. But it's definitely the case that something more technically written, rather than written in terms of economics, is going to be easier for us to evaluate. So I think at this point it's pretty clear there aren't going to be any changes happening right this second, and there are no decisions right now, which means nothing's going to happen unless there's a push from
some client devs or core devs or someone really wanting to make this change happen, because I haven't heard anyone on this call who's strongly in favor of it. Great. So — Arkady, Frankie, and Martin Becze, welcome. Christian had a question on EIP 211 about extending memory, and Arkady, I saw you posted a comment there. Sorry for putting down the wrong time; I got my time zones mixed up, so thanks everyone for being flexible with that. Arkady, could you expand on what you commented at the end of 211, to give your perspective on why this should be done? Well, the concern is that the EIP increases maximum memory consumption; it's basically explained in the comment. You have to keep the return data around from the call that you've made, and if you allocate memory — if you expand memory — before reading the return data, you get higher peak memory usage. Arkady, can we perhaps have a call after this one to see whether there are any other workarounds? Yeah, sure. It's not that much of a showstopper or deal breaker, just something to be concerned about. So having a separate call would be good, and also discussing it on the EIP. If anyone is interested in joining that, ping Christian and have him loop you in on whatever conversation happens. Please do include me. For what it's worth, I don't think this allows an attacker to allocate more memory for the same gas than previously, but it does mean that some ordinary cases will allocate more memory than they did previously. Okay, interesting. Cool, any other comments on that? All right, great. So we've gone through that, and we've gone through 186. The last item on the agenda — and then any other items people want to add after that — is Metropolis updates. So let's start with Parity. Arkady, if you could give us an update on where Parity is at implementing those — and then, also, you know,
actually, let's not do that; let's start with Dimitri. Dimitri, could you give us an overview of how the tests are going, and also some insight into how we can help, as far as sending you whatever data or information you need to make these tests better? To make the tests better, it would be really helpful if all of the EIPs that we already consider to be implemented for sure were marked or flagged with some label on GitHub, so I could apply that filter and see which EIPs are already considered valid, because right now I see lots of EIPs and many different versions of the same EIP, and yeah, that's kind of confusing. Yeah, I hear you on that. If you go to the readme page, we have something called the accepted-EIPs chart and EIPs under consideration, and that's going to be cleaned up after this call — the EIPs under consideration will be consolidated, or their status changed to show the latest data. Casey and I also talked last time about having a column for which EIPs are going into Metropolis versus EIPs that are accepted but are non-hard-fork changes. Would that be sufficient, or is there something more fine-grained you'd need on the EIPs repo? I think we should use a label to distinguish the EIPs that should be implemented as Metropolis changes. Okay, great. We can also add a label; that's really easy to do. You mean a GitHub label, right? Yeah, on the issues for the EIPs. I saw there is a Metropolis label and one closed issue with it; I don't know whether it was cancelled or implemented. No problem. We could also have a meta-EIP listing all the issues for the hard fork. Alex, you actually have an EIP about that, right? Yeah. Okay, that's a really good idea; I think we should do that. I'm going to put that in my notes. So Alex's idea — and let me know if I'm saying this wrong, Alex — is just having an EIP that lists
all of the stuff that's going into Metropolis, and then updating it as things change. Sounds good to me as well. As for the implementation: we already did the REVERT opcode, and the tests for that are already ready, and I saw that some clients have started to implement general state tests. It's optional for now, but if you want to stay up to date with the recent tests, you'll probably want to implement general state tests; as tests are completed, I will convert all of the general state tests into ordinary blockchain tests, like I already did for the REVERT opcode test. Then, I guess, Martin Swende will be running those blockchain tests on Hive, and every client that implements the RPC protocol for importing blocks will be tested through Hive — though not every test has been converted to that yet, just the most updated general state tests. Okay, great, that sounds awesome. And what is the best way to communicate with you, to ask you questions or send you tests or data? Gitter and Skype. Okay, great. Actually, I'll just send your info on the all-core-devs channel afterwards, or you can paste it in there yourself, whichever. That sounds good. Any other updates, Dimitri, on that end? Okay — then transaction tests: they're about halfway done. We have two different kinds of transaction tests: one will be implemented as state tests for the state transition on zero-signature transactions, and the other tests just check transaction fields, and those have already been updated. And one more update: we're now using a branch on the tests repository, and every particular change will be implemented in a separate branch; I'll try to post a link to this. For the Metropolis tests, I posted a Google document on the Skype tests channel — I'll try to find it right now and post it in our chat as well. In it I describe the test cases that are already implemented, and I want other people to review that document and
perhaps contribute some ideas, post comments, maybe some new cases that they have in mind. This would help us create better test coverage. Also in that document I put a link to the test repository branch, the test sources, and the compiled tests as well. Alright, awesome. That sounds really well organized, and it's going to be really helpful for doing all this. Great, I see you posted the link, so we'll get that distributed to the different clients. I'll try to make sure everyone gets it, and everyone in the all core devs channel should get it anyway. Anyone have questions for Dimitri, any of the client devs? Alright, cool, so let's just run through some of the clients real quick and get an update on where they're at. Arkady, Parity: any updates, issues, comments? We have implemented most of the EIPs that we consider settled, so seven out of ten, and we're just waiting on finalization of the return data size EIP. There are a couple of minor ones left, but otherwise we are good, I think. Alright, great. We're getting that cleared up real soon; we made a lot of those decisions earlier today for return data size and that kind of stuff. Except for some of the nuances or minor things within them, we at least know which EIPs are going in and which ones aren't. Then, let's see, C++, I guess that's Christian? Paweł, Andrei, can you report there, please? Oh, he might have walked away. Yeah, sorry. I think we are somewhere in the middle, I would say. I have not prepared exact data, but some of the EIPs are pretty much finished, some are merge-ready, and I think two or three are around the beginning. Okay, sounds great. And then Go, I guess that would be Peter. Yep. Basically, Jeff started working on the Metropolis EIPs. He wasn't feeling too good in the last few weeks, so as far as I know he did some work and he pushed
some PRs, but I don't know exactly where he's at. The rest of the team has mostly been preparing the next release, so we didn't work on the EIPs ourselves. Okay, actually, I just remembered I saw a message from him earlier that said they're pretty much all done on the geth end, except for, like Arkady mentioned, the ones that aren't necessarily finalized, where we don't officially know whether they're going in or not. Okay, cool. Finally, are there any other clients in here? Conrad, are you still working on pyethereum, or are you primarily with the Python group? Oh, Conrad, you're unmuted but we can't hear you; it sounds like you're really far away. He might be having microphone problems. Does anyone know which client Conrad works on generally? I'm just blanking right now. Okay, I'm looking: Python. Oh, okay, great. Currently mostly concerned with getting back to mainchain. Okay, sounds great. Cool, I think that covers all the clients. So, as far as overall Metropolis goes, and I just want to hear some opinions on this, I think by the next all core devs meeting we should pretty much give the final word on which EIPs are going in or not, just so that we can start hammering down on tests. Or maybe it's still too early to do that; what's everyone's opinion? Are we wrapping up what's going in or not by the next one? Yeah, because if we're shooting for something like the end of June, which has been talked about before, how many months away is that? Wow, that's basically three months away, a little less than that. So if we're going to do the tests and such, we should probably have things finalized by the next all core devs meeting, I'd say. I can't really enforce that, but that's my opinion, and at the next all core devs meeting I'll try to make sure we get that done. Let's see, any other comments on Metropolis stuff? Cool. Oh,
Christian, I think I just saw you rejoin. I was asking if there are any more comments on Metropolis stuff; otherwise that item's done. Yeah, sorry, I had some connectivity issues. No comments from me. And Christian, since I know you've worked on or talked about many of the EIPs, do you think we're pretty much at a point where, by the next all core devs meeting, we can say no more EIPs are going into Metropolis, so we can put a hard stop, finalize the specs behind them, and get tests done? I mean, the proposed EIPs for Metropolis kind of settled some weeks ago already, right? Pretty much, yeah; the last things were the static call and the return data size and all that stuff. I'm just double-checking. Then yes, I guess so. I mean, I'm not sure we can finalize the specs by then; that is something that will happen later. No, no, that won't be done by the next all core devs meeting; I mean more that no other EIPs are getting in, and I didn't think there were any others, I was just making sure I didn't miss anything. Yep, cool. Alright, that's the last official agenda item. I see Martin Becze is in here. Did you have any cool updates on ewasm or anything you wanted to talk about, or are you just lurking? Yeah, I'm just lurking right now; I'm just working on updating 2MGS TX and 2MGS block. Cool, cool. Let's see, any other comments? That's the last agenda item, so we can pretty much wrap it up. Just one quick comment. Since the last meeting we've been trying to figure out, on and off, how we could sort out the transaction propagation stuff, to basically allow propagating cheap transactions that may or may not ever be mined. A proposal that we had quite a long time ago came up again. One of the issues currently with transactions is
that their lifetime is infinite: if I create a transaction, it may be included now, or it may be included in ten years, and that makes it a bit harder to reason about. One of the issues, I'm not sure if you remember, maybe it was last year, was that the Bitcoin network was spammed, and I think there were transactions which lingered in the network for more than eight months. Propagating cheap transactions would probably be much saner if we could say that a transaction has a limited lifetime. I'm not sure whether anyone wants to write such an EIP, but if there's at least some openness from the client developers' perspective, we could put together an EIP stating that a transaction could specify, or maybe we could hard-limit, that if I create a transaction at, I don't know, block one million, it can only live for maybe a thousand blocks, some arbitrary number. Oh, Pavel, I think you're unmuted, unless you had a comment, sorry. I think it's an excellent idea, and I think we could probably make the hard limit a lot higher than a thousand, but the idea is good: transactions should have either a timestamp field or a block number field at which they expire, and we limit how far in the future that can be. The other added benefit is for users: for example, if I want to create a transaction that I want processed right now, because for me it's important that it's either processed right now or not at all, I could say the lifetime is maybe five blocks, and if it's not included within five blocks, okay, I don't care, don't include it. Otherwise, we've had this issue in the past where someone wrote a script, screwed up the script, and sent hundreds of transactions with the
incrementing nonces to send their funds somewhere, and then every time their account received funds, they were sent out again, because the transactions lingered in some queue in the network and eventually got executed. So it would also help solve these kinds of issues: if you create a transaction and it's not included within a limited amount of time, you don't get surprised later on. Yeah, one potential problem with this is that it would break offline signing. You need the nonce anyway, which is a similar constraint, but with a nonce you can in principle sign something on an air-gapped machine and then transport it to a node and transmit it from there, whereas if you have a hard limit, you've got a strict time constraint for doing that. There is an EIP by Conrad which I think is related here, and its primary target, he talked about it in chat actually, is the abstraction EIP. Is that referring to Serenity's abstraction EIP for the future? Oh, he gave up on it. Do you have a link to it, Conrad?
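The lifetime rule discussed above was left at the "hard limit or per-transaction field" stage. A minimal sketch of how an inclusion check could combine the two might look like this; the field names (`tx_creation_block`, `tx_lifetime`) and the `MAX_LIFETIME` cap are purely illustrative assumptions, not part of any drafted EIP.

```python
# Hypothetical model of the proposed transaction-lifetime rule.
# All names and the cap value are illustrative, not from any EIP.

MAX_LIFETIME = 1000  # blocks; the hard cap was an open parameter on the call


def is_includable(tx_creation_block: int, tx_lifetime: int,
                  current_block: int) -> bool:
    """A transaction may request any lifetime, but it is clamped to the
    network-wide cap; inclusion is only valid inside that window."""
    lifetime = min(tx_lifetime, MAX_LIFETIME)
    return tx_creation_block <= current_block <= tx_creation_block + lifetime


# The "process now or never" case: a five-block window.
assert is_includable(1_000_000, 5, 1_000_003)       # still in the window
assert not is_includable(1_000_000, 5, 1_000_006)   # expired, never mined
```

Under this sketch, a spammed transaction can linger in propagation queues for at most `MAX_LIFETIME` blocks before every node can safely drop it.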
I think we're talking about different things. I think it was Conrad who made an EIP mainly targeted at replay protection, and if I remember correctly, the idea there was that you specify a block hash, and the transaction can only be included if the block with that hash is an ancestor of the including block and it's not too old. I thought the idea was to put the actual transaction into the data field and then analyze it with some smart contract. Yes, but if we want it to be useful for DoS protection, then it needs to be part of consensus. Well, actually, that's not quite true; the other way we could handle this is to change the wire protocol, so that clients tell each other when they first saw a transaction. Well, yeah, but that gets messy really fast if somebody spams lots of transactions. Yes, and a client can effectively lie about when it saw one. Anyway, the question here is whether this would be of interest; we obviously don't have the details yet, so this is something just to ponder. I'm happy to write an EIP for it. Yeah, in fact, Nick, could you work with Conrad on that? It sounds like Conrad had some good initial thoughts, and he's super antsy, he really wants to talk, but his microphone doesn't work. If we have an EIP on that either before the next all core devs meeting or the one after, we could talk about it then; that'd be really cool. Sounds good. Great, it sounds like there is interest in this. Awesome. Are there any other comments of any kind before we end the call? Yes, there was one thing I've been discussing on Skype that I wanted to bring up, which is the possibility of, first, hopefully uncontentiously, making return data copy throw if you attempt to copy beyond the end of the return data, and second, more contentiously, changing call data copy to do the same. The motivation behind this is that currently
we pad them with zeros, but that is more of an "assume what they probably want" behavior, and my own view is that the EVM should hard-error when it's not sure what should be happening, rather than give a default result. An example this would combat is the recent exchange incident, where they were sending short transaction data that resulted in unexpected consequences. One question here: this only solves part of the problem. I'm not sure whether you are referring to the attack Golem described? Yeah, Golem exactly. So the only issue is that there, the problem was that the user provided some input and you read past that input. However, for example, if I write a contract that has five input variables and certain code paths only use the first four, then I might still get the same effect of shifted interpretation. Yes, it doesn't solve all cases, but I can't think of a legitimate reason for a contract to want to read past the end of the data, and therefore it seems to me that, on the precautionary principle, we ought to throw an error if it does. I think I saw in the Skype chat that you did cursory research, and the only place you found that does this is some Serpent code from Augur, is that right? That's right. I'm actually running it over every single transaction ever, but that's going to take a very long time because it only does a couple of blocks a second. So far, every transaction I've identified has been calling that one Augur contract. I didn't read Vitalik's explanation, but what did he say about it? The reason it does this is a certain compiler optimization: the goal is to read out all of the call data except for the function signature at the beginning, and it lazily does this by reading from 4 to call data length, whereas it should read from 4 to call data length
minus 4. It does this to skip a subtraction it sees as unnecessary. So, as far as I know, that's the only thing currently relying on the ability to read zeros. Okay, so it's an idiosyncrasy; it's not actually done on purpose. Yeah, it's an optimization that happens to matter for us. And if Augur's are the only contracts, then perhaps we can find some workaround for them, or, you know, it remains to be seen whether this happens much on the chain, and I'll get back to people on that. Sorry, but the fact that this never happened doesn't mean that there are no contracts that rely on this behavior, right? That's true, but I believe we can say with a fair degree of certainty that, outside Serpent, no Solidity contract relies on this behavior unless it uses inline assembly, and I find it difficult to believe that anyone is relying on this behavior except as an optimization around call data length. I mean, it does complicate the Ethereum virtual machine, and it also makes it less consistent, so I'm really not sure whether we should go into it. Consistent with what? With everything inside the Ethereum virtual machine: every data area extends to infinity as zeros. Yeah, this is our concern as well; it breaks a well-defined source data definition for us. So what data sources do we have? We have memory, which extends infinitely because it expands as requested, and then we have call data, and soon return data. Are there any others?
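The two semantics being argued over here can be made concrete with a small model. This is a discussion sketch, not any client's implementation: `read_zero_extended` mirrors the current rule that every data area notionally extends to infinity with zero bytes, while `read_strict` is the proposed hard-error variant for out-of-range reads.

```python
# Model of the two candidate semantics for CALLDATACOPY-style reads.
# Purely illustrative; not taken from any client codebase.

def read_zero_extended(data: bytes, offset: int, size: int) -> bytes:
    """Current EVM behavior: reads past the end silently return zeros."""
    chunk = data[offset:offset + size]
    return chunk + b"\x00" * (size - len(chunk))


def read_strict(data: bytes, offset: int, size: int) -> bytes:
    """Proposed behavior: reading past the end is a hard error
    (in the EVM this would be an exceptional halt consuming all gas)."""
    if offset + size > len(data):
        raise ValueError("out-of-range read")
    return data[offset:offset + size]


# A short-calldata example like the exchange incident: a 4-byte selector
# plus only one 32-byte argument, read as if there were two arguments.
calldata = bytes.fromhex("a9059cbb") + b"\x11" * 32
two_words = read_zero_extended(calldata, 4, 64)  # second word is all zeros
assert two_words[32:] == b"\x00" * 32
# read_strict(calldata, 4, 64) would raise ValueError instead.
```

The Serpent optimization described above corresponds to calling `read_zero_extended(calldata, 4, len(calldata))` instead of `read_zero_extended(calldata, 4, len(calldata) - 4)`; under the strict rule, the former would start failing.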
We also have code. Yes, and I would argue that code copy should also throw this error. I guess the thing is, memory, I would argue, is a different beast because it's writable, but in the case of byte-array input parameters, I don't think it's best to return zeros, because aside from a trivial compiler optimization, I don't think there's any legitimate reason to try to read past the end of the array; I think it almost always indicates an error. I would suggest this go into a separate EIP and not the return data one. Well, I think the idea of changing existing behavior needs to go into its own EIP, but the proposal to make return data copy error can be an amendment to the current one. I would argue that, even despite the inconsistency, this is a rule worth enforcing going forward, even if we're unable to retrofit it onto existing opcodes. What about putting a recommendation into the return data copy EIP that you should not access it beyond its end? So it's more or less undefined behavior, or not recommended, and in the future it might change. I think if we're going to change it, then now would be the time. I mean, yes, that would at least ensure that anyone who later breaks themselves by relying on it has only themselves to blame, but I think it would be better to just fix it straight away. Interesting. Yeah, it does sound like it could be an amendment to the EIP. I guess the question that comes to my mind is just how much it complicates things. I've heard that it kind of breaks away from the standard behavior; is there some kind of domino effect from doing this, will it make other things even more complicated, or is it just this one thing that would complicate the EVM? Can I add one comment on that? Because I
think that for methods without arguments, contracts have just the 4 bytes of the method ID, and they rely on being able to load a full 32 bytes onto the stack, with the rest filled with zeros. Yes, I agree, and I think any change would have to state that call data load, the one that fetches a single word, only errors if the entire extent of the read is past the end of the array, because otherwise it would make fetching the function call ID onto the stack impossible except via memory. In any case, I will write up a comment on the existing return data EIP, and we can argue when we've got something concrete to argue about. Is anyone still there? Yes, I am. I guess we've come to a natural end for this particular discussion. One thing was raised in the Skype chat as well: this might be useful to consider in the high-level languages, enforcing it there by issuing call data size before a call data load or copy. Yeah, I disagree with doing it only in high-level languages; I think the general principle in the EVM should be that when somebody is doing something that is almost certainly wrong, we should fail hard rather than fail soft, which is, you know, returning zeros for instance. Personally, I would fail in all of the cases where it tries to overread, but that might be too harsh a change. Yeah, I think at this point it's impractical to fail when somebody does call data load at offset 0, because that's done all the time with functions that have no arguments; if it had been that way from day one, then I might agree with you. It's actually good to have historical context on this: what's the general approach when we come upon something like this in the system? Do we fail hard generally, or was that a design decision early on for these types of, I can't think of the word, edge cases? Yeah, exactly. I mean, I wasn't there, but looking at the EVM, it feels like to some degree this wasn't
considered as an explicit design principle: in some places it fails hard, but in most places it fails soft. Another example, as I pointed out, is 1 divided by 0, which, for reference, returns 0 rather than throwing an exception. I think the idea was to only fail hard when there's no sensible way to continue, and I think that's entirely the wrong approach, personally. I think that's the approach MySQL took, and it led to numerous bugs where it assumes things about your data types and so on. I think anything as critical as EVM code should always assume the worst. Sure, but I think the main question here is: should we fix it now, and is it worth the risk of breaking contracts? I think that if we can demonstrate, over all historical transactions, that it has limited effect, then it's worth the risk of breaking things. If we find that it would break a lot of historical transactions, then that's a different matter. And I think that for return data copy, it's worth being inconsistent with prior behavior in favor of doing the right thing. Wouldn't the broken contracts your scan fails to find, because no transaction ever triggered the bug, then be a resource for some attacker to cause damage later, by making those contracts fail? What sort of damage would they cause by making them fail? I don't know enough to say; it just makes me nervous that we'd make a change so that contracts which worked before no longer do. I understand the concern. We have done this in the past, like when we repriced gas costs in ways that can make things that worked before fail. Are you concerned or not? I personally am not overly concerned, because I think that breaking contracts, or potentially breaking contracts that used to work, is an inevitable consequence of a lot of hard forks, like the gas repricing for instance. So I think we need to be cautious, but I do see it as a potential for breaking things, or, you know, for an attacker being able to
cause havoc because a contract now fails that used to work. Okay, I could be convinced by a concrete example, of course. Okay, so it sounds like more research is to be done, and Nick's doing that; he's going to write up some of the more formal stuff, and then we can duke it out at the next all core devs meeting. Cool. Any other comments, other stuff in general? Sorry again for having this at the wrong time; Jan, I think you just got here, and I totally started this an hour early, at 13:00 instead of 14:00, so apologies. I'll be releasing notes... Jan left. Oh well, thanks everyone for coming. I'll be releasing the notes and video later. Bye. Bye.