My cat is a core developer, specifically creating an implementation of the Ethereum client in the esoteric programming language Brainfuck. So yes, we know about that one. Yeah. So that's what my cat's up to. Oh, that would be the language for cats, yes. This is a good start to the meeting. Definitely. I woke up with 15 minutes to spare. I woke up an extra hour and 20 minutes early because I thought that 1400 GMT was eight AM my time. It was actually nine. Oh, there are things on the internet to make that really clear. Yeah, I've heard. You type in 1400 UTC and it will tell you the right time in your time zone. Unless your IP is really hiding where you are. That is very true. Let's see. So let me attach the agenda. I just put the agenda on Gitter and on the internal chat here. So we'll wait just another second and then start. It's not much different from yesterday's agenda. There have just been a couple of updates from Yoichi. And I'm in the process of editing the accepted EIPs because we forgot one from one of our discussions on Metropolis. Okay, so we have Parity, C++, EVM, Ruby, Pyethereum, Go, and I think that's it. So, hi, Andrei. Which development team are you with? The C++ team. Oh, hi, nice to meet you. Okay, yeah, I think I saw you on Skype. [inaudible crosstalk] And with that from Greg, we'll kick off this meeting. So the first agenda item is the EIP signaling and voting system update. Casey and I talked to some people from the Carbon Vote, Boardroom and MetaMask teams, and we formed a group to create a voting system on the blockchain that also connects to GitHub in order to show support, by individual and by group, for certain EIPs. So, yep, that's in progress. And some of the first looks at that are pretty good.
And then Casey, do you want to talk about the overall idea of your page that shows the implementations and which clients they're implemented in? I guess there's not too much to say really if you've already seen the table. It's not mandating that client maintainers implement any particular RPC method. A vote that approves or adds or changes an RPC method doesn't mandate that a maintainer of a client or a core dev has to follow that specification. It just means that there will be a red mark in the table, is all. Okay, cool. So yeah, that's about it. Cool. Yeah, what's that kind of based off of, Casey? Who else uses that? Was it a JavaScript thing, I guess? Yeah, the ES6 compatibility table; it's linked in the EIP and the issue. Okay, great. I'll put that in the notes post-meeting. Great. So next is going to be EIP 225. This is the Clique proof-of-authority protocol and the Rinkeby proof-of-authority testnet. So, Peter, you can take it away on that. I'm just going to link it. So basically, it's kind of meant as a solution for our constant testnet problems. Last time, obviously, Ropsten blew up; we all know why it blew up. And last time we kind of agreed that maybe we should try to somehow revive it, but that didn't turn out as smoothly as we expected. And the problem is that even if you do try to somehow incentivize reviving Ropsten, it will always be attackable if somebody really wants to attack it. Maybe if we spin up ten GPUs, it can sustain some attacks, but if somebody really wants to break it, they can break it. And the problem is that kind of everybody in the community blew up when Ropsten went down, because developers rely on it to be able to test their stuff. Even though it's proof of work, most of the developers don't care what consensus engine your testnet runs on as long as they can depend on it.
Many people kind of started rolling their own private testnets; the Kovan one appeared, and others. It's a bit messy. And the entire suggestion behind this EIP is — well, my suggestion is to relaunch a new testnet based on a very, very simplified proof-of-authority protocol. Now, the proof-of-authority protocol that I called Clique — the basic design principle behind the protocol was that we reuse as much as possible from the existing consensus. Basically, it should be possible to integrate it into existing clients with as little hassle as possible. And it should play along with all current technologies: light clients, fast sync, warp sync. It should not require client developers to update their entire code base just to support this. I won't go over the entire spec, but the idea is that the entire consensus protocol is implemented as part of the header fields. And there's a small voting mechanism so that signers can vote to add new signers or to drop existing signers. And the only field that is significantly changed is the extra-data. But that field is dynamic in the original spec — I mean, in our current protocol we can basically do whatever we want with it, because it doesn't break any networking rule, serialization rule, et cetera. I won't go over the entire protocol; you can read it. It's basically just a very simple, elegant proof-of-authority protocol that plays nice with everything. And just to point out, we also did an implementation in Go. I'm really proud that it's extremely self-contained. The whole consensus engine is extremely well commented and is 500 lines of code. And the whole voting mechanism, again extremely well commented, with some other stuff, is 200 lines of code. Basically this is two files that can be plugged into the existing system, and it kind of just works.
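The extra-data layout Peter describes can be sketched roughly as follows. Field sizes follow the EIP (32-byte vanity prefix, 20 bytes per signer on checkpoint blocks, 65-byte seal signature), but this is an illustrative parser, not client code; the function and variable names are mine.

```python
# Hypothetical sketch of the Clique (EIP 225) extra-data layout: a fixed
# vanity prefix, an optional signer list (checkpoint blocks only), and a
# trailing secp256k1 signature ("seal") by the block's signer.

VANITY_BYTES = 32   # fixed vanity prefix at the front
SIGNER_BYTES = 20   # one address per authorized signer (checkpoints only)
SEAL_BYTES = 65     # signature of the header by the sealing signer

def split_extra_data(extra: bytes, is_checkpoint: bool):
    """Split a header's extra-data into (vanity, signers, seal)."""
    if len(extra) < VANITY_BYTES + SEAL_BYTES:
        raise ValueError("extra-data too short for Clique")
    vanity = extra[:VANITY_BYTES]
    seal = extra[-SEAL_BYTES:]
    middle = extra[VANITY_BYTES:-SEAL_BYTES]
    if is_checkpoint:
        if len(middle) % SIGNER_BYTES != 0:
            raise ValueError("checkpoint signer list misaligned")
        signers = [middle[i:i + SIGNER_BYTES]
                   for i in range(0, len(middle), SIGNER_BYTES)]
    else:
        if middle:
            raise ValueError("non-checkpoint blocks carry no signer list")
        signers = []
    return vanity, signers, seal
```

Because all the consensus-relevant data rides in a field that existing serialization rules already treat as opaque, clients can adopt this without any networking or RLP changes, which is the point Peter makes above.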
And this was one of our core — my core — requirements: that it should be something that's trivial to add to all clients. Because otherwise, if it requires a lot of work from client implementers, then it defeats the purpose of running it as a testnet. So that's the TL;DR. Okay, cool. Are there any comments from other client devs, specifically if anyone has initial thoughts on potentially implementing that in their client? And also, are there any concerns with this? Arkady from Parity here. So, yeah, it looks reasonable, and we will probably end up implementing Peter's proposal while also maintaining our own contract-based consensus engine for proof of authority. As for the timeframe, it's hard to tell at this point; a couple of weeks, maybe. Okay. Thanks, Arkady. And then — oh, Vitalik, are you back? Hey, Vitalik, can you hear us? Yeah, I can hear you. I just switched to a better connection. Oh, perfect. Okay. Great. And then Jan, what are your thoughts on this? Hi. I think once the standard is settled, the code should be very easy to implement in Python or Ruby. Okay. Excellent. So what we'll do is — Peter's already written a really very well-written EIP. I'll work with him to turn that into a PR, and we'll do a final check over it and pretty much consider it accepted. And, Christian, just to make sure C++ is on board with this — I thought I saw something in a Gitter chat that y'all were good. Yeah, sure. We'll probably take longer than Parity because we're focusing on Metropolis now. But yeah, it looks good. All right, great. So that one is accepted. We can move on to the next item, which is EIP 214. This, I believe, is kind of shortened into static call versus pure call: should a static call that does a state-changing operation actually throw, or just be reverted? This was discussed a little bit on the channel by Nick or Martin. I don't think I see Nick in here.
Martin, could you elaborate on this and give your point of view? Yeah. Well, I guess, basically, since we have revert, it is possible to do a kind of synthetic pure call by bouncing through, basically, a revert operation, which returns the return value in the revert. So doing that kind of call will allow you to make pure calls which are not hindered. So it's kind of an ugly backdoor into pure calls. Hold on. Do you mean pure call or static call? I'm not sure what I mean. Okay. So static call can read, but it can't write. And a pure call can neither write nor read. Yeah. So this would be able to read. They're kind of able to write, but anything they write will be reverted. Wait, hold on. Sorry. So how do you get the information out but still do the revert — still revert anything that was written? Yeah, because the fun thing with revert is that you're able to pass out the return value as it was eventually defined. Oh, I see. Interesting. So you can bounce through a revert, and the effect is that you can call stuff, and that stuff can store intermediate values during the calculation or whatever, and it can access the state. It can return, and then you revert with the return value instead of returning. So that's different functionality, right? Because in that kind of situation, the state could still change in the middle of the call, and so it could still have weird re-entry attacks through that. Yeah. Well, no, because it would — yeah. So I mean, what if — no, but I mean an attack where, let's say, A calls B, and that call is supposed to be a static call, but then B calls A, and then A calls B, and that does some state changes. Basically, the inner call makes some state changes that influence the outer call. Yeah, but the envelope call is going to revert anyway.
Well, no, but the point basically is the outer call. So let's say the outer call — so B, in my case B, has some state. And the process of calling the function starts changing some state. Then let's say at some point it calls A, and then A calls B again, and that changes a bunch of state. Then after that, once the execution goes back to B, the execution of B is going to continue, but the state will have been changed in the middle. So it's a re-entry attack that's one level deeper in. Yeah, I'm not sure I follow you, but the point I was making is that as soon as we implement the revert opcode — and Nick pointed this out also — then it will be possible to kind of abuse the revert functionality in order to obtain side-effect-free calls. And yeah, do we want all these different kinds of mechanisms? Right, I see. Okay, I'll think about it; it might actually not be an issue. No, it might not be; it's just an open question on the AllCoreDevs channel. Right, actually if I can't think of any problems with that, then that definitely does seem like an interesting way of doing static call. I think there are two reasons against it, and one of them is that you will end up with two calls. So essentially the proposal here was that in your contract, if you're trying to call something with static call in another contract, first you will call another function on your own contract, which does the call and does the revert. And the second reason is that if we have an actual static call, then with static analyzers you could be sure that it will be a static call. Right, otherwise you need to — so the thing that you would do with static analyzers here is you would do a call and then you would do an assert that says that the value pushed onto the stack by the call opcode basically said that it got reverted.
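The revert-bounce trick Martin describes can be modeled outside the EVM. This is not real EVM code, just a toy Python sketch of the mechanism: a forwarder runs the inner call, the inner call may freely write state, and the forwarder ends with a revert that rolls back every write while the return data still escapes. All names here are mine.

```python
# Toy model of a "bounce through revert" pure-ish call: state writes are
# snapshotted and rolled back (the REVERT), but the result is kept.

def revert_bounce_call(state: dict, target):
    """Run `target(state)`, discard its writes, keep only its result."""
    snapshot = dict(state)          # snapshot before the inner call
    result = target(state)          # inner call may freely write state
    state.clear()
    state.update(snapshot)          # REVERT: all writes rolled back...
    return result                   # ...but the return data survives

def impure_reader(state):
    state["scratch"] = 42           # intermediate write, will be reverted
    return state["balance"] * 2     # reads are still allowed

state = {"balance": 10}
out = revert_bounce_call(state, impure_reader)
# out is 20, and "scratch" is gone from state afterwards
```

This also makes Vitalik's objection concrete: reads are not prevented *during* execution, so an inner re-entrant call could still observe or cause intermediate state changes before the final rollback, which a true static call would forbid up front.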
So one thing that does concern me, though, is that if you start using it in that way, then — what if the revert opcode pushes a zero onto the stack? Then you can't distinguish between successful reverts and exceptions inside of the call. So it would almost seem better to push a two onto the stack or something else. Well, the revert sits inside the forwarder, right? And the forwarder always succeeds, and whether the internal call succeeded or not will be part of the return data of the revert. Kind of, but there are still going to be cases where you can't really distinguish. If an internal call fails, then you can't really tell the difference between that and it returning zero data. Yeah, there was a separate proposal to change the return value of call to have three states. Yeah, okay, I'd support that. Yes, I'd support that. And that would totally fix this issue. Well, that's not something we have planned for Metro, right? Having three return values for call. I mean, it's something that would be trivial to implement if we agree on it. Yeah, but it would definitely not be backwards compatible. Well, hold on. The revert opcode right now doesn't exist, right? So if the revert opcode — no, I thought we were talking about what call returns. So what call returns is not that easy to change because of all the existing Solidity contracts. Right, but what if we have zero representing an actual throw, one representing a correct output, and the number two representing the revert opcode — shouldn't that solve it in a totally backwards-compatible way? The current solution is, let's check for non-zero. Yeah, so that's fine. Because, like, unless — oh, I see. Right, so I guess the challenge here is that the throw opcode has two use cases, where one of them is to signal a failure, and one of them would be to abuse it in this way to return a static value. So if we have a — okay, let's start from further back.
So revert with return data only makes sense if we have at least return data size and perhaps even return data copy. Why? Because the success data type and the failure data type, these are two different things. Right. They can have different sizes, so we need dynamic returns. Sure. Although — well, I mean, we can still do it with — we can establish, I think, for most use cases, the failure return data type just being 32 bytes should be enough, right? Cases that require the failure return to be variable size seem limited. But then you need — okay, assuming we have return data size, then we can use that to distinguish throw from revert. True. Revert without data is basically the same as just a throw. Right. What would the assert be after a call then? Would it check the stack value for one, and then OR that with the return data size? So what's the exact question? So, how would high-level programming languages make that work with all of these things? Are you talking about the static call workaround or the other solution? Kind of both at the same time. So the static call workaround use case and the use case of using throw to actually signal an error — the just regular throw-revert mechanism. Yeah, that has not been fully worked out yet in the high-level language. But basically, you check for failure, and in the case of failure, you access the return data and that goes into some kind of exception object. That's it. Yeah, it's a bit unfortunate that one is success. Yeah, so — just for me to keep up with this — is this all relating to EIP 214, or is this a combination of EIPs that build on each other? It's, I think, a combination. Okay. Sounds good. So yeah, 214 is the new opcode static call, and it sounds like, based on that, other things might need to change. Would someone be able to cross-reference these EIPs in the repository?
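The convention the group converges on above can be stated as a tiny decision function. This is a sketch of the idea (with only a boolean call result, a revert and an exceptional throw both report failure, but the return data size lets a caller tell them apart), not the actual spec of any EIP; the function name is mine.

```python
# Distinguishing throw from revert using only the call's success flag and
# the size of the return data: a plain throw leaves no return data, while
# a revert can carry some.

def classify_call(success: bool, return_data: bytes) -> str:
    if success:
        return "success"
    # failure: a revert can carry data, a bare throw cannot
    return "revert" if len(return_data) > 0 else "throw"
```

Note the caveat from the discussion itself: a revert carrying *zero* bytes of data is indistinguishable from a throw under this scheme, which is exactly why the three-state return value (0 = throw, 1 = success, 2 = revert) was floated as the cleaner alternative.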
And what I mean by that is either tag them across each other or give an outline, more or less, of what was discussed today. I think we didn't discuss more than is already written in the issues, did we? Yeah — I haven't looked through all these EIPs. Yeah, potentially. So what we can do is bring this up in the next all-core-devs meeting. But I think someone mentioned this isn't something that's going into Metro, right? No, I think it is. It's down for now. The question is — the proposal was to not include static call because it can be simulated by other mechanisms which will probably also make their way into Metropolis. Yeah. So one kind of side question: are we also interested in pure call? I mean, pure call is fundamentally different because it doesn't correspond to another address, right? Yeah, I mean, it does require some of the machinery, because in the case of static call you have to do something to either disable or throw on any write operations, and in the case of pure call you have to do the same thing for both writes and reads. But I guess doing just one is definitely simpler than doing both. So okay — I personally definitely feel a bit more comfortable with return data size if it means that we can simulate static call without actually doing static call. So I think it would be good to code up a specific example of how to do that exactly. Sure. Yeah. Yeah, that sounds like a good idea. Would that have to be across multiple clients or just one example? By code, do you mean an implementation inside of an Ethereum client, or an implementation of how it gets used in high-level code? No, I was thinking about an assembly implementation. Oh, yeah. So in that case it would be done in, like, Serpent or Solidity assembly or LLL or whatever. Yeah. The high-level thing, I think, needs to be discussed separately, but it's a bit —
The main problem here is that when you look at Solidity, you can't really distinguish an internal from an external call. And the mechanism is, of course, different for internal and external calls. So that might be a bit complicated, but yeah, we'll have to see. Okay, sounds good. To give a summary, Casey and Alex in the chat put some of the ones that cross-reference this issue a little bit. And Christian, would you be able to do that implementation you were talking about and post that across the EIPs and maybe the Gitter channels? Sure. Okay, great. Casey just had a comment. Yeah, we can move it into the under-consideration table. That's a good idea, so that they can all be in there and then we can consolidate any that we need to. So, any other comments on this? No. Okay. Oh, and then Alex, you typed something in chat — did you have a comment? Well, no, it's just — Casey said maybe we should move pure call, static call, revert and return data size into under consideration for Metropolis. But I think revert itself is standalone. Yeah, exactly. Revert seems very uncontroversial, and it's useful even if none of the other stuff gets implemented. Okay, sounds good. But it is somewhat dependent on return data size — yeah, I thought Christian said that revert isn't very useful without return data size and return data copy. Yeah, it's much more useful when we also have these other two. Okay, so let's keep it as accepted then. Okay, sounds good. Yeah, so it's more useful with the other things, but it can also stand on its own even without those. Okay, sounds good. Cool. I think that's all the comments for this. So the next item is EIP 161. I believe that's kind of already dealt with, more or less. Yoichi, do you want to do a brief summary of what you commented on in the agenda page and how that's being resolved? Yeah, sure. So basically it's about the yellow paper status; on Metropolis, things look good, but there's other bad stuff. Good news first.
So, Metropolis: I've created the pull requests for most Metropolis EIPs except the elliptic curve ones, and I visited Gavin's office, and we discussed the Metropolis EIPs, and now it's clear to me what he wants and what he doesn't want. Things look good there. But I have to give a little heads-up here, because the yellow paper on the master branch is currently still at Homestead — even before Spurious Dragon, the previous fork. I once submitted a Spurious Dragon pull request and Gavin merged it, but he reverted the changes; he says he wants some cleanup first. And Gavin said he and I should meet more frequently, so we will be meeting regularly, working on the yellow paper together. And the background to all this is that the yellow paper is not under an open-source license or Creative Commons — no license — so it's just Gavin's authorship and some other contributors' copyrights. So I need to keep working with him, and I will be persistent about that. Okay, sounds good. I saw that, I just wanted to know if this is — oh, go ahead. So yeah, I had an additional concern about EIP 161. Oh, that's exactly what I was going to bring up — I was going to go to your comment on Gitter, but go ahead, Arkady. Okay, so EIP 161 allows deleting empty accounts, but it doesn't clearly specify whether a pre-compiled contract account can be considered empty when it doesn't have any storage. My interpretation is that it cannot, because it contains native code. And this is not an issue for the main Ethereum network currently, because all of the pre-compiles have some balance there. But it might be an issue for future networks — a new test network or a private network or whatever. And it would be good to clarify that and specify it in the yellow paper as well. I can explain the yellow paper status: in my Spurious Dragon pull request it's specified that the pre-compiles can be empty — their code is empty — and they can be removed. And I believe that's what other implementations currently do.
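The disagreement here is easier to see with the EIP 161 emptiness test written out. An account is "empty" when it has no code, zero nonce, and zero balance; the open question is whether precompile addresses (whose code is native and not in the state trie) should ever satisfy that test. This is an illustrative sketch of both readings, not any client's implementation; the names and the exemption flag are mine.

```python
# EIP 161 emptiness, plus the disputed precompile carve-out.

PRECOMPILE_ADDRESSES = {1, 2, 3, 4}  # ecrecover, sha256, ripemd160, identity

def is_empty(nonce: int, balance: int, code: bytes) -> bool:
    """EIP 161: empty = no code AND zero nonce AND zero balance."""
    return nonce == 0 and balance == 0 and len(code) == 0

def is_removable(address: int, nonce: int, balance: int, code: bytes,
                 precompiles_exempt: bool) -> bool:
    """Under Arkady's reading (precompiles_exempt=True), precompiles are
    never considered empty, so they can never be deleted; under the
    yellow-paper-PR reading they are treated like any other account."""
    if precompiles_exempt and address in PRECOMPILE_ADDRESSES:
        return False
    return is_empty(nonce, balance, code)
```

On mainnet the two readings coincide in practice because every precompile address carries a nonzero balance, which is exactly why the ambiguity only bites on fresh test or private networks.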
Okay, well, about this proposal: it's very easy in the yellow paper to implement this particular proposal, so that pre-compiles cannot be empty. So as far as the yellow paper is concerned, whichever is fine, if consensus is reached. A question related to that: I'm assuming that they won't be existing in the state trie, though, right? So the native code — this thing will not be in the state trie. You are right. So the way to distinguish that — there must be an in-protocol mechanism to distinguish pre-compiles and non-pre-compiles. Otherwise we cannot distinguish empty usual accounts from empty pre-compiles. So we need a mechanism to specify which addresses contain the pre-compiles. Well, if all the implementations already go with them being able to be empty, it might be reasonable to just keep up with the de facto standard. But I would like to confirm that this is really the case. I looked at the Go Ethereum code, and this seems to be the case. Yeah, so that was the case before the Spurious Dragon hard fork, and I don't see the issue with keeping it. Although an implementation has to consider having an empty contract that actually returns some data in calls. But that was the case before the hard fork, and I don't really see the reason for the change. Yeah, what is the reason for that? No, I just wanted to say that I haven't checked that part of the code in Go Ethereum for a long time. I think — so at a given point there was a mini-fork on mainnet when geth and Parity went in different directions, and it turned out that both clients had some bug. And as far as I know, the issue was that Go Ethereum did not delete something because it had a rule that pre-compiles cannot be deleted, while Parity deleted it. So I'm not sure whether we delete it or not, but I kind of agree with Arkady that this is a weird corner case and it would be nice to specify it. Especially if you want clients to play nicely on non-main networks.
More precisely, this consensus issue was about out-of-gas happening during pre-compile execution. After Spurious Dragon introduced the empty-account removal, empty accounts were removed when they were touched. And the question was: when a contract was being executed but out-of-gas happens, is this contract touched or not touched when the out-of-gas happened? Usually, when out-of-gas happens, everything should be reverted, so state removal shouldn't happen and the account was not touched. But for pre-compiles it was different, and that's the state now. So we currently have an exception for pre-compiles: even if out-of-gas happens during pre-compiled contract execution, they are touched and their state might be cleared. I have a question about how this — I don't understand how this can be a problem on private networks, since we don't create empty accounts. I'm missing something, but I don't know what I'm missing. No, this is only a problem if people start creating a new network with, for instance, Homestead rules, and then create empty accounts, and then the cleanup happens. If people start with post-Spurious-Dragon rules, then they will never see an empty account. Right, but if we implement this, it will be part of a hard fork. So we can't retroactively implement this. I don't really understand. People can start private networks with pre-Spurious-Dragon rules and set up empty accounts. That's the only case. Yeah, so personally I think that over time we should be deprecating support for old rules more and more. So I don't see much of a need for supporting people making hard forks with pre-Spurious-Dragon rules — sorry, for people making private chains with pre-Spurious-Dragon rules. Okay, so it sounds like that would just be a clarification at the yellow paper level and an assurance that all the clients are now compatible, which I believe was fixed during the hard fork around November, I guess Thanksgiving time.
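Yoichi's description of the "touched" bookkeeping and its precompile exception can be sketched as a toy journal model. This is not client code — just an illustration, under my own naming, of the rule he states: on out-of-gas everything reverts, including the touch, except that touches of precompile addresses survive.

```python
# Toy model of EIP 161 "touched" accounting with the precompile quirk:
# a failed (out-of-gas) call normally discards its touch journal, but a
# touched precompile stays touched even when the call runs out of gas.

def run_call(touched: set, address: int, out_of_gas: bool,
             precompiles=frozenset({1, 2, 3, 4})) -> set:
    journal = set(touched)
    journal.add(address)               # the callee is touched
    if out_of_gas and address not in precompiles:
        return set(touched)            # revert: the touch is discarded
    return journal                     # success, or the precompile exception
```

Any address left in the touched set at the end of the transaction is then checked for emptiness and cleared, which is how the geth/Parity divergence over this exact corner produced the mainnet mini-fork mentioned earlier.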
I think there was a pre-compile that was deleted on mainnet, is that correct? Yeah. A question, though, regarding private networks: should it be assumed that the hard forks are dependent on each other? I mean, if only Spurious Dragon is specified from block zero, should it be assumed that all the previous hard forks are also applied at that block? We had a discussion about exactly this with Jeff and Felix a few days ago. He's reworking our genesis handling, and Jeff's proposal was how awesome it would be if people could turn on and off individual hard forks. But in my opinion, the problem is that this will open up a huge can of worms if there's some unforeseen dependency between hard forks. It could really mess things up. Yeah, so the implementation in C++ is also assuming strictly ascending hard forks. Yeah, it would be good — I mean, it's not critical, but it would be good if there was some general consensus on that. Oh, Vitalik, could you mute? Oh, sorry, hold on. Okay, so it sounds like — I don't think this requires an EIP, because this was more or less dealt with in one of the last hard forks. So, am I right to say that pretty much everyone's in agreement on how the changes are going to go, and it's more, I guess, at this point, that we're all trying to figure out if it's worth it to do backwards compatibility for this — if someone wants to pick and choose what previous hard fork changes they want to put in their private network. So are we more or less saying we don't really want to go down that path because it overcomplicates future improvements? I'm not a client developer; I'm interested in what Peter and Vitalik say about it, and Chris. Yeah, I guess it doesn't matter that much, but it would still be nice to clarify that in the yellow paper. Just update it to the current state of implementation. Okay, great. So yeah, it sounds like the spec is important to be updated — go ahead, Peter.
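The "strictly ascending hard forks" assumption being discussed can be expressed as a one-line check over a chain config. This is a hedged sketch with made-up names, not any client's actual genesis-config validation: it accepts a config only if fork activation blocks, taken in protocol order, never decrease, so no fork can be cherry-picked ahead of the ones before it.

```python
# Validate that a chain config activates hard forks in protocol order:
# each fork's activation block must be >= the previous fork's.

def forks_well_ordered(fork_blocks: dict) -> bool:
    """fork_blocks maps fork name -> activation block, keys given in
    protocol order (Python dicts preserve insertion order)."""
    activations = list(fork_blocks.values())
    return all(a <= b for a, b in zip(activations, activations[1:]))

# A mainnet-like config is fine; activating Spurious Dragon before
# Tangerine Whistle is exactly the cherry-picking this rule forbids.
mainnet_like = {"Homestead": 1_150_000, "Tangerine Whistle": 2_463_000,
                "Spurious Dragon": 2_675_000}
cherry_picked = {"Homestead": 0, "Tangerine Whistle": 100,
                 "Spurious Dragon": 50}
```

Enforcing this at config-load time is what turns "unforeseen dependency between hard forks" from a consensus bug into an immediate startup error.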
No, I just wanted to say — so if we do specify that they can only occur incrementally and you cannot cherry-pick whatever you want, I think the benefit of that would be easier reasoning about the security of the entire chain. Okay, sounds good. So yeah, we'll do the update to the yellow paper. Are there any other comments on this? Otherwise I think it's pretty much resolved. So I will clarify that in the Spurious Dragon pull request I'm trying again. I think you cut off there at the end, Yoichi — repeat that one more time. Ah, okay — so I will clarify this point in my Spurious Dragon pull request that I'm trying to file again. Oh, great, okay, sounds good. The next item is — okay, that was four; five is the Metropolis updates and finding a central location to document EIPs going into Metro. So, in the Gitter channel — I will repost this — I posted three links, and then Casey added a fourth, for where EIPs should go; let me get the fourth one real quick. So basically there is a list from Parity within their GitHub PR or issues system. There's a list from geth — I believe it's a pull request. And then there is a page on the PM repo that has a really old list of all of them, so that's not accurate; I'll be updating it. And then I feel like the one that is the most updated is the link Casey posted, which was the EIPs page: accepted EIPs planned for adoption. So if everyone can go there, it has a list of EIPs that are ready or that are planned for adoption and are in the process of being implemented. Looking at this across the other clients that are implementing this, it's mostly the same list. Does anybody see one that's missing, or any comments about the ones that are added? I think they've all been accepted in previous calls, but we may have to check on that and clarify it in future calls. Yeah, because it looks like, at least in the case of Parity, 86 and 96 are not in their list for the Metro release.
So I was wondering about that, if you could comment, Arkady. I'm pretty sure they are — maybe check the numbers in brackets. Yep, yep, I see. Yeah, I see, because the numbering is weird. Okay, thank you. I see. Yeah. And then let me just look at geth real quick, just to make sure that all of those are correctly in there as well. So we have 198, 140, 98, the pairing pre-compile, return data. Okay. I think they have all of them. Go ahead, Casey. Maybe we'll just add a column to that table that contains short links to each client's pull request, you know, where the implementation is taking place. Okay, great. I guess lastly — Jan, I think you have a PR or an issue that lists all this, right? Maybe I saw that for something else. Sorry, a PR set up for what? For the Metropolis changes — so just the EIPs and their level of implementation per client. I don't think so. I have no PR set up for — you mean the Ruby client? Yeah, that or Python. I just hadn't looked at them. No, there's nothing in Python set up for Metropolis, I think. Okay, no problem. I was just going to check so we could add that link or not. Okay, cool. Any other comments on the list of accepted EIPs planned for adoption? Yeah, I was going to mention that it helps to use descriptive titles in the pull requests that people open that specify the EIP. Because then that title — whatever title is used in the EIP — goes into the table. And so where the titles were too short or not descriptive, like EIP 86, I took some liberty and inserted what I thought was a more descriptive title. Okay, great. Yeah, I think that's a good idea. So for everyone who's doing some of the EIP PRs, that's a good tip to help them be a little bit more clear. So — I didn't actually add this to the agenda because I just realized it's still something that needs to be brought up.
But there's a list of EIPs under consideration. So I know 211, which was return data size and return data copy, is still being discussed. But EIP 5, gas usage for return and call — what's the status of that, Christian? I guess I forgot if we've talked about it in a previous call or if there is a status on it. So my preference would be to replace that with return data copy and return data size, but yeah. Okay — though Vitalik voiced a different opinion in the last meeting. Got it. So yeah, any updates on your opinion, Vitalik, or was that kind of related to the previous discussion we had in this meeting? My opinion on which topic? Whether or not — so on the EIPs under consideration on the readme, 211 is what we discussed earlier; another one is EIP 5, gas usage for return and call. You were more — yeah, I mean, I still feel like 5 is simpler and more ideal than 211. I'd be willing to go with 211 if it also means that we can get rid of static call, though I think there might also be an even cleaner way to do 5 and accomplish the same goal as static call in some other way. Basically, my main concern is that we should be trying to minimize opcode inflation as much as possible. Okay, sounds good. So that's going to still be under discussion; let's try to think a little bit more on that and, on the next all-core-devs call, start to come to a resolution. I would recommend that just because of the timing for the Metropolis things. Yeah, speaking of which — Greg, did you have a comment? Yeah, I don't care about opcode inflation per se, but I don't have the feeling that this little collection of opcodes is logically consistent and minimal.
Yeah, like, for me personally, opcode inflation is maybe not the right word — like, I can handle opcodes — but the thing that makes me uneasy about RETURNDATASIZE is that it also adds another data structure into the computational state. So I guess it's like another sort of unique dimension of complexity in some ways. Yeah, but it can simplify other things. Yeah, I mean, I didn't say that it can't. But we have to pull it all together — there are sort of unrelated proposals that haven't been pulled together. Right, yeah, like maybe we need to come up with a few competing alternatives for how all of these things fit with each other. There are clearly interdependencies. Would it make sense to have a concentrated call on these issues somehow at some point? That seems sensible — like a kind of research meeting or whatever. I mean, we could even just have an ongoing Skype chat over the next two weeks. Yep. Yeah, that definitely sounds reasonable. So yeah, the ongoing text discussion, and then someone starting calls if needed. Another thing — it sounds like part of the way to alleviate the confusion is to start coming up with, like, a couple of paragraphs describing the different proposals. I kind of see it as, you know, EIPs building off each other and proposals building off each other, so making sure that's all understood. A lot of the time when we're talking about these things we're saying, you know, it's dependent on the rest of these either being implemented, or on the way we're going to implement some of these other things. So, getting a clear understanding of that — I'm not sure of the best way to do that; I guess the ongoing Skype chat is probably the easiest. Cool. Okay. So yeah, what we'll do is I will send out a link on Gitter about the ongoing Skype chat related to this. I'll just go ahead and set that up between Chris, Greg and Vitalik, and then anyone else who wants to join based on that Gitter link. Great.
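For context on the 211-versus-5 discussion, the returndata buffer proposed in EIP 211 can be sketched as a toy model: after every call, the callee's output is kept in a per-frame buffer, readable via RETURNDATASIZE and RETURNDATACOPY. This is a rough illustration only, not any client's implementation — the class and method names are made up for the sketch.

```python
class OutOfBoundsRead(Exception):
    """EIP 211: reading past the end of the returndata buffer throws."""


class Frame:
    """Toy EVM call frame holding memory and the returndata buffer."""

    def __init__(self):
        self.memory = bytearray(1024)  # toy fixed-size memory
        self.returndata = b""          # replaced after each call

    def record_call_output(self, output: bytes):
        """Run after CALL/DELEGATECALL/etc. completes."""
        self.returndata = output

    def returndatasize(self) -> int:
        return len(self.returndata)

    def returndatacopy(self, mem_offset: int, data_offset: int, length: int):
        # Unlike CALLDATACOPY, which zero-pads, an out-of-bounds read
        # of the returndata buffer is an error under EIP 211.
        if data_offset + length > len(self.returndata):
            raise OutOfBoundsRead()
        self.memory[mem_offset:mem_offset + length] = \
            self.returndata[data_offset:data_offset + length]


f = Frame()
f.record_call_output(b"\x01\x02\x03")
print(f.returndatasize())   # 3
f.returndatacopy(0, 0, 3)   # memory[0:3] now holds the returned bytes
```

The extra buffer is exactly the "another data structure in the computational state" concern raised above: every client has to carry it per frame, whereas EIP 5 changes gas semantics without new state.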
There was — oh, Vitalik, you had mentioned somewhere online that, because of — it was either the price increase or something else — I guess we have a little bit more time until the block times are adjusted? Yes. I'm actually going to run through the math right now again, because it changes literally day by day. So the most recent difficulty is — ETH stats is being slow. I'm not even sure why I'm using ETH stats. Yeah, ETH stats is kind of wonky right now, I think. Hold on, I'll just use my own node. Okay, 195 trillion. So that's even higher than the last time I did this. Okay, so 195 trillion — that means our hash power is 14 trillion. Then block number 3,368,795, and date +%s to fetch the current timestamp. I'm going to run the script again. Yeah. So by the end of June the block time is only going to be around 19.5 seconds, and by the end of August it's only going to be about 28.5 seconds. So, like, I think we still have — basically, if we get it out before the end of June it's even better, and if we get it out before the end of August it's still, like, less bad. Okay. So my recommendation is probably still, like, end of June as a kind of normal-case target and end of August as a worst-case deadline. But basically, the conditions that users would end up living through would either stay the same or continue to get better and better. Okay, cool. Is that in any way dependent on the price increase? Yes. Okay. So in general, the difficulty of a blockchain is a kind of very lagging moving average over the price. So I would say, first of all, the probability is probably less than 10 percent that difficulty will ever go down again — like, it's almost certainly going to be above 200. So then the question is: is this the top, and is it going to crash back down to the 20-to-30 range?
In which case I could see difficulty topping off in maybe the 250-to-300 range. Is it going to stay the same as it is now? In which case we could see difficulty being in the 400-to-500 range. Is it going to still keep going up? I mean, potentially — like, in the kind of sky-high scenario where Ethereum takes over Bitcoin entirely, we could literally see this delayed until — actually, let me see what happens if I just multiply hash power by another factor of 10. Yeah, so in that extreme case, at the end of June the block time is still 15 seconds, at the end of August it goes up to 21, and only at the end of the year does it go up to 43. So basically, the better we do, the more time we have. I would say, in my personal estimation of the kind of 50/50 case — I can probably run through that right now — that would be difficulty maybe topping off at 500 trillion in maybe the middle of the summer or so. Or even, just to be fair, let's say 450 or 420 trillion — that would also get us to 19 seconds at the end of June, but then the 19 seconds would keep on going all the way up until the middle of July. Maybe we should switch our efforts from implementing the EIPs to pumping the price. Yay, let's get linear results for exponential work. You go ahead. No, I was just thinking — I was just wondering whether, as the idea was, bringing down the block time would also help, but I think that might be — maybe it might. Basically, it looks like the practical thing to do, I think, is to just keep our heads down and see if we can finish this by the end of June. Yeah, I mean, we can always try. But one thing I would like to mention to all of the people who are implementing these EIPs: when you do that, you find edge cases, and you think about edge cases, and you maybe even make test cases for these.
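As an aside on the projection Vitalik walks through above: the "ice age" math can be sketched with a rough, deterministic simulation. This assumes the Homestead difficulty rules (per-block adjustment of parent_difficulty/2048 times max(1 - dt//10, -99), plus a bomb term of 2^(block//100000 - 2)), constant hash power, and exponentially distributed block times so the per-block adjustment can be replaced by its expected value. The 195 trillion difficulty and 14 trillion hash-power figures are the rough numbers from the call; everything here is an approximation, not a consensus-accurate model.

```python
import math

HASHRATE = 14e12        # hashes/sec, approximate figure from the call
START_BLOCK = 3_368_795
START_DIFF = 195e12     # approximate figure from the call


def bomb(n):
    """Exponential difficulty component, doubling every 100k blocks."""
    return 2 ** (n // 100_000 - 2)


def expected_adjustment_factor(bt):
    """E[max(1 - dt // 10, -99)] for dt ~ Exp(mean=bt), cap ignored.

    dt // 10 is geometric with ratio q = exp(-10/bt), so its mean is
    q / (1 - q). Without the bomb this gives ~14.4s equilibrium.
    """
    q = math.exp(-10.0 / bt)
    return 1.0 - q / (1.0 - q)


def project(days):
    """Mean block time after `days`, stepping block by block."""
    n, d, t = START_BLOCK, START_DIFF, 0.0
    bt = d / HASHRATE
    while t < days * 86_400:
        bt = d / HASHRATE   # expected time for this block
        t += bt
        n += 1
        d += d / 2048 * expected_adjustment_factor(bt) + bomb(n)
    return bt


for days in (0, 68, 130):   # roughly: now, end of June, end of August
    print(f"+{days} days: ~{project(days):.1f}s average block time")
```

Run forward, this lands in the high-teens of seconds around end of June and the mid-to-high twenties by end of August — in the same ballpark as the 19.5s and 28.5s figures quoted on the call, with the differences down to the simplifications above.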
But if they're documented, we can make sure that they become consensus tests in the actual Ethereum tests repository too, so we have good test coverage of this — because there are a lot of changes, and it's difficult for Dimitry to keep up with all of them. So this needs to be a concerted effort. That's it. All right, did we lose Hudson? Oh, can you hear me now? Yes. Yeah, my microphone went on mute. So, no, I was saying — Martin, when you were talking about that, did you mean people should write their EIPs very specifically and mention edge cases, or just —? No, I meant that during the process of implementing these EIPs, developers need to make sure that all these weird edge cases they come up with are submitted so that they become test cases. So that all the stuff we learn while implementing can benefit the other clients, and we don't have any consensus issues, because we make sure they wind up in the Ethereum tests repository. Okay, excellent. Is there a formal process, or even just an informal process — the best way to get these to Dimitry or to whoever needs them? I don't think there's a formal process. The informal one is to just ping him or me or anyone in the — there's a Skype channel, Ethereum dev tests, and I'm guessing all-core-devs would work also. Or winsvega on Twitter. Okay, yeah, I'll talk to him about getting a more concerted effort. Peter? I just wanted to add to that — I think it's a really noble cause, but one of the problems that I see for client developers is that, for example, when implementing previous forks, we made a modification and it turned out to be wrong — not because we knowingly changed something, but because the error wasn't caught by the test suite. And then obviously the no-brainer would have been to just submit a new test to the test suite.
But the problem is that it's so complicated and messy to create a new test that nobody really bothers. So probably a long-term suggestion from my part would be: if we could somehow create a small tool that allows a simple way to make tests, so that if I know this is my corner case, I can somehow transform that into a test easily without having to manually create everything — all the blocks. Yeah, you're totally right, Peter, and definitely that's something that should be done. But in the meantime — I mean, I wasn't saying that everyone needs to submit actual test cases, just document the scenarios, the corner cases. For the last hard fork, I think we set up one of those — what's it called — we just typed the cases into a pad, something like that. Just something like that: a short, verbal description. So then Dimitry or someone else can take over and make an actual test case out of it. That's what I'm talking about. Okay, sounds good. Yeah, and I think over time there'll be a more clean process, like the kind Peter's referring to. So yeah, I'm sure Dimitry would be very happy with that suggestion. Okay, so any other comments on that? Just a short comment: Axic already wrote the static-call-by-revert solution. It's a comment on the pull request. Oh yeah, and it's also on GitHub — it's just in this chat room we're currently in. Damn, Alex, you work fast. Okay, so yeah, I think that's actually all the agenda items. Let me just double-check — but was there anything else from anybody? Yeah, no, I think this is good. What we'll do for the last item — I guess it was just where we're going to put all the Metropolis stuff — is we'll use the readme page on EIPs. So if you just go to the EIPs repo, it's the first page on there. It'll say under consideration or accepted, and that's what we'll go with for now. Just a reminder: if you all have any EIPs you want to throw in, please add them to the agenda.
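The kind of helper Peter describes — turning a verbally described corner case into a test skeleton someone can flesh out — might look something like this. To be clear, this is purely a hypothetical sketch: the function name and fixture fields below are illustrative and are not the actual ethereum/tests schema.

```python
import json


def corner_case_to_fixture(name, description, pre_state, tx, expected):
    """Wrap a described corner case in a fixture-shaped dict.

    Hypothetical helper: the field names ("_info", "pre",
    "transaction", "expect") are illustrative only.
    """
    return {
        name: {
            "_info": {"comment": description},
            "pre": pre_state,    # account -> balance/nonce/code/storage
            "transaction": tx,
            "expect": expected,  # post-state assertions
        }
    }


# Toy example: document the RETURNDATACOPY out-of-bounds edge case
# (addresses and values are placeholders, not a real scenario).
fixture = corner_case_to_fixture(
    name="returndatacopy_out_of_bounds",
    description="RETURNDATACOPY past end of buffer must throw, not zero-pad",
    pre_state={"0xaaaa": {"balance": "0x0de0b6b3a7640000"}},
    tx={"to": "0xaaaa", "data": "0x", "gasLimit": "0x061a80"},
    expected={"0xaaaa": {"balance": "0x0de0b6b3a7640000"}},
)
print(json.dumps(fixture, indent=2))
```

Even a thin wrapper like this lowers the barrier Peter is pointing at: the implementer records the scenario in a structured way while it is fresh, and whoever maintains the test repository turns it into a real consensus test.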
I usually try to set that up at least a week before. I'll start doing it right after meetings though and I'll release the audio and some notes on this some time later today hopefully. So yep good meeting everybody. See you all next time. See ya. Bye bye.