Oh, what about now? Oh, I see. Yeah, I just don't think he was talking. I just updated the agenda, by the way, and I think I posted a link. No, not exactly. I know Christian is going to be... oh, Christian just joined, okay. So yeah, we have a good amount of people so far, definitely enough to continue with the meeting. There's Parity, EthereumJ, C++, Python; we just need some Go people. And we also have EthereumJS, Martin Becze representing. Good morning, Conrad. Looks like I'm in the back of a cave or something, I should probably turn on a light. Good morning, Alex. Pretty good. Let's see, I might go into the Go channel and see if anybody can join. Is there anyone from Go here that I missed? Daniel's on now. Oh, perfect. Okay, hey Daniel. Me and Daniel are sitting next to each other. Oh, okay. And Jan's here from the Ruby client. Good morning, Jan, or whatever time it is where you are. So yeah, Daniel's here, then I think we can start.

Let me see what the first item is, my page closed... oh, Christian, it's EIP 211. So if you want to get started with that, you can go ahead and explain it. I think we already talked about this last time, although it was not officially an EIP yet, if I remember correctly. Yeah. It's another solution for the problem that it's hard to return dynamically sized data from calls. The proposal is to create a buffer that is similar to the call data buffer: it doesn't have a position in memory, it's a separate buffer, and it contains the full returned byte string. There are two opcodes. One of them is RETURNDATASIZE, which returns the size of this buffer, and the other is RETURNDATACOPY, which can be used in a similar way to CALLDATACOPY to copy data from that byte string into memory. There is, I think, one concern. The problem is that the memory of the callee cannot be freed immediately after it returns, and this kind of changes the memory model a bit. There was a proposal to basically reset this buffer whenever memory is resized, and I didn't really get why that can help, so could someone explain that, perhaps? Is Nick here? I don't see him in here yet; I'll send him the link to see if he can join. Yeah, if y'all can pull up the page... it was Gavin who suggested it, right? Yeah, he updated it about eight hours ago; he added a comment. We are not necessarily resizing... I don't like his suggestion, because I feel like it would require us to add in a bunch of extra hooks in order to figure out what is going on. But there are other things, like MSTORE, CALLDATACOPY, CODECOPY and so on, where we check the memory size every time anyway.

So can you just remind me again, what was wrong with some of the earlier proposals, things like EIP 5 and 8, where when you did a call it would only fill up as much memory as it needed to instead of filling up the full size? Yeah, the gas calculations were really complicated; that was the main drawback for both proposals, I would say. Hmm, is it just the fact that you'd have to add another gas check in the middle of the execution, not just at the start, or is there something else?
I remember implementing it in Python, and all I did was move one of the memory expansion checks from the start of the code to the part of the code that actually allocates the memory at the end. It didn't seem more complex. So one of the proposals has the drawback that the CALL opcode can fail after that point, and the other version has the drawback that the callee kind of has to reserve a bit of gas for resizing the caller's memory.

Yeah, right. The problem with this one is that I feel like, first of all, it increases the complexity of the object that has to be passed around as computational state: along with memory and stack and gas, we also have to worry about return data. But it can be generalized as multiple segments of memory; I don't see that as a complication. I mean, return data is exactly the same kind of object as call data, or it's very similar, and you just have another one. I think it's better to have more objects than more complicated objects. And it can be handled very nicely this way. Another benefit is that I imagine CALL as a kind of interface that might also work between two different kinds of virtual machines, and if you specify it that way, memory can stay an internal part of the virtual machine, because you basically only pass around call data and return data.

The other thing, the real reason why this feels inelegant to me, is that it kind of adds in technical debt, because now, for some weird reason, every time a call finishes, instead of copying data once you have to copy it twice, or at least you'd have to copy some pointer twice. Is that the argument? Well, it's not just about that; it's about the fact that lots of extra weird stuff is happening, some of which ends up being kind of vestigial, but we have to keep it around. So after this gets implemented, there are going to be two ways to access the data of a call: one of them is to access it from the output area given to the call, and the other one is to do a RETURNDATACOPY and access it from there. And generally, having two kind of redundant ways of doing something is not really a nice programming practice. That's because we have to retain backwards compatibility, right? Well, right, but even still, if we follow that principle, then I'm worried that thirty years down the line we're going to have fifty of these dirty backwards-compatibility warts. We could say that the return data buffer is empty if the output area of the call is non-empty, which doesn't really simplify things, but at least there's only one way to access the data at a given time, so you can't combine the two ways.
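For reference, a minimal Python sketch of the two proposed opcodes, assuming the semantics Christian describes: a separate return data buffer that sits outside addressable memory and holds the callee's full returned byte string. The `State` plumbing and helper structure here are illustrative, not actual client code, and gas accounting is omitted.

```python
class State:
    def __init__(self):
        self.stack = []
        self.memory = bytearray()
        self.last_returndata = b""  # filled in after every CALL/RETURN

def op_returndatasize(state: State):
    # Push the size of the buffer, analogous to CALLDATASIZE.
    state.stack.append(len(state.last_returndata))

def op_returndatacopy(state: State):
    # Same stack arguments as CALLDATACOPY: memory offset, buffer offset, size.
    mem_start = state.stack.pop()
    data_start = state.stack.pop()
    size = state.stack.pop()
    data = state.last_returndata[data_start:data_start + size]
    data = data.ljust(size, b"\x00")  # zero-pad reads past the end (one possible choice)
    if len(state.memory) < mem_start + size:
        state.memory.extend(b"\x00" * (mem_start + size - len(state.memory)))
    state.memory[mem_start:mem_start + size] = data
```

Whether out-of-range reads zero-pad (as sketched) or throw was exactly the kind of detail still being settled in the discussion above.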
So, would it help at all to have little test implementations to see just how complicated it is, or would that be kind of pointless because we're still stuck on the idea of it potentially being complicated to maintain backwards compatibility down the line? Yeah. Another concern is that it's a small change to a rule that is already quite complicated, and this one adds a new object to the state, as you already said. Hmm.

Okay, so it sounds like this might not be a good candidate for Metropolis due to some open questions, but if between now and the next all-core-devs call there can either be an alternative, or people can come to agreement on it, or come around to the idea of implementing it, I think this can be brought up again. But yeah, I think we should continue the discussion on GitHub unless there are any more comments. Yeah, I think since this is solving a smaller problem, I would say we delay it past Metropolis because of that. It's not as important as the other Metropolis EIPs, but I would fully support continuing with it. Cool, sounds good. Any other comments, Christian? All right, sounds good, so this one we will just continue on GitHub.

Let's see, the next one is the Ropsten spam attack. I am not up to date on everything, so if someone doesn't mind just explaining in a nutshell what's going on there, and whether any action is being taken on the part of the devs, because it's pretty unstable right now is what I've been hearing. And additionally, whether it actually really needs to be stable for future testing for Metropolis or other things we're doing.

So, regarding the chain: we are not on a stable chain, yet again. I'm guessing we have an actor who's actively trying to disrupt it. The problem is that when he raised the block gas limit, blocks got up to around nine billion gas, and when nine billion gas is used, that takes considerable memory on the nodes; it can actually take about two gigabytes of memory just to process a single block, basically to handle the huge state. So, what we're trying to do... I'm not sure whether I fully agree with the Parity approach of doing a soft fork and placing a gas limit on top at the forked block, and I'm not sure what that limit is. But that's possibly a solution that we can follow: basically just release a new version of Geth, or at least a develop version, which bans the current spam chains and gets us back to some sane state, and then hope that whatever the next best chain is will be sane. Arkady, do you have any comment on that? Well, it's not like we're pushing it. We haven't built it into the client yet; it's just a flag that users can use, and we reverted to a previous state. Yeah, so the unfortunate thing, from our perspective at least, is that Parity has the configurability to limit the block gas limit via config files, as a hard limit. As far as I know, no other client has that; at least Geth does not have the configurability to change protocol parameters, or at least this parameter. So it means that if we want to follow along this path, we do need to do some development and make a release. It's not something too hard, but it's not something that we can just release in an hour.

So this parameter that's configurable, how is it normally decided? Is it just hard-coded, or is it something that...
...is adjusted by miners, or do users have to go in manually, generally? Hmm, I see.

Yeah, I think that's a good suggestion, because I think the problem with trying to push the gas limit back into line by itself is that, in order to vote the limit back down, honest miners would have to have over 51% anyway. And if we can't out-vote a 51% attacker, and the attacker wants to keep trolling us, then the attacker will just win regardless.

So, basically, most of the clients have some sort of reorg protection so that you don't reorg across too-large spans or too deeply. Geth doesn't allow you to reorg more than 60,000 blocks, and I'm not sure, but I think Parity has a limit too, basically so you can't reorg deeper than the pruning limit, which is, whatever, a thousand blocks maybe. So the problem is that in order for us to actually dump all this crap, we need people to sync from a previous point. Now, in Geth we have the concept of bad blocks: in a release we can add a bad block hash, and then if the client sees that a chain contains that bad block, it will just delete everything from that point onward and start syncing again from there. And I'm currently working on a minor modification so that fast sync is covered too, so it can basically delete everything even if the node is fast syncing, and then it can actually proceed in a decent way without having to manually work around it. So we could do that; that way we could try to pick a new chain. We would still need to do a manual release in order to force nodes to dump one of their chains.

Okay, so Arkady, is there any strong conviction to maintain the fix the way you guys implemented it, or would it be a possibility to get in a chat with the rest of the client devs and see about this alternate way of coming to a consensus on which chain to dump and which block to continue on? Sure, I mean, it doesn't matter much; we're fine with whatever makes sense. But that particular fork point was before the bulk of the attack started; it still has some state bloat in it, but not as much as what came later. Okay.

And Péter, the approach you were talking about, how much would it actually invalidate transactions that have occurred, to the point where test dapps would lose some of their state and some of the changes they've pushed to the network? Well... okay, sounds good. Or, I mean, it's not ideal, obviously, but it's good that we have clarity on that. I'm not sure if there's any way to avoid that, except to pick the heaviest chain for Ropsten, which would be complicated in itself, I'm guessing. So one thing that we can do is we could try to stick with the Parity chain, because they seem to have figured out a nice block number to fork from and are maintaining a decent chain since then. So I'm fine with continuing on that one. I think the question is how we can ensure that all clients end up somehow on that chain, and also it would then be nice to figure out how to remove the gas limit cap, because personally it feels weird to me that we have a cap on it. But still, to remove the cap it would be nice to know how we can protect against this happening again. Yeah, I would imagine it would be more difficult to provide an incentive on the test net, at least, for that to happen. I guess it wouldn't be impossible; it would just be a matter of getting people to combat it with their mining power, and that's the only way I can think of.
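As an aside, here is a minimal sketch of the bad-block mechanism just described, under the assumption that it works as stated: ship a hard-coded block hash with the release, and any chain containing that block is unwound and rejected. The structure and method names are hypothetical, not go-ethereum's actual code, and the set is deliberately left empty rather than guessing at the real hash.

```python
# Hashes blacklisted in a release; the spam-chain block hash would go here.
BAD_BLOCK_HASHES: set[bytes] = set()

def import_blocks(blocks, chain):
    """Insert blocks, refusing (and unwinding) any chain with a bad block."""
    for block in blocks:
        if block.hash in BAD_BLOCK_HASHES:
            # Delete everything from the bad block onward, then resync from
            # its parent; per the discussion, fast sync needs the same
            # treatment so it can also discard the banned chain.
            chain.rollback_to(block.parent_hash)
            raise ValueError("chain contains blacklisted block %s" % block.hash.hex())
        chain.insert(block)
```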
Anyone else have thoughts on that? Vitalik? He actually offered me personally that he can afford to sacrifice a few of his GPUs to just try to push whatever chain we pick. Oh, awesome. Okay, well, that's pretty cool then. Yeah, I think that's a good idea. So let's go ahead and take this, this wouldn't be an EIP obviously; we can take this into the all-core-devs channel on Gitter and coordinate with all the clients. And actually, Jan or Anton, or any other clients that are in here, do y'all have any opinions on this, or is there going to be any issue implementing this on your side?

So one other workable solution is still that we try to push one chain that we pick as being the canonical chain and make it heavier than whatever else is out there, and then all clients would automatically sync to it without having to do any modifications. So that's also possible. So I think if Parity and Geth can agree and can somehow sync to the same chain correctly, then the remainder of the clients can automatically follow once that chain becomes the longest. The important clients to ensure sync correctly are the ones that have hashing power, and both Geth and Parity do. Yeah, I would say the most used clients for that would be Geth, Parity and EthereumJ, just because I think Harmony might use that test net as the default, I'm not positive. But yeah, since Anton said it wouldn't be too hard for them to do something like that, we can just coordinate this in the Gitter channel. So, any other comments on this? Okay, great.

I also saw in the last 24 hours, this is item number three by the way, there was some discussion about miners using the old recommendation for gas price. Can anyone give a quick synopsis on that and what's been done? Sure. So basically, some people are concerned because they expect that, given that the price of ether has doubled over the last couple of weeks, the gas price should go down to compensate, so the transaction fees don't get too high. The challenge, though, is that in the current environment the situation is a bit more complicated, because basically the only reason miners have right now to not include transactions isn't that they want to save space for other transactions under the gas limit; it's that they're afraid of increasing the probability that their block is going to end up uncled. This is something that a lot of miners started taking more seriously back during the DoS attacks. The basic principle is that blocks that are larger propagate more slowly and are therefore more likely to end up as uncles.

And actually, Péter ran a couple more regressions today, and he discovered that currently the gas price at which the revenue miners get from fees matches up with the cost they have to pay from the increased uncle risk, the equilibrium, is around 19 shannon. If you want the statistics behind that gas price, I can actually paste them into the chat. There it is.
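To make the equilibrium argument concrete, here is a back-of-the-envelope sketch of the trade-off described above. Everything except the block and uncle rewards is an illustrative assumption: in particular, the marginal uncle risk per unit of gas is a hypothetical figure chosen only so the output lands near the quoted 19 shannon, not a measured value, and counting only the static reward difference is a simplification (an uncled block also forfeits its fees).

```python
BLOCK_REWARD_ETH = 5.0      # static block reward at the time
UNCLE_REWARD_ETH = 4.375    # 7/8 of the block reward for a depth-1 uncle
ETH_TO_WEI = 10**18
SHANNON = 10**9             # 1 shannon = 1 gwei = 10^9 wei

# Assumed marginal increase in uncle probability per unit of gas included
# (hypothetical: each extra million gas adds roughly 3% uncle risk).
UNCLE_RISK_PER_GAS = 3.0e-8

def break_even_gas_price_wei() -> float:
    # Expected loss from including one more unit of gas: the added chance
    # the block ends up an uncle, times the reward forfeited in that case.
    loss_if_uncled_eth = BLOCK_REWARD_ETH - UNCLE_REWARD_ETH   # 0.625 ETH
    return UNCLE_RISK_PER_GAS * loss_if_uncled_eth * ETH_TO_WEI

print(break_even_gas_price_wei() / SHANNON, "shannon per gas")  # ~18.75
```

A fee below that break-even price makes a rational miner worse off for including the transaction, which is the mechanism keeping the equilibrium well above what block-space scarcity alone would suggest.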
Okay, so right now, if we want the gas price to go lower, then the main lever, aside from a hard fork to change some constants, would be to look at the uncle inclusion strategy, because inefficiency there actually makes miners much less willing to put transactions in than they otherwise would be.

Yeah, sometimes your connection gets a little bit slow, but it doesn't actually make you drop; it just garbles your speech for a second, so you're still somewhat audible. Thank you. Yeah.

So basically the challenge is that if you look at the average reward miners are getting for uncles, it's around 3.4, but if uncle inclusion were perfectly efficient it would be all the way up to 4.375, and closing that gap would actually reduce the equilibrium. I remember that some developers looked briefly at why this was happening, and I think the problem ended up being that there's a bunch of situations where, when clients receive new blocks, they don't adjust their work quickly enough, so they basically don't add an uncle until a block after they should have added it into the pool of uncles they're trying to create blocks with. So if we want fees to go down, then looking at that would be one of the simplest, or one of the easiest, approaches. The only other thing would be, as I said, a hard fork to adjust some constants to try to incentivize miners to push more transactions in, but I don't think we're at the stage where that's needed.

So the non-hard-fork solution, would that be considered a soft fork, or just education? It's not even a fork. It's a strategy change in the clients, and in fact it's a strategy change that miners should support, because it would give them more revenue. Ah, okay. So it sounds like it's a dual change: changing some of the clients' defaults, and then having something that shows miners that it's optimal. And part of what I was thinking is, I know that in the Bitcoin community there are certain sites that show the optimum transaction fee to get your transaction onto the network. I wonder if there can be some kind of very basic one-page site that shows the optimum strategy for miners to make the most money at the time, using a formula. Oh, great, awesome. Well, the thing is, that covers the optimal gas price; the optimal inclusion strategy is something that would have to be coded into the clients, it's not something that you can just set. Got it, okay, that's different, yeah; the gas price is one of the parameters they can set. Okay.

So yeah, let's start with Parity. Parity, do you have any thoughts on that? Is that something that could be agreed upon and then changed? Well, yeah, sure. As for the gas price, by default we have it denominated in USD, but as far as I know the major pools just don't use the defaults. Right. Okay, sounds good. And then the Go team? Not entirely sure what to change, but in general we can change whatever, so it should be fine if there's a clear thing to change. Just to react a bit to Parity's dollar approach:
I think the problem with all of the approaches that I've seen worked out is that miners override the defaults, and granted, they do, but then average nodes run with defaults that are below what miners accept, and it might happen that the network starts to fill up with transactions that just circulate all over the place without any miner actually including them. So that's my main issue with dynamically adjusting gas prices: somehow, average nodes need to adjust to what miners do, not to what the defaults are. Yeah.

On the general topic, I just wanted to make one suggestion, or actually ask, maybe this already exists, but if not, make the suggestion: aside from just being able to choose the gas price in terms of a number of wei, maybe make it possible to choose the gas price, even in web3.js, as either a percentile or a multiple of the average of what gets included, or something similar. The idea is that that would just make it easier to pay more to get included quickly. Hmm, is there anyone from web3.js in here that can comment on that? I guess it would be Alex or Martin.

...without making a DoS situation worse. So, on the third point, my current instinct is, first of all, regardless of whether fees are low or high, there is a risk that attackers are going to be willing to just make full blocks and fill out full blocks forever, whether that's an attempt to make the blockchain not work well or an attempt to blow up the state. I think, hopefully, once we're comfortable that all of the denial-of-service issues are fixed, then theoretically, even if blocks are being 100% spammed the whole time, that's something that nodes should be able to tolerate. But in terms of state-size growth, I think that's the area where, if we want to mitigate it, we'd have to rebalance all the gas prices again. So it sounds like there's not a quick fix necessarily, but as long as the cost versus benefit is on the benefit side for enabling people to have more freedom to choose the gas price when they're actually sending their transaction, that can alleviate some of the issues people are having.

Okay, so on that, I ran some statistics... Oh, Alex, could you get closer to your mic? Yep, we can hear you now, if you could start over. Okay, so I ran some statistics on this: over the last 10 blocks, of all the transactions that were included, over 75% of them were at exactly 20 shannon, less than 1% were below that, and only above that do they spread out a bit. So basically that means that most people, even though Mist includes a slider, don't change it to make the price larger, and if you put it lower than 20 shannon the transaction will never get accepted, so most people do not change the default slider. Okay. I mean, if it's not too pressing, then that's fine, but it's still nice to have. Yeah, I definitely agree that when the ether price rises, the expectation that the transaction cost should go down is something most people share.
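As a sketch of the percentile idea floated above: the snippet below computes a suggested gas price as a percentile of what recent blocks actually included, written against web3.py-style calls since the examples here are in Python. The percentile strategy itself is the suggestion under discussion, not an existing API, and the window size and percentile are arbitrary illustrative choices.

```python
def suggested_gas_price(w3, window: int = 20, percentile: int = 60) -> int:
    """Suggest a gas price at the given percentile of recently included txs."""
    prices = []
    latest = w3.eth.block_number
    for n in range(latest, max(latest - window, -1), -1):
        block = w3.eth.get_block(n, full_transactions=True)
        prices.extend(tx["gasPrice"] for tx in block.transactions)
    if not prices:
        return w3.eth.gas_price  # fall back to the node's own default
    prices.sort()
    index = min(len(prices) - 1, len(prices) * percentile // 100)
    return prices[index]
```

A "multiple of the average" variant would replace the sort-and-index with `int(k * sum(prices) / len(prices))` for some user-chosen factor `k`.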
Yep. So that would be, I guess, a suggestion for the Web3 team, right? For the Web3 team? Sorry, why the Web3 team? Oh, okay, maybe I'm not following this correctly, but I thought it was the user being able to... or I guess they already can adjust that in Mist, never mind. Yeah, they can adjust that in Mist, but if you put it lower than 20... Okay. So yeah, if everyone else doesn't think it's a pressing issue, it doesn't need to be resolved today necessarily. Any other comments?

So I do think it's a semi-pressing issue. At least we don't have to solve it now, but if we do solve it, rather in the short term than the long term. So basically, if you can figure out how to prevent it from becoming an attack vector, I'm fine with whatever solution. Péter, can you clarify specifically what you mean concerning DoS? Are you thinking in terms of short-term impacts, like pushing block gas usage very high and making it hard for nodes to stay synced, or are you thinking about longer-term DoS that tries to blow up the state to make it really big over the course of a couple of months? Actually, I wasn't talking about the blockchain, I was talking about the network itself. Okay, okay, I agree that's something that probably needs to be studied much more. So, I think last Christmas, I mean Christmas 2014 or 2015, there was an issue on the network where every node was just sending around tens of thousands or hundreds of thousands of transactions, and it was actually slow. We fixed that issue, but it was kind of an eye-opener that a lot of things can go really wrong if you allow transactions to flood the network that will never, ever be included. So it's not something that we should lightly introduce.

Yeah, so just to clarify, the problem is that even though Parity miners, if they use their defaults, will accept transactions with gas prices at 10 shannon, the Geth nodes won't relay those transactions, so effectively you can't broadcast a transaction with less than 20 shannon because of Geth's relay policy. So, Péter, is that accurate for the way Geth operates by default? Yeah, that's actually how it operates, but that's also kind of my point: if you start circulating transactions that won't be included by any miner, then you're just doing useless work. Okay, so yeah, unless there's any more comment, it sounds like this needs more study.

For example, one thing that we could try is something like Parity's approach of tying the gas price to the dollar value, although I really find that a horrible idea from a centralization point of view, because it means you just call an API of one of the exchanges and hope that it won't fuck your entire network up. So that's my biggest issue with it, because you're placing the whole fee market in the hands of an exchange. But if you can tie it in a more or less reasonable way to the dollar value, then maybe at least miners... though probably there will be some miners who won't manually change it, so maybe they will at least... I don't know, maybe we can try to play around with it. The problem is you won't see the negative effect soon, and by the time you see the negative effect, it's too late to fix it. Oh, Arkady, on the Parity client, is there any plan for if the exchange, or whatever data source, is manipulated? Are there multiple data sources being polled for that US dollar price? Currently there's just one data source, but it's checked for being sane, I think, and it can be overridden. Okay.
Yeah, but the sanity check is loose: given that the price is around $20 currently, I would say that anything between $5 and $100 would kind of look sane, but within that range it could really impact the network a lot. Hmm, okay.

So there are a few different suggestions here. Péter, if you could do a summary write-up of what we talked about on the issue, we can circulate that for more discussion and see if we can come up with a short-to-intermediate-term solution, unless it's something more pressing; I don't exactly have a grasp on how pressing it is or how tied it is to the price. Yeah, I'll try to summarize some stuff, at least voice my concerns so that other people can also read through them, and then we can figure it out. Okay, great. Yeah, if you'd write that up and send it to me, I'll circulate it on the appropriate channels for Parity, EthereumJ, Ruby, all the other ones, and maybe even Reddit if appropriate. I think I already have something that I can touch up afterwards. Oh, perfect. Okay, any other comments on that?

Just to add that the transaction relay policy is important: I think it's worth an EIP to specify the transaction relay policy at the networking layer, because right now the transaction origin abstraction, the account abstraction, won't work if Geth nodes don't relay transactions with zero gas price. So is that something... I'll go ahead. So let's suppose that we start relaying transactions with zero gas price, and I start firing out contract calls from a million different accounts all at the same time. Since it's zero gas price, they're all valid. What happens to the network? So that's one of the spam DoS vectors you were referring to earlier? Yeah. The problem is that I can basically churn out an infinite number of transactions that are completely valid, yet will never, ever be processed. And given zero gas price, I can create brand-new transactions as fast as I can and shove them into the network at different nodes, and they will start propagating them. Even if they drop some of them, I don't care; as long as they propagate something, the network will start getting sluggish. Okay, that does sound like a concern. Okay, so yeah, any other comments? Otherwise it sounds like Péter's going to summarize all of this, since it's a non-trivial but solvable issue.

Oh, Daniel, your microphone's not working. Daniel, your microphone just went haywire and sounded like a robot screaming for dear life. If you could repeat what you said, and just watch the chat in Google Hangouts, we'll tell you if it does it again, if you can adjust anything. It might be a combination of your microphone volume, input volume, and how close you are to it. Okay, is it any better now? Oh, it's perfect now. So if you could just repeat what you said. Okay, so I said that my only comment was that the solution to these dust problems is not necessarily a hard cut-off policy of whether to forward transactions or not; it might also be solved by prioritization. If a node has many transactions to choose from, then it might choose which transactions to forward first. That might also mitigate dust attacks as described by Péter. That was my whole comment. Yeah, that's true.
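A minimal sketch of the prioritization idea, assuming a simple highest-gas-price-first forwarding queue with a per-sender cap; all names and the cap value are hypothetical. As the very next comment shows, naive versions of this can still be gamed, so this is the starting point of the discussion, not a fix.

```python
import heapq
from collections import defaultdict

MAX_PER_SENDER = 16  # hypothetical cap so one account cannot flood the queue

class RelayQueue:
    def __init__(self):
        self._heap = []                      # (-gas_price, seq, tx): highest price pops first
        self._per_sender = defaultdict(int)  # pending count per sender
        self._seq = 0                        # tie-breaker preserving arrival order

    def add(self, tx) -> bool:
        if self._per_sender[tx.sender] >= MAX_PER_SENDER:
            return False                     # drop instead of relaying unbounded dust
        self._per_sender[tx.sender] += 1
        self._seq += 1
        heapq.heappush(self._heap, (-tx.gas_price, self._seq, tx))
        return True

    def next_to_forward(self):
        if not self._heap:
            return None
        _, _, tx = heapq.heappop(self._heap)
        self._per_sender[tx.sender] -= 1
        return tx
```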
The problems start happening when, for example, an attacker tries to propagate transactions that pay a lot, yet will not be included for some reason, and they actually prioritize out legitimate transactions. For example: I create a transaction that requires 4 million gas but pays very little, and then a ton of subsequent transactions from the same account that are really, really expensive. Since the first ones each consume a full block, or maybe I create 10 blocks' worth of transactions very cheaply, everybody would be relaying the expensive transactions around, even though they won't get included any time soon, and the legitimate transactions get starved. I'm sure it's possible to solve; I'm just saying the solution is not obvious. Okay, so yeah, it could even potentially be a combination of some of the things you talked about and optimizing what gets relayed. And I guess you also have to judge whether these rules would even be followed, or whether they can be made in a way where everyone cooperates in the best interest of either themselves or the network. So yeah, this is an interesting problem. Any other comments on this?

So, the last item that's officially on the agenda... and by the way, before we go into this fourth item, are there any other ones that didn't make it onto the agenda that someone wanted to cover after item four? No? Okay. So we're going to go ahead with agenda item four, which is the Metropolis EIPs. From my perspective of where we're at with that: some people are formalizing them in PRs, and the EIP editor team has been going in and providing suggestions or getting them to conclusion, making sure that they're syntactically correct by the EIP standard and that there aren't any glaring errors. Does anyone else have comments? I know Vitalik has written a good majority of them, or maybe all of them, so you can go ahead with your summary. I mean, I don't think I have any serious new updates on any of the Metropolis EIPs. I'm working with Christian on finalizing some of the privacy-related ones, and I'm going to be working with Jesse on implementing them. Aside from that, I don't think I've had any new insights since the last call. Okay.

About a month ago, I know Martin Holst Swende had brought up the issue of the difficulty bomb. Is there an update on that, as far as whether the network changes may have prolonged it, or whether something needs to be planned, and on what time frame? Strictly speaking, I'm not sure it has to be prolonged. I did the math, and I can actually just redo the math right now, hold on. This is just on what the Ice Age is going to look like. Yeah, because I feel like there should be at least an emergency mitigation strategy if needed, at minimum, but also just kind of thinking about whether anything needs to be done. Okay, there. So I ran my script with the latest data, and it looks like at the end of March the block time is going to be around 14.6 seconds. At the end of June, it's going to go up to about 20 to 21 seconds, and at the end of August it'll be 32 seconds. Okay. As far as mitigation goes: in the difficulty calculation function, we would basically say, if the block number is less than four million, then use it as-is; if the block number is between, let's say, four and six million, then use the block number minus two million. So we would just add a kind of artificial discontinuity.
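A minimal sketch of that artificial discontinuity, isolated to just the exponential bomb term of the Homestead difficulty rules (the adjustment toward the 10-to-20-second target window is omitted). The bomb doubling every 100,000 blocks is the real rule; the 4,000,000 activation height and 2,000,000-block offset are the hypothetical numbers floated in the discussion.

```python
def bomb_term(block_number: int) -> int:
    """Exponential 'Ice Age' component added to the block difficulty."""
    if block_number >= 4_000_000:
        # Pretend the chain is 2,000,000 blocks younger than it really is,
        # pushing the bomb's doubling schedule that far into the future.
        effective_number = block_number - 2_000_000
    else:
        effective_number = block_number
    # Homestead rule: the bomb doubles every 100,000 blocks.
    return 2 ** (effective_number // 100_000 - 2)
```

Since each doubling of the bomb compounds into the rising block times quoted above, knocking 2,000,000 blocks off the exponent buys about 20 doublings' worth of headroom.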
Okay. And that's something that would hold up all the way until the potential transition from proof of work to proof of stake? Yes. Yes.

So, an update on the transition to proof of stake, by the way: with my latest thinking, and if you want more details I wrote a bunch of Medium posts and I'm going to keep writing more, it does seem like there actually might be a possibility of doing a kind of slow transition. We would switch from proof of work, to proof of work with proof of stake running on top (where the proof of stake doesn't really mean anything yet), to hybrid proof of stake, to kind of more full proof of stake over time. Even though implementing 100% of that roadmap might take longer, I could see it taking more than a year, we would get to some of the earlier stages possibly ahead of schedule. So that's just information to think about. Yeah.

I recently heard an interview with Rick Dudley, I think that's how you say his name, where he was mentioning the certainty that there would be a time in the transition process where proof of work and proof of stake would be running alongside each other, and also the question of how the EVMs are going to work beside each other, or whether there's complete compatibility between EVM one and EVM two. So my question basically is: for the PoW and PoS running alongside each other, is that something that is still a little bit unexplored, or is it pretty much known that through these transitions it's going to take months to decouple the two? What do you mean by decoupling? Running them at the same time as what? Okay, so it's not going to be two blockchains, right? Oh no, not two blockchains; proof of work and proof of stake at the same time. Yeah, no problem. Yeah, so the idea would basically be that we could first implement the proof of stake part very conservatively, so that proof of stake would only finalize blocks after proof of work has de facto finalized them anyway. Then, over the course of a couple of months, we would implement clients that favor the proof of stake fork choice rule. Then we would probably cut down the proof of work block reward and increase the proof of stake validator reward. Then we would implement some hybrid proof of stake scheme that would let us parameterize between proof of work and proof of stake on a zero-to-100 scale, and then we would just implement that. And so the first part would not even be a hard fork; it would be a very weird kind of soft fork. The second part would be a hard fork, but it could be one that eases into the new version over the course of maybe a hundred days or something. Okay, so it's more an issue of implementation coordination once the research team somewhat officially wraps up the process on exactly how that can go. Okay, sounds good.

So, by the way, stage one is already being implemented, in the form of a Python daemon that talks to clients just over RPC. Oh, okay. So is that something that's also in any other clients, or just... oh, implemented, not actually pushed? Theoretically it can talk to any client over RPC. Oh, I see. So my instinct would be to keep it pure Python for now, because it's still kind of in iteration mode.
But once it settles, then it should probably be implemented into clients directly. Okay, and the timeline for that is post-Metropolis, it sounds like? Yeah, yeah. Implementing it into the other clients is post-Metropolis; the Python work will happen in parallel, we can keep doing it in parallel. Okay, sounds good.

So I guess if I were to create a summary on the Metropolis stuff: the EIPs seem to be going well, there is some discussion on them, we're going to continue to monitor the PRs, and the editors are going to make sure those keep moving along and don't get stalled as far as discussion and other things are concerned. And then, finally: when is there a need to actually put a block number down for Metropolis? The last time we talked, or actually two meetings ago, we were talking about making it a fairly conservative number, to allow a ton of time for testing and to have a less rushed hard fork, in order to avoid any issues we've had previously. A less rushed hard fork? When you do Metropolis? Yeah, when you actually do Metropolis. Homestead wasn't rushed either, but some of the recent hard forks had a cadence of being performed within a month, just because of having to mitigate attacks and other things. So I guess this one... sorry, I didn't quite catch that. Yeah, I was just saying the cadence of recent hard forks has been rushed, and Jeff implied two meetings ago that we should be given plenty of time. So are there any comments on how much time that should be, or whether there's a need to set a rough time in the next couple of weeks to decide on the block number, things like that? So, I don't think we should be deciding on the block number until the C++ client has created tests and until all clients have passed the tests. The thing that we can decide is the lead time, the time between setting the block number and the fork itself. I recommend making that longer, maybe going up to something like three weeks. Okay, yeah, that sounds reasonable, or three weeks pending any issues that may arise. Yeah. Okay, cool. Yep, that sounds good.

Any other comments on the Metropolis EIPs? I have a quite specific question about the account abstraction. Is it the case that if we have this special zero signature, the sender is set to minus one, in air quotes, for all of them? Yes. Yeah, okay. So anyone can spend money that might be sitting there, for example? Yes. Okay. All right, sounds good.
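A small sketch of that detail as just described: a transaction carrying the all-zero signature is attributed to the special "minus one" address, so whatever balance sits at that address is spendable by anyone who constructs such a transaction. The function shape here is illustrative; only the zero-signature-to-fixed-sender rule comes from the discussion.

```python
NULL_SENDER = 2**160 - 1  # "minus one": the all-ones 160-bit address

def recover_sender(tx, ecrecover):
    """Return the account a transaction is treated as coming from."""
    if tx.v == 0 and tx.r == 0 and tx.s == 0:
        # Zero signature: no key could have produced it, so instead of
        # rejecting the transaction it is attributed to the fixed address.
        return NULL_SENDER
    return ecrecover(tx.hash, tx.v, tx.r, tx.s)
```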
Any other comments on Metropolis? It could be good to have an idea of when we're shooting to do the fork, whether it's in August: would it be okay to do it when the block time is 30 seconds, or would we prefer it closer to, you know, 15 or 20 seconds? My personal preference would be end of June, but... Hmm, so yeah, that kind of sounds like it's dependent on the implementations and the discussions over what's going on; if the implementations take longer than we're counting on, the block-time slowdown will have gone on a bit longer. Okay. I agree with Casey, though: there needs to be at least some assurance that there is a timeframe we're shooting for, so that the block time doesn't go up to a level that the majority of the community would consider unsatisfactory. One concern is that if the block times get too high, then a lot of hash power might switch to a more profitable chain.

I would argue that at this point, Ether is just so far ahead of other cryptocurrencies in terms of dollars per second that the risk of that is actually lower than the extent to which higher block times would make people happy by temporarily reducing the inflation rate. Sounds good. I actually did the math: back when Ether was at $5 and Zcash was at $50, I would have definitely suggested much more caution, but now that Ether is at $19 and Zcash is at $40, yeah, that concern is further away. Okay. And if conditions change such that we somehow get back to a state like that, not necessarily the price of $5, but other conditions in the cryptocurrency ecosystem, would the timeframe be adjusted? Would we still be shooting for end of June, or would it need to be pushed up quicker, per Casey's concerns? The end of June should be unobjectionable, because 20 seconds versus 14.2 isn't that much. But once we start talking about 14.2 versus 30, that's a different issue. So under those conditions, yes, we'd probably have to push Metropolis up ahead and kick out a feature or two. Okay, sounds good. So yeah, we can cross that bridge when we come to it, and we'll just be monitoring that over the next few all-core-devs calls.

Any other Metropolis-related comments? Christian, I wanted to ask, has there been progress on implementing these in C++, and does it look like we're close to having tests for them? The blocker currently is integrating libsnark properly; that implementation should be 100% finished, and then we can start writing tests. Perfect. In that case, I'd recommend starting tests for everything except the Snark stuff. So, sorry, no: the Snark stuff is implemented, the rest not yet. What was the question? Oh, yeah, I was asking about all the Metropolis features other than the Snark stuff. No, okay, sorry: we haven't started work on the other stuff, and the Snark stuff is mostly finished. Okay, got it. Thank you. Okay, cool.

Any other clients wanting to comment on their progress on the implementations? Arkady, I saw the Parity team, I think, was in the process of at least doing initial implementations of EIP 86? Yeah, that's right. Mostly we are waiting on the tests; that's the most pressing concern. So, concerning 86, does our current test infrastructure even allow for these tests? Because when transactions come in a block, they should all be valid, right? So what I'm saying is, we don't have test infrastructure to test the mining strategy, right? Right. And, yeah, I have no comment on that at all. All right, sounds good. Okay. So, we're going to get tests, and that'll help with testing, but it sounds like we're still at least finalizing some of the specs, so that can definitely happen in parallel. And in general, if we want to test mining strategy, one thing you could do is just put your miner up on Ropsten, check the blocks you create, and make sure that the reward for the uncles you're including averages out at 4.375. Okay. So that can even be coordinated beforehand if we want, if we just ask the Ropsten miners to do that. I mean, well, this is a strategy thing, so it doesn't really need client coordination. Oh, okay. Oh, so, yeah, that's right. Cool.
And then, yeah, is there anything else Metropolis-wise anybody had a comment on? Can't think of anything. Cool.

A quick shout-out to Yoichi's work: he just released a blog post on formally verifying, and I guess presenting graphically, some of the things that are coming up for proof of stake and some of the stuff the research team has been doing. Yeah, so that was for Vitalik's version of Casper. And actually, this week Vlad Zamfir came to the Berlin office, and now Vlad and I are doing something similar for Vlad's Casper. And, I mean, now that I'm explaining everything to these machines, I think this helps with explaining things to people as well, clarifying all those little things. Awesome, sounds good. Yeah, good job.

And the last point is that the EIP editors are currently meeting periodically to go through EIPs. Again, if anyone in here has an EIP, especially one that is mid-2016 or older, if you could go in and update it to say this is something that I still want to do, or that it can be closed, or close it yourself, that would be incredibly helpful, because we are crunching through, I think, 156 or so EIPs that are in various states of abandonment, or just in-progress stuff, including stuff from 2015. So yeah, if you could go in there and do that, that'd be very helpful. And we're also checking the PRs fairly frequently, every few weeks, to make sure that those stay in line. So, yeah, other than that, are there any other comments from anybody?

Yes, just a quick question. In the last meeting, Robert proposed, semi-formally, that we should dump the eth_compile RPC endpoints, and I think it kind of makes sense, and most of the people agreed. He also proposed an EIP, and more or less everybody said that it's a good idea. As far as I can see, there's already a merged PR to delete it, and we also have an open PR. My question is: is this something that we're going ahead with? And if yes, then probably we can just accept the EIP, unless there's a contradiction. So, the most important thing to watch out for here is to update the tutorials accordingly, right? Yeah, I would agree; that's a valid point. And would this then be handled by the groups that make the dapp frameworks, I guess, once you take it away? So, as Robert wrote, I think he also pointed out on the issue, I mean the EIP proposal, that the problem is that nobody really uses these endpoints anymore, because, for example, many people want to select which version of Solidity they want to compile with, and it's not possible to do that through the endpoint. So I don't think people heavily depend on these kinds of things. Maybe you can shout out on Reddit to ask whether somebody's actually using it, but it doesn't really make any sense to use eth_compileSolidity versus just using solc directly. Okay. Though I do hear a lot of people coming in asking how to use it, or saying that it doesn't work, basically, and I guess it's all beginners who are trying to follow the tutorials. Okay, so updating the tutorials would need to be something that is looked at, and right now the tutorials, or at least the ones the Foundation has more or less produced, are pretty outdated themselves, so that's a whole different issue. But inside that EIP we can discuss how to move forward with that in the best way.
Actually, I'm not sure if you guys have some tutorial page for Solidity, because I know that, for example, on the go-ethereum wiki pages, somebody, sometime two and a half years ago, maybe Viktor, wrote some Solidity tutorials that are partly outdated. And in all honesty, I don't really feel that Solidity tutorials should be on the go-ethereum wiki, because they have nothing to do with go-ethereum, and honestly, I don't think it's something that we as the Go team would like to maintain. But nonetheless, people do find them, and then people come nagging us that they're outdated. And yeah, they are outdated, but we don't want to maintain Solidity tutorials under us. I actually was not aware that there is a tutorial on the Geth wiki; I was mainly talking about the tutorials that are on the ethereum.org website. So I think it would be a really nice thing if we could have a more organized approach, maybe an organized effort to create a tutorial page that we can just link to, so that there's only a single place to maintain.

Yeah, ethdocs.org was our initial attempt at that, and that's kind of fallen off as not a priority compared to some of the other stuff the maintainers were doing, so I might do a shout-out on Reddit to see if anyone wants to pick up on that. And for anyone here: for ethdocs.org there's a repository in the Ethereum GitHub account that's called something like homestead-docs. That really needs to be renamed to something more generic, and I hopefully will have time soon to do that; there are a few things more pressing. I was one of the main maintainers; the other ones were Bob Summerwill and Viktor Trón, and I know Viktor's been all-in on Swarm and stuff lately, so he can't maintain it. So it's more an issue of finding someone to maintain it and keep up with the changes. I'll see if anyone in the community can take that on, along with maybe someone who works on a core client helping with their piece of it. And yeah, that's pretty much the deal with that, and I'll comment on the EIP to see how we can move it along, as far as a plan and what type of EIP it should be. Cool. Any other comments?

Great. All right, I think we're good here. I'm going to upload this recording in the next few days; I'm behind on uploading the recordings, but that should be fixed pretty soon, I'll have 10 and 11 up. So thanks, everybody. We'll have another meeting in two weeks, which would be the third Friday of the month; we're on a cadence now of the first and third Fridays every month. So thank you, everybody. Have a good weekend. Bye-bye.