Check, check. Oh, Casey, can you hear me? I didn't see you come in. (Yeah, I hear you.) Excellent. All right, it looks like we're up. Welcome, everybody. Here is a link to the agenda.

The first item is subtle issues or questions we need to work out. Yoichi, you put something in about EIP 96. Is that still warranting any discussion?

So when I looked into the pull request, it contains three different programs, three different hex codes, and one of them should be deployed. One is the init code, so I don't have to think about that, but there are two different codes in the pull request. I think I know which one is the right one, but the other one should be removed, I think.

Okay, hold on, I'm going to check myself. Yoichi, you're talking about where it says the blockhash contract code is set to the code above, and then down below it says something about the EVM bytecode and the EVM init code?

Exactly, exactly.

Okay, hold on. Let me just compile this Serpent and see what that gives. And while Vitalik is looking that up, are there any other implementations that need to be checked? Is there a Solidity one?

No, this is the only implementation. It should probably have some comment there about it being a signed comparison.

Okay. Yeah, so the EVM code at the top is just something old and incorrect, and I'm just going to ignore that, or rather I'm going to edit it. I'll do this right now.

Thank you.

Okay, great. I think that resolves that, right?

Sure, it's a minor editorial issue.

Great. Okay.
The next item is EIP 603. Matthew Di Ferrante has a comment in the agenda that explains his reasoning for implementing a different curve. Oh, Vitalik, were you going to say something?

Yeah, I was just going to say that I am a bit more in favor of this than I was last time, basically just from seeing how inefficient the other curve is.

Yeah, let's have Matt, who's on the line, explain it to everybody, and then we can go through the pros and cons. It does seem like a neat EIP. So Matt, if you want to go ahead.

Yeah, sure, thanks. Can you hear me? (Yep.) Okay, cool. The EIP itself is just kind of a copy of the other one, but the comment goes into the detail. Generally, for me the biggest reasons are that it's a widely supported, widely implemented curve, and people won't have to roll their own implementations of things going forward. alt_bn128 is relatively new, and it's inefficient if you use the alt implementation; the really efficient version of that curve requires specialized execution that isn't stable on the vast majority of platforms. Beyond that, there's the ability to verify and manipulate curve points from signatures on transactions, and the fact that tons of other coins use secp256k1 natively, which allows you to do nice pegs between chains. And the biggest thing I see is that using that curve securely is less trivial than using secp256k1.
There's only one group on that curve, and you can't really mess it up if you just use the normal curve operations, whereas on alt_bn128 (BN256, anyway) there are two groups with slightly different properties: if you choose the wrong one, you break ECDH easily, and if you don't realize there are two groups, there are way more mistakes to make. Generally, the strongest support is the community support, and so much already supports secp256k1, and it's way, way faster. So that's my rationale. It seems silly to add a curve and not also add the one that's supported natively.

Mm-hmm. I actually don't have anything to say that I haven't said before.

Well, is this the same curve that's in Bitcoin? (Yes.) Okay. Yeah, this seems like a good idea. Does anyone have feedback, or any opposition or questions? Not to say I'm going to accept it immediately, and this is obviously not going into Byzantium, but let's see. One question I have: for anyone here with experience with the secp256k1 libraries, the stuff we already use for ecrecover, how easy or difficult is it to call specifically the parts that do the curve adding and multiplying?

So I've implemented this on my own, and it's very easy. It's like five lines of code per operation, really.

Okay, and you're thinking that would be the same across different bindings for different languages?

Yeah, because there are two different ways to add this to Go: you can use the C bindings, which are portable across languages.
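For illustration, the point addition and scalar multiplication under discussion really are small operations. Below is a hedged pure-Python sketch over the published secp256k1 domain parameters; it is an affine, non-constant-time toy for reading along, not any client's or libsecp256k1's actual code.

```python
# Illustrative sketch of secp256k1 point addition and scalar multiplication.
# Affine coordinates, no constant-time hardening; NOT for production use.

# Published secp256k1 domain parameters (y^2 = x^3 + 7 over F_p)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # p1 == -p2
    if p1 == p2:                                      # point doubling
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:                                             # distinct points
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(point, k):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result
```

A library binding would expose roughly the same two entry points, which is why the per-operation glue in a client stays tiny.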
There's also btcec, which is a pure-Go implementation, if you want to use that. For Parity as well, it exposes the same bindings; ecrecover kind of requires bindings at that level of access anyway.

Okay, any other feedback? Hmm. What I'm thinking is, if there's no opposition, this will be something that comes up in a future core dev meeting, potentially in the next few. We'll make sure that the EIP is up to standards, have one of the editors review it, and then re-approach this for official approval sometime in the future, after everyone's been given a few weeks to comment on the actual EIP PR, since this wouldn't be going in until Constantinople anyway.

Mm-hmm, sounds good to me. I think I might have heard someone speak up; was there anyone else? No? Okay, great.

The next thing we have... oh, I think Yoichi dropped off for a second. Google's been having some problems this morning; I think they updated YouTube and dropped everybody who was logged on at like 8:58, at least all my screens refreshed, so maybe that's what's happening.

Updates to testing. This would be Martin, Jared, Casey, and Yoichi if he comes back. Martin, what is the summary for testing, unless someone else wants to take it?

Oh, Yoichi is back as well. (Yeah, I was also kicked out a bit there.) Yeah, we were just going over testing, and we were asking if anyone wants to give an overall update. I'm not sure who's been most active in it, because I don't think Dimitry is here right now to give a full summary.

Okay. So now on Hive we see better numbers: we are down to 10 errors on one side, which is nice, but for cpp we have seen some 700 errors that I cannot reproduce locally. It might just be some pull requests merged in cpp but not matched in the tests, or vice versa, or something like that; that could be the reason. I haven't been able to reproduce it yet.
Also, many people pointed out mistakes in the tests this week, so that's a good sign, I think. It's getting to be the cpp team against everybody else, so we are keeping busy.

Okay, great. So the next things are basically agenda items B, C, and what's coming up as C part 2, about gas prices and client implementations. Unless someone has a better idea, we'll just go with the client updates on implementing the EIPs first, and then go through the gas pricing for the opcodes. So we'll start with geth; it looks like they're done with all their EIPs. Peter, do you have an update?

Yeah, so we are in the same place as two weeks ago. Essentially we're mostly done; as far as I know, all the new tests that have been added are still passing. There are only eight failing tests that we currently have. I need to investigate them, but there are some known issues with some tests, so I'm not sure whether these eight failures are because the tests are wrong or something else is wrong. But apparently we are mostly done, so we're just trying to look at those.

Sure. Parity, I think that would be Arkady.

Yeah, I'm also working on fixing test issues. I think there are only a couple of state tests that are currently not passing. I also need to implement the blockchain tests. I think we'll be ready in two or three days.

Okay. I think I saw Yoichi or Martin mention something about the Parity tests. Martin, you were just speaking up, go ahead.

Yeah, so I'm wondering: is Byzantium disabled now, so that it's not possible to configure Parity to run on Byzantium rules? Because it's failing all the Byzantium tests on Hive.

Yeah, you would need to use a branch that we have. I'll send you the branch name.

Perfect, thanks. Okay, great. Next we have cpp-ethereum. Who's on here? I know that Christian couldn't make it. It looks like... Yoichi, actually, I think you might be up to date on that. No? Andre? Oh, Andre, sorry, Andre.
Go ahead.

Well, it's actually Yoichi mostly taking care of the Byzantium updates in cpp now. I know that there were several minor fixes during these two weeks, but I don't have much detail.

Yeah, I'm looking at the Metropolis implementation progress PR in the cpp-ethereum repo, and it looks like EIP 86 has a few items left, 100 has a few, and 211 has a few, going from the Google doc that's kept up to date. But that doesn't seem too bad; it looks like Jared and Yoichi covered a lot of them. Cool.

Let's see, EthereumJ. And welcome, Mikhail; this is his first meeting. He's from the EthereumJ team.

Yeah, thank you, guys. We have implemented all the EIPs, and we are passing the blockchain, state, and transition tests successfully. Now I'm working to bring the VM tests and the transaction tests up to date, so we should be ready in two days, I guess. I just need to check that we are passing them too, update them, and bring them to the new format in our code, so we can support the new format of those tests and run them and check how it works. So that's it.

I forgot: is EthereumJ compatible with Hive?

It was compatible with Hive some time ago. When we are done with the test checks, I'm going to work on Hive to make EthereumJ fully compatible again.

Great, definitely talk to the Hive guys about that; reach out if you need me to get you into a Gitter channel or anything like that. Okay. All right, thanks for the update. EthereumJS, I think that would be Casey.

Yep. So this morning we merged the elliptic curve pairing precompiles, and almost all the state tests are passing now, thanks to the great efforts of recently joined contributors. We still have about a dozen failures to debug, and the blockchain tests are in progress, so expect us to be passing the blockchain tests pretty soon too.

Okay, great, thank you. Let's see, yellow paper: any relevant updates on that, Yoichi?

Yes. This morning
it was pointed out to me that something was missing: the Byzantium difficulty change and the block reward change. I had forgotten about them, so I added a PR about that today. That's it for today.

Okay, awesome, thank you. Pyethereum, Vitalik?

Not really any progress since last time.

Okay, and I don't see any other clients on the call. I need to reach out to the Haskell team; I think they're the only ones missing that are actively kept up with, as I believe Ruby is deprecated.

Okay, so the next item is determining the gas prices for the opcodes, which was tasked last time to Arkady and Martin. I saw that Martin keeps up a repository called benchmarking in the Ethereum org; I'm going to post the link to it. But Martin, you can go ahead with your overall update, any issues or just the summary.

Yeah, so in summary, we had a chat about this early in the week, me and Casey and Vitalik, and based on the benchmarks there's an analysis with some suggestions, and we've made some concrete suggestions from that. Let me just find the analysis here. The current suggestion is that for addition we would keep the current cost, which is 500, the reasoning being that even if we increased it to a thousand it wouldn't really matter, because there's the 700 call cost which we can count in on top, so the total cost will still be 1200. So we can do that. For MODEXP, the analysis suggests that we should multiply the cost by around 1.5 or maybe 2, but then it turned out that both Parity and cpp use a quite unoptimized version of MODEXP. I think, Vitalik, we settled on just multiplying it by 5 instead. Is that right, Vitalik?
For MODEXP, yes. And my reasoning for multiplying it by 5 (other people, feel free to check this) is that I believe exponentiation of 4096-bit numbers by 3 would still only cost something like 5,000 gas, which is still not very high.

Continuing, on the scale for multiplication, the analysis wound up somewhere in the range of 30 to 50k, and I think we said 40k, for it to be usable and in the hope that it could be further optimized in clients. And for pairing, Casey, Vitalik, do you have any notes about what we said about pairing?

I remember that we had decided that the current schedule that was in pyethereum, so the 100,000 base and 80,000 per point, is fine, and we should just not change it.

Yeah, because that would leave us about a factor of two away from ecrecover, but still with an acceptable megagas-per-second for the hardware that we've been doing testing on.

Okay. So looking at this, what is left to be benchmarked, or are we comfortable starting to make some decisions about the gas prices for Byzantium?

I would hope that we can just decide today: yes, we'll use that scheme. That's my hope too. If during the coming weeks someone finds a denial-of-service vulnerability with specific input vectors, then we might have to go back on it with some kind of emergency rollback or change, but hopefully that's not going to happen. So I don't think we need to do any more benchmarking before we make a decision. We'll make a decision and hope we don't have to go back on it.

Okay, that sounds good. Oh, sorry, if there are noises, that is my cat playing with a mouse that squeaks. What is the best way to organize making these decisions? Should we use the benchmarking repo, or is it small enough to just talk about over Gitter?
I think everyone who has had an interest in these things has been invited to read the analysis, and I talked about it on all-core-devs, so I'd prefer to just ask: does anyone on this call object to the proposed figures? Unless someone speaks up, I think we should just put the numbers into the EIPs, then notify everyone that these EIPs have now been updated, get it implemented, and then just roll it out in the tests and see whether the clients pass or not.

Okay, that sounds good to me. Anybody have any comments or objections?

No, I kind of agree. At this point we should just pick whatever numbers we are currently at, if we think they are reasonable, and then just roll with it, because this has dragged on way too long already.

Mm-hmm, sounds great. Okay, the next item on the agenda: review the time estimate for testing and release. Some of the thinking behind that was that, if we felt testing was close enough to being finished, or finished, we could set a testnet fork block number, maybe even today, and just shoot for that so that we stay on target. Then, as we get closer to the testnet fork or at its start, we can set a block number for mainnet, depending on how well the testnet goes. We would put that in with the caveat that if, while on testnet, we find a vulnerability or something we need to fix, we can delay. What does everyone think of that?

Hmm, it's fine by me.
So today is September 8th. In a week it will be the 15th; two weeks will be the 22nd. I think block times are going to hit above 30 seconds mid-month. No, I think they're going to hit 30 seconds at the end of the month, rather, but I can check again.

Yeah, it's pretty sure to be at the end of the month, but I will oblige with another run: 4251936. Let me just get the timestamp, and I can run the piece of code that I've run way too many times, one more time.

Great, and while we're getting that... So the code says that we are going to go up to 30 seconds on the 22nd, and we are going to go up to 39 seconds around October 27th.

Okay. So, as far as testing goes... oh, awesome, welcome, Dimitry. Dimitry, are you able to hear me? Yeah? Awesome. We were just talking about whether, if the testers from the testing team and the client teams felt comfortable, we could declare a testnet fork block number today, and then declare a mainnet block number sometime after the testnet fork starts, or right before it. Then, if the testnet goes well, we keep the mainnet block number; otherwise we can delay it if we find some issues. In your opinion, Dimitry, where are we in terms of testing, and are we comfortable enough to launch the testnet fork in the next week or two?

Yeah, I think we're comfortable to launch it, but we still don't have any news from the Parity client or the other clients; only geth is working on the Hive consensus issues they are having right now.

I think I talked to Arkady recently. Arkady, is it still the case that you believe that in the next week or less, I guess 7 to 10 days, Parity will be passing most of the Hive tests?

Yeah, we should be passing all the tests in the next few days.

Okay, that sounds good. So if that's the case, then I think that doing something like having the testnet fork 10 days from now, so something like the 18th,
So something like the 18th Might be a good idea What's everyone's opinion on doing something like that? Sounds good so no the better Yeah, I agree and I mean the reason for that is it's a Monday I feel like Mondays are better to launch things because you don't have to work the weekend if you launch it on a Thursday or a Friday and things go wrong or anything happens So it looks like Let's see If we do that on the 18th, and then we run the test net for two weeks we could launch on October 2nd How long should the test net be run for because I've heard figures between two and four weeks I know there's some period of time where you know, it doesn't really make a difference. How long you run it after a while We just kind of know it's gonna be good or bad What's what's the opinion based on previous experience with this? So this is how long we run the test that before the switch over to the main network. Correct. That's the question Okay, um, so my answer is probably still gonna be around three to three to four weeks Okay Peter, did you have an opinion? I saw you talk for just a second. I just want to say that Basically if things go wrong, I can imagine that they will go wrong pretty fast I guess I would have said two to three weeks doesn't really matter But yeah, I don't think it makes sense to run it for much longer because If things break it will break Okay Well, the good news is we don't have to decide on a main net number today But it sounds like the main net number may be on the 2nd or the 9th of October So something like that would make everyone feel comfortable. It sounds like Also doing it on the 18th would give us 10 days To finish up testing so that that's gonna give us a little more time than we need probably from what it sounds like We can prepare. Personally, I'd prefer the 9th over the 2nd, but you know Okay. 
Yeah, well, I don't see why not; playing it safe sounds better.

Yeah, and so the block number will probably end up being around 4.35 million, or somewhere between that and 4.4 million if it ends up dragging on later.

And that's for mainnet, right? (Yeah, for mainnet.) Okay, the 18th, and then...

Just one slight note: let's please not decide the final block number before the testnet forks. For the Homestead release we picked a number which was only preliminary, and then we kind of delayed it, and apparently there was a release with the old number and then a new release with the new number, and some clients forked off because they hadn't updated and thought they already had the correct block number. So if possible, let's see the test network fork first and then finalize the mainnet number.

Okay. So, as far as the testnet number, what's a good number that would fall roughly around the 18th?

One note about the testnet block number: because the difficulty is very low, especially relative to mainnet, if more hashing power comes online and starts mining on the testnet, it can accelerate the date when the block number is reached. So we should probably allow for one to two days; target one to two days later than the intended date. That way, if the hashrate goes up by 100x, at least you don't reach the block number earlier than intended.

Mm-hmm. Okay, so what would you recommend? I guess that would be choosing a block number that is a bit ahead of the 18th, I guess, or am I thinking of this backwards?

Yeah, that's right. I haven't done the numbers myself, but I can work on it real quick here.

Yeah, no problem.
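The quick calculation being offered here is just a linear extrapolation from the current chain head. A sketch, with a hypothetical speedup parameter for the hashrate-spike scenario just mentioned; the Ropsten figures (head around 1.63 million, roughly 1.69 million in ten days, implying about a 14.4-second average block time) come up later in the call.

```python
# Rough fork-block extrapolation, as promised "real quick" above.
# Assumes a constant average block time; Ropsten's real block time
# varies with hash power, which is exactly why the final number is
# being decided closer to the date.

def estimate_fork_block(head, days_ahead, block_time=14.4, speedup=1.0):
    """Linear extrapolation of the chain head `days_ahead` days out.
    `speedup` models extra hash power arriving (e.g. 2.0 if the
    hashrate doubles and block times halve)."""
    return head + int(days_ahead * 86_400 * speedup / block_time)

print(estimate_fork_block(1_630_000, 10))   # 1,690,000
print(estimate_fork_block(1_630_000, 11))   # 1,696,000, i.e. the "1.7M"
```

With a large hashrate spike the same block number arrives proportionally sooner, which is the rationale above for targeting a block one to two days later than the intended date.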
We still have some other agenda items, so we can come back to this at the end. Whoever wants to can try to come up with a nice clean number for around that time.

I would like to suggest, since the testnet can be easily influenced, wouldn't it be better to say today that we fork the testnet on the 18th, and set the block number later? We don't decide the actual block number today, but decide it later, for less variance.

Yeah, I'm okay with that. What would be the minimum amount of time before a testnet fork that a client would need to be released with these numbers?

Well, honestly, we're already pushing that limit.

Yeah, I guess what I'm saying is, if we decide the number three days before, then everyone will have to update their clients, which is okay, but I don't know if that would cause problems if a lot of people stay on the old client and keep running Ropsten.

Some clients will definitely be on the old Ropsten; there are already a lot of clients on various forks of Ropsten anyway. The important thing is that the hash power pointed at it is updated, that the miners update, more so than all the clients.

Okay. So something like deciding the block number on the 13th or the 15th: would that be a combination of enough lead time while avoiding a lot of variance in the block number beforehand?

Yeah. Okay, next Wednesday sounds good, because it's midweek, so if there are any complications we can deal with them.

Let's say the 13th. Anybody opposed, or want to push it to the 15th? I don't know enough about how fast these things can change to really make a determination; I'm just kind of throwing out numbers. I do know they change quickly. By the way, does price affect this?
Yeah, so the price going down increases the likelihood that mining power goes down, which makes block times longer. In the long term, though, it does have one kind of happy side to it, which is that it slightly delays the next two hard forks.

Yeah, I won't mention it on this channel, just everyone check Gitter real quick, but basically there's a rumor that could potentially decrease the price, so that's kind of why I'm asking.

Not just a rumor; from everything I see inside the Chinese channel, it looks like it's very likely true.

Yeah, so basically, due to that information, which I won't spell out because it's just a rumor at this point, things could get a bit weird. Anyway, that is interesting. Basically, let's decide it next Wednesday then, I guess, unless anyone has any opposition to deciding the testnet block next Wednesday.

Seems fine to me.

Just for an estimate, the testnet, Ropsten, is currently at 1.63 million, so if we do 10 days, we'd be at about 1.69 million, unless I'm mistaken. And one extra day would be 1.7 million, which is a nice round number.

Mm-hmm. Okay, that sounds good. 1.7 million would be a nice round number; I have that noted. Oh, my cat's freaking out. As far as the mainnet block: what I'm wondering is, if the price drops dramatically, how does that affect the block time? Because that's what a lot of people in the community will be wondering. How does the price dropping affect it, I guess?
Oh, yeah. Basically, it makes the effect of the difficulty bomb much stronger, but only if hash power really drops, and in our previous experience even large price drops have not, in the short term, made the mining power drop.

Okay, great. So it sounds like the risk isn't that high from the economic factors here. (Yeah.) Cool. All right, so we have that block number done. What's the next item on the agenda? Any other comments on the block numbers and such? In summary: something around 1.7 million is the estimate for the fork block number on Ropsten; we're going to decide it next Wednesday, to see how difficulty and other things change on Ropsten between now and then, and we're tentatively trying to hit September 18th for the testnet fork. At that point we would start deciding on a mainnet block number, which would hopefully be three weeks after Ropsten, putting us at October 9th. That is an unofficial number; for all the coin media that keeps saying September: it's not September. It would be October 9th, potentially, unless something goes wrong with the testnet.

So item number two on the agenda is the snappy compression for devp2p; that's EIP 706. Oh, go ahead.

Yeah, just a quick note: when we decide the testnet number, could we please make it quick? Someone posts the suggestion to the all-core-devs channel on Gitter, those opposed can raise their hands, and we can make it official.

I think that's a good idea. We're not going to have a call next Wednesday; I should have made that more clear. It would just be a Gitter chat, because we already have somewhat of a number, and if things change we can just adjust that number, the block number, I should say.

Cool. Any other comments?

Probably just do the Gitter chat
at the same time as the regularly scheduled calls here.

Yep, you're talking about next Wednesday, right? (Yeah.) Yep, so same time as the regularly scheduled call, 14:00 UTC, we would have a discussion on the Gitter channel about the block number. I'll put that in my notes as well: 14:00 Wednesday. Okay.

So the next item is EIP 706, the snappy compression for devp2p. Peter came up with this; it reduces the sync bandwidth by 60 to 80 percent and would affect the network layer of the protocol. I've reached out to the Swarm developers; I think Daniel, who works on Swarm, has already commented a little on it, and we've received more comments in the EIP since then. Let me post it here. So, Peter, if you can give a quick summary, then we can discuss.

So the crux of the EIP is that, I think back when Ethereum launched, we had some plans to extend devp2p, which is the base networking layer of Ethereum, so that it employs compression; currently it just encrypts whatever the upper-layer network protocols throw at it and transmits it that way. We had this idea, but nobody ever pursued it, because it was bundled together with some other changes. In the last few days I was running some benchmarks to see how compression would impact performance, and it's quite staggering: using go-ethereum to do a fast sync on mainnet, it reduces download traffic from 34 gigabytes to 13, so that's about a 60 percent saving, and on Rinkeby it goes from 2.6 gigabytes down to less than 500 megabytes.
On Rinkeby it's more spectacular because it's quite full of spam, but the idea is that compression really can produce an enormous saving on the eth protocol.

So I tried to figure out how much code, or how much effort, it would take to actually implement snappy compression, or compression in general, for devp2p, and it turned out to be really, really simple. All it takes is: whenever a message is about to be encrypted and put on the wire, we do a compression round on it, and similarly, when it's decrypted off the wire, we do a decompression round. The whole code for this in go-ethereum is about 38 lines, so I think it's a worthwhile addition, and I wrote up this EIP to discuss it.

Technically, the EIP consists of bumping the devp2p version number from four to five, and whenever a handshake arrives containing version number five, clients should do snappy encoding from that point onward. There have been some discussions about corner cases; you can find them listed in the EIP.

So one of the first questions is whether people feel confident about rolling out such a change or not. I've heard one concern about it from the pyethereum team, actually from Piper. He was saying that Python doesn't have a pure-Python implementation of snappy, so this would entail Python having to wrap the C++ library, which they are a bit reluctant to do. As far as I know, all the other clients, I mean all the other languages, Go, Rust, JavaScript,
do have implementations for snappy. So that was the only concern. One open question is whether we want to make compression mandatory, or whether we want to have, I don't know, maybe a compatibility flag, so that clients can signal whether they want to compress or not. So these are basically my two questions: one, whether there's general interest in including something like this, and two, if there is interest, whether we want to make it mandatory or optional.

Mm. Is there some widely available full specification of how snappy works?

So, I linked a Wikipedia article. I am personally not familiar with how snappy works internally. I'm not sure whether there's more information; Nick may have looked into it.

Sorry, what was the question?

The question is whether you have any knowledge about how complex snappy is internally.

I looked at it briefly before, although not in great detail, and it's relatively straightforward, if I recall correctly. And of course, as with most compression algorithms, if you really wanted to, you could implement a dummy one that just doesn't compress anything.

I see. So, in essence, it's important to note that the EIP proposal is backward compatible, in that a client may choose to run version four and not do any compression. So it is perfectly fine for some clients to upgrade now and some clients to upgrade in two weeks, or four weeks, or whenever they feel like it; clients can do this completely independently of one another.

I think it's a good idea, pending more discussion. So this would not require a hard fork, because it's part of the networking layer; the clients would just have to implement it, correct? (Yes.) If not, I mean, how much more discussion do we need? I think we just answered Piper's question, didn't we?
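The mechanics described above can be sketched compactly: negotiate on the handshake version (both peers must advertise devp2p v5 for compression to kick in, so v4 peers keep working), compress just before encryption, decompress just after, and drop anything over the existing 16 MB message cap. Python's standard library has no snappy module, the very gap raised about pyethereum, so zlib stands in below purely to show where the hooks sit; the actual EIP specifies snappy.

```python
# Illustrative sketch of the EIP-706 hooks, not any client's real code.
# zlib substitutes for snappy only because Python's stdlib lacks snappy.
import zlib

SNAPPY_PROTOCOL_VERSION = 5          # devp2p bumps from 4 to 5
MAX_MSG = 16 * 1024 * 1024           # 16 MB cap devp2p already enforces

def use_compression(our_version: int, their_version: int) -> bool:
    """Both peers run the older of the two advertised versions."""
    return min(our_version, their_version) >= SNAPPY_PROTOCOL_VERSION

def wrap_outgoing(payload: bytes) -> bytes:
    # The result of this compression round is what gets encrypted/framed.
    return zlib.compress(payload)

def unwrap_incoming(frame: bytes) -> bytes:
    # Decompress incrementally so a hostile peer can't force a huge
    # allocation; oversized messages are simply thrown out.
    d = zlib.decompressobj()
    payload = d.decompress(frame, MAX_MSG)
    if d.unconsumed_tail:
        raise ValueError("decompressed message exceeds 16 MB, dropping")
    return payload

assert use_compression(5, 5) and not use_compression(5, 4)
msg = b"ethereum block bodies compress rather well " * 1000
wire = wrap_outgoing(msg)
assert unwrap_incoming(wire) == msg
print(f"{len(msg)} bytes -> {len(wire)} bytes on the wire")
```

Because the check is on the negotiated version, a v4 peer never sees compressed frames, which is the backward-compatibility property claimed above.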
I guess I was unsure if Piper's question was effectively responded to. Also, the other thing is that Andre and a few other people had a few sub-questions, like whether to enforce a 16 megabyte limit on the compressed size, and a few different things. Oh, never mind, I see, okay, cool. Peter, you commented a couple hours ago about updating the spec, replacing lazy decompression with the 16 meg limit?

Yeah, so currently the spec states that the decompressed size is limited to 16 megs, and this essentially enforces the same limits as the current devp2p protocol. This is nice because we don't need any extra effort around lazy decompression and we can basically use the same hooks as we currently have. So if the plain-text message, that is, the decompressed message, is larger than 16 megs, then it's just thrown out.

Okay. Arkady, do you want to elaborate on the comment you just made about having this be at the sub-protocol level?

Yes, I've just said in the comment, suggesting that it should be negotiated for each sub-protocol, because some sub-protocols might want to disable that.

And why would they want to disable that? Is that for different sub-protocols that would be implemented within the client?
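One reason the 16 meg cap composes so cheaply with the existing hooks is that a snappy raw block begins with the decompressed length as a varint preamble, so a client can reject an oversized message before doing any decompression work at all. A minimal sketch of that check (helper names are made up):

```python
MAX_MSG = 16 * 1024 * 1024  # 16 MiB cap, matching the existing devp2p limit


def decoded_len(buf: bytes) -> int:
    """Read the uncompressed length from a snappy raw block's varint preamble."""
    n, shift = 0, 0
    for b in buf:
        n |= (b & 0x7F) << shift
        if not b & 0x80:
            return n
        shift += 7
        if shift > 32:  # snappy lengths fit in 32 bits
            raise ValueError("malformed length preamble")
    raise ValueError("truncated preamble")


def check_size(buf: bytes) -> None:
    """Throw the message out, without decompressing, if it would exceed the cap."""
    if decoded_len(buf) > MAX_MSG:
        raise ValueError("decompressed message larger than 16 MiB, dropping")
```

This matches the "same hooks as we currently have" point: the size check happens up front, exactly where an uncompressed message's length would have been checked.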
Well, yeah, because some of them already do. Well, at least we have the one in Parity that already uses compression on a different level, and we don't want to compress everything twice.

Well, so one of the problems with this, and I explicitly mentioned this scenario in the EIP, in the spec, is that all of a sudden, if I do it this way, every single sub-protocol needs to be aware of and needs to care about the compression. So that's already getting murky. And regarding the speed, as Nick was pointing out, snappy is essentially almost as fast as a memcpy if it doesn't actually compress. The Go implementation of snappy, if you make it compress JPEG images, which are probably not really compressible, can "compress", so to say, at about 15 gigabytes per second on a single core. So with those numbers, I'm wondering whether it matters if it does a round or not.

I'm not suggesting completely pushing it down to the sub-protocol implementation. It would still be handled by devp2p, but the sub-protocols would just declare one bit, whether they want compression or not, and that's it.

Well, yeah, but then the internal protocol needs a heavy update, because all the capabilities, all the handshake messages need to change. So currently the handshake message can be extended with extra fields, but I'm not sure whether the capabilities field itself can be further expanded. I don't think the protocol supports that, which means that doing such a thing would require a major update to devp2p.

Well, you're updating the version anyway.

Well, sure, but if it requires a ton of updates from everybody, then it's much less likely to be implemented by anybody.

Supporting compression also requires an update from everybody, right?
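For context, the per-sub-protocol variant being debated might look something like the sketch below. This is purely an illustration of the suggestion, not anything specified in the EIP; the types and names are invented, and the real devp2p Hello capabilities carry no such flag today, which is exactly the compatibility concern raised on the call.

```python
from typing import NamedTuple, Sequence


class Capability(NamedTuple):
    name: str          # e.g. "eth", "les"
    version: int
    wants_compression: bool  # the hypothetical extra bit per sub-protocol


def compress_for(cap_name: str,
                 local_caps: Sequence[Capability],
                 remote_caps: Sequence[Capability]) -> bool:
    """Compress a sub-protocol's messages only if both sides flagged it."""
    local = {c.name: c.wants_compression for c in local_caps}
    remote = {c.name: c.wants_compression for c in remote_caps}
    return local.get(cap_name, False) and remote.get(cap_name, False)
```

The counter-argument in the discussion is that since snappy on incompressible data is close to a memcpy, the switch buys little performance while forcing every implementation to extend its capability handling.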
Yeah, but adding 20 lines of code to your code is much simpler than adding 200.

I don't think this is much. I mean, the question is, is it worth the complexity? Because snappy is ridiculously fast, and it's going to be a big gain in most cases. Is it worth complicating things to make it switchable? HTTP regularly, you know, gzip-compresses stuff that's incompressible, and it just works, because it's the simplest way to implement it.

Well, HTTP has headers that allow turning the compression on and off, right?

It does, but I'm saying that in practice a lot of servers simply serve everything compressed, and gzip is a whole lot more expensive than snappy.

Sure, but my point is about giving a choice.

But do we need the choice? Like, is it actually important?

Well, I said why.

But given that snappy is almost as fast as just memcpying it, and your processor and memory bandwidth are orders of magnitude faster than your network, I don't think the impact is actually meaningful.

Yeah, I think, Arkady, the thing is, if there's nothing currently remotely faster than that, then implementing it as a sub-protocol option would only complicate things, rather than having everyone go with the same thing, which for the most part shouldn't be superseded for a long time. If there's something already implemented that's faster than snappy, that everyone can come on board with, that sounds like it would be better. Is there something implemented at a sub-protocol level that could be implemented everywhere, that would be faster than snappy?

No, that's not my point. I'm not suggesting alternatives to snappy; we're already using snappy. It's just that I don't want to compress the data twice with snappy.

I mean, I understand that that's a bit of a code smell, but given that it's not actually going to have a meaningful impact on performance, does it matter?

I don't know. It's a little bit of extra effort.
I don't think it will be more than another 20 lines of code.

Okay, so it sounds like what you're saying is it's doable, Arkady. It does add a little bit more complexity to your client, though.

Yeah, but it's minor complexity. It's not that complicated.

It's complexity to everyone's client, not just Parity's client. Or no, what I should say is, the complexity would be that they'd have to change their sub-protocols to not compress things twice. That's how I understood it.

I don't think it's on the table to change the Parity protocol. Yeah, it's not only about the network; it gives the same format for storing the files and all that. Well, yeah, anyway, we can discuss it in the GitHub issue.

Okay. Yeah, I'm okay with discussing it further in the GitHub issue and getting a few more opinions, and then at the next core dev meeting, let's bring it up again. I think by then there will also be some more people who utilize devp2p, who might have some other opinions on it.

That's not a problem. Also, we can get better descriptions, because I think there's just a little bit of confusion about what the arguments are about.

So we'll talk about it more on the EIP. That's a good idea, Arkady. Okay, thanks.

So, let's see, are there any other topics or agenda items? Okay, great. Can't think of anything. Yeah, I'm good too.

It looks like the estimate is around the 18th; we will be doing a testnet fork. Potentially that is block number 1.7 million, but that obviously can change. At 1400 UTC this Wednesday, we're going to have a Gitter chat room discussion on what block number the testnet fork will be. And then, after the testnet fork happens, we will decide on a mainnet block number. Hopefully, and this is super early, we'll decide on a mainnet block number between the ninth and the sixteenth, I guess, something in early October.
Basically, the ninth has been thrown around as a date, but that's nowhere near official. So the mainnet block number will be decided after we start the testnet, it sounds like, although we might have some recommendations beforehand.

Thanks, everyone, for coming. Unless there's anything else... Everybody, have a great day. You too. Cheers. Bye. Cheers. Thanks, everybody. Bye-bye.