Hello. Hey, good morning everyone. Justin here; I will only be able to attend for like 15 to 20 minutes before my plane boards, but I'll stay on as long as I can. Okay, cool. I also didn't really have very many topics. Maybe we continue the DI discussion if we get someone that's not Danno weighing in, but other than that, not too much. One general note: I'm going to follow up with Hyperledger on the funding of the CI next week. The Consensys funding will not run out as such, but the excess credits expire in March, so I'll be sorting out on my side what to do between the expiration date, which is mid-March, and the upcoming withdrawals, which are probably a couple of weeks after that.

Good morning. Good afternoon. All right, good morning everyone. I don't think there's too much of a set agenda today. Does anyone have any items they'd like to discuss off the cuff? Not really. No, just to say that we've got two developers who have been ramping up; you met one of them last time, and the other as well. I was hoping to have them on this call, but it's Valentine's Day, so they were busy. No problem there. As they keep getting up to speed, definitely take advantage of the Discord and ask as many questions as needed; we'd love to get those folks up to speed.

Gary, you wanted to chat about releases? That's a great topic. Yeah, I was on PTO, so I'm still trying to catch up on the traffic about 22.10.4 and the RC2; a quick recap would be helpful. Sure. We think we're going to scrub 22.10.4 in general to focus on 23.1.0. I don't know if that changed yesterday; I was out for the day as well, and Karim, Ameziane, and some others were working on it. Last week we had a pretty large regression that made it into main around the getBalance fix that you had put in, Gary. We sorted that out by reverting that fix, and there's another approach from Karim to finish fixing getBalance and the layered world state issue; Karim can discuss that more after this. Our plan was to cut 23.1.0 and potentially still cut 22.10.4, but since we're getting close to the Sepolia fork and some other stuff, it doesn't really make sense to put out two releases, especially when one is becoming somewhat mandatory for mainnet going forward. So that's where we landed: potentially just scrubbing 22.10.4 to focus on the quarterly, since folks will have to update to test on Sepolia and to run any of the Shanghai-related stuff on mainnet anyway. If I've misrepresented that based on new info, that would be good to know, but I think Karim has the most recent status.

Just regarding the issue: at the beginning we decided to revert Gary's PR to fix the layered world state issue, but we still had getBalance returning zero. So Ameziane and I worked yesterday to find a fix, and we found one; with the latest PR we merged with Ameziane, getBalance is okay and we no longer have the layered world state issue. Regarding the PR, Gary: the issue was related to the method we were using before, which was impacting the trie log persistence. We had a lot of trouble finding the reason, but now it's okay.
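For concreteness, a minimal sketch of the kind of sanity check discussed here, assuming a locally running node with HTTP JSON-RPC enabled and the web3j client library. The class name and placeholder address are invented for illustration; the symptom being checked (balances reading back as zero) is taken from the discussion above.

```java
import java.math.BigInteger;

import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;

public class GetBalanceSmokeCheck {
  public static void main(String[] args) throws Exception {
    // Point at the node under test (default Besu HTTP-RPC port assumed).
    Web3j web3 = Web3j.build(new HttpService("http://localhost:8545"));

    // Placeholder: substitute an account known to be funded on this network.
    String address = "0x0000000000000000000000000000000000000000";

    BigInteger balance = web3
        .ethGetBalance(address, DefaultBlockParameterName.LATEST)
        .send()
        .getBalance();

    // The regression manifested as eth_getBalance returning zero.
    if (balance.signum() == 0) {
      throw new IllegalStateException("getBalance returned zero; possible regression");
    }
    System.out.println("balance = " + balance);
    web3.shutdown();
  }
}
```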
Yes, I can explain more, but you'll have the full description in the PR if you're interested. Yeah, I reviewed the PR yesterday and got the gist of what the problem was with the layered world state, so that's great. So it sounds like we're just going to do a 23.1.0. We've already burned an RC2, and we're going to potentially burn a third one with this fix? The only intention was to cherry-pick this fix into the next RC and then release with nothing else. Okay. We can't really delay the quarterly anymore, so: fix the last getBalance stuff, test that again, and then put out the quarterly. Okay, so we're going to cut a 23.1.0 final, burn that in, and then release it. Yeah, let's get some thorough reviews on Karim's PR, but I believe that's a good plan forward, and hopefully we can get something out around the latter half of the week. In an extreme case we can release early next week, but we'd prefer to avoid that.

For timing on the Sepolia fork: it's next month, and we need to have a client release ready on Monday for the EF blog post. We can use a develop image or a special pre-release image if we want; I have to check with Simon on the specific status of the Shanghai components, because I know not all of them are merged to main. So we might cut a 23.1.1 pre-release image specifically for that and let people know not to use it. People will probably still pick it up for some reason or another, but I think that was going to be the plan, rather than trying to rush something out in main for next Monday that was not ready. Cool.

So that's the release. As a follow-up from the last contributor call, I've written up notes on what we decided as far as the grant proposals from last week. I'll share that in the contributor channel this week and put up a wiki page so that we have a process in place to start pursuing grants. It's pretty much in line with what we discussed last week: if it's a retroactive grant, we determine on a case-by-case basis how to divvy up the funds, probably just among all the contributors, frankly, if that's the scope of the grant; and going forward it's pursuant to whoever is doing the work for the grant, with the work split there. So basically making use of splitter contracts, like we're already doing today, and then letting the grantor be the adjudicator in any failover mode we might need to clear that proposal. I'll probably send it out tomorrow; I just need to clean up the text, and it's very short. We did pass the first round of the Optimism retroactive grant funding, so I might just continue on in that process.
And since it's retroactive, and all the contributors worked on the grant, I would basically be inclined to split it evenly among the contributors and set up another splitter contract like we've done before, just among the main maintainers of the code base, and then tie that to whatever organizational overhead we need, or anything else, say if we need a separate address or a different address just for maintenance. This is just an example, but I'll put out more information, because the grant needs to go in next week. It's very straightforward; it's like 1,200 words, so it's not going to be much. I'll send out something for everyone to review if they want, and if not we'll just keep pushing that process along, with input and transparency in the contributors channel. Cool.

Okay, so we discussed the release, we discussed grants. I don't think we necessarily need to continue the DI discussion right now; we didn't really make much progress. Justin, I know you were toying with some of those notions; anything to share on the DI or modularity front? I've been updating the wiki quite a bit, trying to form somewhat more concrete plans and thoughts around the things we've been discussing, so I will share those links in Discord. It's an ongoing process, and contributions from everyone else are obviously always welcome. Christian, this might be a good point to get some of your team's perspective on modularity and carving out some of those private-network components, just as a thought. So, Justin, maybe we can re-share those links in the contributor channel so that everyone can take a look, because I think it'll be most pertinent to the use cases we're trying to support across a number of network types. Sure, I'll share it there and bring it up in my standup tomorrow morning as well. Awesome, thank you.

Okay. Danno, one thing we discussed before you joined: I'm going to be following up with Hyperledger around the CI funding now-ish, because the Consensys extra credits on our CI account that are currently funding this will be running out in mid-March. So we're going to be pursuing additional CI funding, and at the same time I'm going to follow up with the HLF to make sure everything is in line to start using those funds as soon as withdrawals have gone through. Sounds good.

Cool. Any other topics from the group? On the next contributor call I'd like to walk through some notional roadmap material and talk through what that looks like; I'm at an airport right now, so I don't think it's a good use of time today. Any other topics? Danno, I know you came off mute there. Do we want to create some new rules to avoid regressions, or to add more steps, more tests? I don't know; I just wanted to discuss whether we want to change something or not. I was juggling some stuff.
Sorry, I just wanted to ask whether we want to change the way we test PRs before merging, or something like that, in order to avoid regressions. Maybe it's not always possible to detect every regression, but do you think we need to modify something to avoid this kind of regression? It seems we've had quite a lot of regressions lately, so maybe we need to change something.

It depends on the scope and where the regressions are. How big is the test we would need to run to find the regression? We don't want to run a full three-day burn-in for every PR. I was thinking maybe something like: if someone is modifying the sync part, test just a checkpoint sync or snap sync; if someone is modifying, I don't know, the EVM, run a node for one or two days. I'm trying to work out whether we can trust only the reference and unit tests, or whether we sometimes need to run a node to validate the PR. It seems it's sometimes too complex to reproduce all of the use cases with only unit tests or reference tests; sometimes we need to run a node. Right; for example, the recent regression with the KZG library was a memory leak. There's no unit test to catch that, because we're not going to run a test for six hours. Yeah, something like that: maybe before merging a PR we run a node for a day or so. It may slow down the process before merging a PR, but maybe it adds more protection. And maybe push the logs to the PR, so that people can check the logs and maybe find something, I don't know.

I think we can invest more time in the nightlies. The nightly nodes used to be a little higher value, and I don't think we're making good use of them right now. Instead of blocking merges of PRs, we could invest more time in the nightlies, discover things in the nightly process, and then come back and do some bisection if we find issues. But if we have big changes, maybe we should just raise the bar for merging those on a case-by-case basis. I like nightlies, but I think sometimes it's too late, because if someone pushes something with a regression into the main branch, you detect it maybe one day later in the nightly, and meanwhile other people have merged main into their branches and will think the issue is coming from their own code. So we can lose some time because of that. I would prefer to detect it before merging into main, but maybe it's not possible because the tests are complex. But this also runs counter to the desire for smaller PRs; smaller PRs are easier to review, and the more friction you put on a submission, the longer it takes and the more PR sizes grow. Then, as standard practice, you get these monster 10,000-line PRs that are hard to merge and have huge impact on other submissions. It's a tricky balance between keeping development going, keeping contributions up, getting good-quality reviews, and catching regressions. We don't want to move so far that we shut down external contributions because we insist on running a test for six hours before anything is submitted. And then there's the cost of running these tests on every PR; that's going to cost even more CI, or whatever system we run it on.
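One low-cost version of the tiered-testing idea being discussed here, sketched with JUnit 5 tags so the per-PR build stays fast while long burn-ins move to the nightly job. This is a generic illustration under assumed names, not Besu's actual test layout or CI wiring.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class SyncRegressionTests {

  @Test
  void cheapUnitCheck() {
    // Runs on every PR: fast enough to gate merges without adding friction.
    assertTrue(true);
  }

  // Excluded from the per-PR build and picked up only by the nightly job,
  // e.g. Gradle per-PR:  useJUnitPlatform { excludeTags 'nightly' }
  //      Gradle nightly: useJUnitPlatform { includeTags 'nightly' }
  @Tag("nightly")
  @Test
  void longRunningSyncBurnIn() {
    // A multi-hour checkpoint/snap-sync burn-in would live here; too slow
    // to block merges, but bisectable when the nightly run flags a failure.
  }
}
```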
Okay, so there's got to be some kind of middle ground between these two. I absolutely agree with Danno. Maybe the goal is to chunk the PRs into smaller components that test more easily with our existing test apparatus, or is that not necessarily useful? A lot of the problem is that many of these PRs, especially the ones intended to fix regressions or bugs, have to touch a ton of different stuff. And to Karim's point, we could talk about specific modules: if you're impacting the Bonsai code, maybe there's a more thorough review process there, and likewise for other error-prone or complex areas of the code base. I don't know if there's an appetite for that at all, because it would add more review overhead; but that might be something we want to do anyway: instead of adding more testing processes, add more review process, something more than a glance-over. We thought about this before with specific code owners, but we never really moved anywhere with that proposal.

I'm always going to prefer more testing over better reviews; reviews are a very subjective and very difficult solution. In light of that, we do have a wide variety of scopes to our testing: we have unit tests, systems-integration tests, one-offs, burn-ins, and more systemic tests. Maybe we just need to put a little more thought into the tests we write and which level they sit at, as part of the design process when we work on a PR. Any thoughts on that? I think a lot of the time our PRs lack context in the description. Uploading artifacts and the test methodology you used, like local burn-ins or similar, might also be handy for other people trying to review complex code. And yes, sometimes just sharing in the PR the log of a node running the PR can help discover an issue, because sometimes the reviewer will find a problem just from the logs. We may still miss things; I don't know if we'll find a solution today, but I just wanted to discuss it. Shall we move on, unless there are any more comments?

Any other topics from the group? Have we abandoned the approach of a forked RocksDB, or did we just kick that can down the road for a future release? The RocksDB developers said that normally it will be in the next release, which will be 8.0, and I think it will be available maybe at the end of the month or next month, because it seems they manage several releases at the same time. I checked the timing of their past releases, and it seems it will likely be the end of the month or next month. I'm not sure, but I don't think it's too long to wait for the next release. Okay. I noticed they already had a 7.10 that was unreleased, and the 7.9 series was unreleased, so I thought it was going to take longer than that. It seems they do several releases at the same time; they can do 7.9 and 7.10 and, at the same time, release 8.0. I checked, and every time they have several releases going in parallel. Okay, cool.
Do we want a backup plan in case it's not released within the next month or so? It's a pretty significant upgrade to get out. I guess the backup plan would be to continue trying to do the forked release, which seemed to be a pain. Yeah, the worst thing about their releases is that their tooling requires builds on multiple architectures that have to be joined back together into a single artifact. The line we drew in the end was that if we can't just build using the tooling they have in the repo directly, we didn't really want to commit to a Frankenbuild process. But I think we can keep that in our back pocket if for some reason this gets delayed too long. Okay, that's fine with me. Let's revisit this in like two weeks, at the end of February, but I'd like to have it out this quarter, so maybe we'll need to do some shenanigans.

All right, any other topics, or shall we hand everyone back a half hour? One thing I'd like to discuss: I did a couple of commits to the EVM tool that are under review, for t8n and b11r, and one of the things I'm also experimenting with is GraalVM, getting support for that. I want to know what the appetite is for some of the broader-based changes that might be required to support GraalVM. One of them is to move away from using the Java Security services and just instantiating our encryption directly; as it stands we're specifying a provider and a specific algorithm anyway, so the Java Security machinery is strictly overhead, and it presents reflection issues in GraalVM (see the sketch below). Another is moving all references to Log4j into a very few classes and making sure we can run without it, effectively, because GraalVM hates Log4j for some reason. And there's another small cleanup needed across the systems. Oh yeah, and upgrading Bouncy Castle; we're about two versions behind on Bouncy Castle. I've also noticed our dependent libraries, the stuff coming out of Teku as well, are still two versions behind. The reason is they changed their artifact names from -jdk15on to -jdk18on, so our automatic dependency checks don't catch it anymore: we have the most recent version of the artifact under the old name, because the name changed on us. Is that something we should probably look at after the merge... not the merge, after Shanghai? So what's the perception on the rest of the team of the appetite for supporting changes like this?

Could you elaborate a little more on the upside of using Graal? So, this isn't shifting everything to Graal; this is entirely an exercise for the command-line tools. Performance-wise, the existing HotSpot compiler is the correct choice for something you're going to leave up for days on end, because you'll get better performance. But this is about integrating Besu into testing processes, like having Besu produce some of the reference tests. We needed to create the t8n and b11r tools to support that, which my existing PR adds to the EVM tool; but to support these tests, they want to start up and run them consecutively: start it up, run it, shut it down; start it up, run it, shut it down.
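A hedged sketch of the provider-lookup change described above: replacing a by-name JCA Security lookup with direct instantiation of the Bouncy Castle lightweight API, which avoids the string-keyed, reflective service resolution that trips up GraalVM native images. The class and method names here are illustrative, not necessarily the code paths Besu actually touches.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.Security;

import org.bouncycastle.crypto.digests.SHA256Digest;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class DigestLookupExample {

  // Before: algorithm and provider resolved by name through java.security,
  // which relies on runtime service discovery.
  static byte[] viaJcaLookup(byte[] input) throws Exception {
    Security.addProvider(new BouncyCastleProvider());
    MessageDigest digest = MessageDigest.getInstance("SHA-256", "BC");
    return digest.digest(input);
  }

  // After: the concrete implementation is constructed directly, so an
  // ahead-of-time compiler sees a plain constructor call instead of a
  // string-keyed provider lookup.
  static byte[] viaDirectInstantiation(byte[] input) {
    SHA256Digest digest = new SHA256Digest();
    digest.update(input, 0, input.length);
    byte[] out = new byte[digest.getDigestSize()];
    digest.doFinal(out, 0);
    return out;
  }

  public static void main(String[] args) throws Exception {
    byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
    System.out.println(MessageDigest.isEqual(
        viaJcaLookup(msg), viaDirectInstantiation(msg))); // prints true
  }
}
```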
And so for one of the test tools an accommodation was made to stream input, for the fuzzing; Marius got it to stream in some of the fuzzing stuff for the state tests. But that's going to be a harder lift for the execution-spec-tests; that's a new repo that's spinning up, and it looks to be the primary way that new reference tests are going to show up. Within my GraalVM work, it took an eight-minute run down to about 10 seconds, and almost all of that is startup time and precompile setup. None of these tests exercise the EVM for performance reasons; they exercise correctness questions: I give you this transaction, what's the result, shut it down, move on. So 90, 95% of the run was stuck in startup services and 5% was the actual processing, and then shut it down and bring it back up; probably even worse than that, like 98%. The changes that would be needed to support a Graal compilation of the EVM tool are for the most part fairly benign, but they are fairly broad: changing the Log4j support, moving it all behind SLF4J while keeping Log4j for the regular runtime (a sketch of that follows below). To be clear, I'm not recommending changing anything for the regular runtime, just making it so that GraalVM can compile it. That way we could also experiment with running GraalVM more broadly, but I can almost guarantee the performance isn't going to be as exciting as we would hope. Those were some of the things I was looking at, and I wanted to get feedback before I went too far down that rabbit hole.

Thank you. Yeah, I kind of agree with the notion that all those things are things we should do anyway: upgrading Bouncy Castle, we definitely need to do that; the Log4j stuff, sure, get away from that specific interface. So yes, I have an appetite for that. Are we going to get rid of the retesteth subcommand along with that, if we've got a t8n tool we can use to run the reference tests? Once we have it running, I would imagine we would. Geth doesn't like running retesteth, despite the fact that Dimitry really, really wants it; and the people putting the new execution-spec-tests together are kind of competing with the testing group. I don't know who's going to win that battle, but it'll be a match to watch. They're completely ignoring the retesteth APIs and focusing entirely on t8n and b11r, and maybe t9n. t9n is transaction validation, t8n is state transition, b11r is block building, and I expect we'll probably get another one in the mix for code validation when EOF ships, in either Cancun or Prague. So getting these command-line tools working, and performant, matters. Nethermind doesn't have support for this, and it would strengthen our story that we participate in the standards-making process, because we'd have standards-compliance tooling at about the same time geth does, and we could use it for differential testing a lot more easily when spec problems come up. Great.
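The Log4j isolation described above amounts to logging only against the SLF4J facade, so the concrete backend becomes a swappable (or omittable) runtime detail for the native-image build. A minimal sketch; the class name is invented for illustration.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EvmToolEntry {
  // Only the facade is referenced; no org.apache.logging.log4j imports.
  // The regular runtime can keep its Log4j binding, while a GraalVM build
  // can ship a simple or no-op SLF4J binding instead.
  private static final Logger LOG = LoggerFactory.getLogger(EvmToolEntry.class);

  public static void main(String[] args) {
    LOG.info("starting t8n run with {} argument(s)", args.length);
  }
}
```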
One thing I did notice is the difference between the reference test suite we have and the retesteth tool (not the subcommand, the retesteth tool itself): it tends to be not a bit more strict but a lot more strict. We can pass our own reference test suites and fail miserably under the retesteth tool, so it would be beneficial to be able to use retesteth directly as part of our reference test pipeline. Yeah, I think the retesteth issue there is parallelism: we still have some static fields that cross-contaminate across different forks when you get more than one of those running at the same time, and then there are just load issues when running in parallel in the same VM. With t8n there's no such concern, because you own the entire process; with the GraalVM changes, that is, of course.

Okay. There's a little issue on Hedera I need to work through first; once I get through that, I'll start going down that path again and get PRs ready for the smaller changes. I did post the giant diff, just for posterity in case I forgot and never came back to it, of all the changes that currently make GraalVM work with the EVM tool. I'll break those out into smaller ones and get those going, probably ramping up after ETHDenver. That sounds great. You don't anticipate any other runtime improvements, just the testing apparatus? In the short horizon, that's the plan. I have a long list of performance improvements we need to do, like moving away from the cached changes to a journaled change list (sketched below); there are a lot of internal performance improvements to make, but as far as external-facing behavior, you shouldn't notice anything except that it should be faster, with fewer exploitable configurations. And on exploits: nothing's worse than a Keccak, by the book. That's my standard for whether something is an exploit or not: is it worse than doing a large Keccak? If it's not, it's not an exploit, and there are better ways to waste your gas. That sounds right.

So, Ameziane, did you do any looking at GraalVM? I imagine you've explored it at least a little on the main performance track. Yes, actually, I'm currently testing different implementations, so I'm running different nodes on OpenJDK, OpenJ9, Zulu, and GraalVM. I already have the sync metrics, and they're quite similar: the sync times are very close, and the block processing times are very close. So nothing to exploit from that yet. But I had some issues, because all the nodes ended up with corrupted databases due to the last regression, so I had to sync from scratch. Now all the nodes are running and I'm analyzing different metrics, and it seems we do get different block processing times with different implementations, but I have to dig in to understand what's going on: why, say, GraalVM is faster than another implementation. I totally understand the other use case, as GraalVM is known to be very fast at startup; that's like the first known use case for GraalVM: if you want a JVM with a fast startup, choose GraalVM. But yeah, I'm currently testing the different implementations and I will share the results.
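On the "journaled change list" item in the performance backlog mentioned above: a hedged sketch of the general idea, replacing copied cache maps with an undo journal that is replayed backwards on revert. All names are invented for illustration; Besu's actual world-state updater is more involved.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class JournaledStore<K, V> {
  private final Map<K, V> state = new HashMap<>();
  // Each journal entry knows how to undo exactly one write, newest on top.
  private final Deque<Runnable> journal = new ArrayDeque<>();

  public void put(K key, V value) {
    V previous = state.put(key, value);
    journal.push(() -> {
      if (previous == null) {
        state.remove(key);
      } else {
        state.put(key, previous);
      }
    });
  }

  public V get(K key) {
    return state.get(key);
  }

  /** Records the journal depth at the start of a nested scope. */
  public int mark() {
    return journal.size();
  }

  /** Reverts every write made since the mark; committing is simply not reverting. */
  public void revertTo(int mark) {
    while (journal.size() > mark) {
      journal.pop().run();
    }
  }
}
```

Compared with copying the cached map for each nested call frame, the cost here scales with the number of writes rather than with the size of the cached state.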
Okay, so I should probably clarify: I'm using GraalVM's AOT compilation, as opposed to GraalVM as a dynamic compiler. I ran the latter years ago and came to some of the same conclusions: it wasn't exciting enough to push me down the rabbit hole given the other things I needed to chase. It was a little bit better, but not spectacularly better. That was using GraalVM as a dynamic, HotSpot-style just-in-time compiler. The work I was doing for the EVM tool was static compilation: compiling it down to something like a 30-megabyte binary that you run on the CLI, completely disconnected from any dynamic JVM environment. And when I say GraalVM is going to have performance issues, that's where: all the compilation is done ahead of time, and there's a whole lot of optimizations it can't figure out without the burn-in knowledge of what's being used hot and what's not. So when I said GraalVM in my stuff, I was specifically talking about the GraalVM ahead-of-time compilation modules. The dynamic GraalVM has some good stuff and some weak stuff; picking a VM to run is tricky because you win some things and lose some things. Yeah, we picked up on that too, but it will be interesting to see the second round of results, how it goes, and what, if anything, makes sense to take on. Anyway, cool, that'll be good. Ameziane, please definitely share those metrics in Discord when we have them.

Any other topics, folks? Great. I will share these notes out after the call at some point today. Happy Valentine's Day to all; please enjoy the rest of your weeks, and we'll see you all in Discord. Thanks, everyone.