Hey, Dino. Hello, I was checking to make sure I had the time right, being the only person on. Yeah, we just ran a little long on the show and tell; folks will show up here shortly. I think this is going to be a fairly quick one. There weren't a lot of items that anybody added to the contributor call agenda that I saw since the last modification; I'll refresh and make sure. I should have added my stuff, but I was taking the weekend off, finally. Okay. Good. So, here's the antitrust policy; everybody review the antitrust policy at your leisure. The meeting is being recorded, so please mute unless you're speaking, and if you have a question, use the raise hand feature. General announcements. I don't know if it's really a general announcement, but something I was hoping we could get some resolution on today is what we want to do about the GitHub runner limits that Hyperledger has run into. I think GitHub has started enforcing runner limits that they previously didn't, and in order to clear a logjam of runner utilization, what we've had to do in a variety of different Hyperledger projects is set up self-hosted runners. Currently Consensys has five amd64, basically EC2, instances that are augmenting the build load, and Ry has also clued us in that there are arm64 runners and so forth. This only affects the GitHub Actions stuff, so this is CI, but not Circle CI. I don't really know what the cost associated with the runner limits is, upgrading the GitHub plan versus the cost of just providing project-specific runners, but I'm certainly looking for some input on that; if anybody's got any more color, I'd love to hear it. Unless Ry is going to come on, I don't see more color happening. I'm not part of the TSC anymore because of the Ethereum calls; I don't think any of us are. So really we need to hear from Ry. Okay.
So maybe what we need to do then is create an action item to follow up with Ry, to see what we can do about this. I think it's been kind of ad hoc; I had a couple of conversations with him about it, and it seems like this is being treated as a DIY problem that is being outsourced to the individual projects. Yeah, to your point, I think in the absence of Ry we probably won't make a lot of headway with that, so I'll take a follow-up item to discuss it with him. Release updates. We had planned a 23.4.0-RC1 that was going to be delayed until after Shapella, just so that we could prevent confusion among stakers and node operators about what the correct version to run is for the Shapella fork. What's planned for RC1 right now is RocksDB 8, transaction pool layering, and the work that Justin's been doing on Dagger to start daggerizing BesuCommand and metrics, which I think is his first cut with Dagger. So we're still planning on delaying RC1 until after Shapella; however, a couple of issues have cropped up since Friday that are hotfix-worthy. So we're planning a 23.1.3 hotfix release, and there's actually a hotfix release burning in right now that includes this particular issue: execution payloads that come in duplicated but with different validator indices. That created a problem resulting in missed proposals for Nimbus-and-Besu combinations in certain rare cases, but we've got a hotfix burning in for that. Tentatively we're looking to add another fix to that release to address a Bonsai issue related to EIP-158. I think that came up yesterday. Daniel, I think I saw some activity from you on that; Diego had uncovered a problem with Bonsai in certain situations.
I'm going to clarify whether it was Forest and Bonsai or just Bonsai, because that would help us narrow down where to search for the bug. I mean, this is an issue for Classic; it's not an issue for mainnet, but because Classic is one of our supported networks, we should support it, however it looks. Yeah, if we do a full sync with Bonsai, then right around the time of the fork that included EIP-158 we actually hit the same problem, if you're doing a full sync. Is a full sync even possible now with the CL setup? Yeah, it's possible; it just takes a really long time. Not just a long time, but how do you get your first head block? You start with the full sync going forward as you normally would until you get to the merge block, and then you backward sync from wherever the CL head tells you is current. So it's kind of in a few parts. When I run it, it won't download any blocks until it hears back from the CL. Anyway, that's neither here nor there. Yeah, with the normal sync strategy that's how it's going to behave, but if you've specified full sync it will actually forward sync until you get to the merge; I'm actually running a node that way right now. Okay, it's been a long time since I've done a full sync. Karim, I think you put this PR up for this, right? You were able to identify that it addressed the issue, at least with mainnet. Yeah, I'd just prefer to have this one in the next release, and maybe we'll see later if we really need to keep it. I think Ameziane will check what the performance impact of reverting this modification will be; I don't think it will be huge, but we need to check. I checked with and without this modification, and by reverting, the node was able to full sync again. Do you think the burn-in will be sufficient for quantifying any performance deltas, or do we want to do that separately? I guess it just adds extra data points.
Yeah, I'm just going to do some performance testing on sync nodes. I don't think it's necessary to sync from scratch with snap sync or checkpoint sync; since I already have some sync nodes, I'm just going to check whether this revert has a big impact on performance, so it won't take a lot of time. Maybe it's too late, but I just wanted to ask what you think about also adding the PR I shared in chat. It relates to a healing issue. I did the fix because sometimes we can have a huge number of stack traces: when a heal happens, the user may have an inconsistency, and if the user has an inconsistency and a heal is triggered, they can get very bad logs with a lot of stack traces, and the node will not be in a really good state during that time. So I did a fix for that, and I wonder what you think about maybe pushing it into 23.1.3. Do you have the PR number off the top of your head, or could you find it? I shared it in the chat. I see: 5266. I don't think it's in a release already, because it was two weeks ago. Okay, I see. So this is more of a cherry-pick from main. Yeah, it's a small modification, so it's not big; there's not too much to add. But it's as you want, honestly; I would prefer to have it in. Yeah, I think we can probably discuss that on Discord to make sure we have consensus there. I'm guessing, putting words in his mouth, but Simon might have some misgivings about adding additional scope to the hotfix specific to Shapella; I'd rather let him voice his own concerns. I think both 5330 and 5266 make good sense, and we can have that conversation on the contributors channel. Okay. So for this hotfix release, tentatively, in the show and tell on Chupa this morning we discussed what the value was of trying to get it out.
I think the consensus on Chupa was that this would be better as a fast follow, rather than trying to rush a release and get communications out for what is probably a fairly rare edge case that only results in a missed proposal, not a consensus bug. If 5330 were more concerning, that would be more of a reason to push it quickly, but I'm still personally of the opinion that this should be a fast follow. Do you have any thoughts on that, Daniel? It looks like the fix for 5330, which is the Classic bug... I think the short-term fix is to go back to 22.10.x, because it looks like the bug was introduced in January. I don't think we need to fast-track it, because there's no fork coming, and 22.10.x should be fine for Classic. So I'm okay with decoupling 5330 from 23.1.3 in that regard. And maybe, I don't know, I'll let you guys decide, but maybe we publish it anyway and just say: if you run Nimbus, upgrade; everyone else, you don't have to. Do you think before or after Shapella? If we don't do it and someone misses a proposal post-Shapella, it's bad for us if we knew the fix was out there. At the least, maybe not a full release, but we should have some build they could switch to that we produced, even if we haven't fully validated it. I don't know; these are marketing questions above my pay grade. I think technically we probably should pull 5330 out of it, because there's a viable workaround that doesn't involve pushing a fix as quickly as possible, and I think the Shapella upgrade puts it in a different category than the Classic issue. Do you not think that mainnet is vulnerable to this particular issue? Because I think it's been cleared; I've run the contract and they've cleared all of it, so I don't think it's vulnerable.
I know there are some security issues that Martin isn't disclosing yet, until Classic finishes their state clearing, because it's related to that. He's been fairly confident, when I've talked with him, that it's not a mainnet issue. I mean, we could put it out there, but I don't see it being an issue, because it's related to empty accounts from the Shanghai attacks. Okay. That was my quick take also. I wasn't certain if this was specific to that account state plus an out-of-gas situation, or whether the out-of-gas could occur without that specific account state and still leave this kind of problem state. But it does make sense that it would be specific to those particular accounts. I mean, we could hotfix it, but the only failures we've seen have been on Classic and on pre-Byzantium mainnet. Right. Okay, I think we can reiterate and get consensus on Discord asynchronously; since it doesn't seem like we want to rush a fix today, we've got time to reach consensus across all the time zones. With that said: work updates. Justin is not here today, but I can give a quick status on the inversion-of-control and decoupling work he's doing with Dagger. I kind of spoke to that earlier: he is using the metrics system as an incremental target for daggerizing a lot of the configuration that comes through BesuCommand. He's got a PR for that, which I don't have at my fingertips, but I'll add it to the notes here. That's coming along nicely, and it's pretty clean. On the Postman docs site, there really hasn't been a lot of interest; nobody seems that concerned about the Postman doc site being down. I still need to talk with our docs team about using the same publishing mechanisms. And then, Mega EOF.
Yeah, so one of the things I mentioned in Discord is that I'm going to be creating a long-lived branch that is just the Mega EOF stuff, intended to be kept very well synced with main. The reason I want a separate branch rather than feature flags is that what's in EOF is still changing. I felt better putting it in when it was targeted at Shanghai and had a fixed set of EIPs, but since it got pulled out, the set of EIPs in the bundle has not been fixed. That's why I'm hesitant to merge into main when I know there are changes being discussed. At the same time, there's a desire to get all the different clients together and get them working together so we can test and validate some of these questions. There still might be more opcodes added, and there might be opcodes removed, so it's not in a final state; it's probably not as final as 4844 is, maybe a little less so. In that regard, because it's volatile, I don't want to put it in, have it move around across different versions, and imply that it's ready when the EIPs haven't been closed and a bow put on them. Right now the target is, optimistically, Cancun and, pessimistically, Prague, with Cancun being the second-half-2023 release and Prague probably the first-half-2024 release. Cool, that's a good timeline if we keep it. I would like to go back to 23.4.0-RC1: is there still room for larger destabilizing changes in there? Definitely; we were actually a little concerned that we didn't have enough to really warrant a release candidate series. So, I have some new EVM fixes that I'd like to put in there. This is the broad-based work of getting rid of UInt256 in the operations. I might go as far as pulling it out of the storage APIs, but not necessarily the implementation; I've seen performance measurements on that and they're not quite as good, which is kind of interesting.
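For illustration, replacing boxed 256-bit arithmetic with primitive operations dispatched through a Java 17 switch expression looks roughly like this. This is an editorial toy, not Besu's actual code: 64-bit longs stand in for 256-bit EVM words, and the class, enum, and method names are all hypothetical.

```java
// Editorial sketch, not Besu's code: all names here are hypothetical.
// Illustrates dispatching EVM-style operations with a Java 17 arrow
// switch over primitive longs instead of a boxed 256-bit word type.
public final class SwitchDispatchSketch {
  enum Op { ADD, MUL, SUB, ISZERO }

  // Toy stand-in: longs wrap modulo 2^64 the way EVM words wrap modulo 2^256.
  static long execute(Op op, long a, long b) {
    return switch (op) {
      case ADD -> a + b;
      case MUL -> a * b;
      case SUB -> a - b;
      case ISZERO -> a == 0 ? 1L : 0L;
    };
  }

  public static void main(String[] args) {
    System.out.println(execute(Op.ADD, 2, 3)); // prints 5
  }
}
```

The arrow-style switch is exhaustive over the enum, so there is no fall-through and no default arm, which is part of what makes this dispatch style attractive for opcode tables.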
But as far as removing UInt256 from internal operations and moving over to the Java 17 switch statement, I've seen anywhere from 20 to 40% improvement all in; in some cases I've got something like a 400% increase on some operations. So, I'll get that PR ready and post my supporting docs for it too. Awesome. Any other work updates? Fabio, do you want to discuss the layered transaction pool work? Yes, this is also a candidate for the RC. The main goal is to better manage gaps in the transaction pool. By gaps I mean when a sender has more than one transaction and the transactions may not be sequential by nonce. This can happen because of the way transactions are broadcast, so they may not arrive in order, or it could be a kind of spam attack intended to pollute the transaction pool. Actually, it can also happen with only one transaction, if the transaction from the sender is not the expected one for that account. The current implementation has limits in handling gaps in the transactions. So, the new idea was to create different layers for transactions. First of all, we distinguish between transactions that are executable, meaning they could go into the next block because they have the right nonce for the sender, and transactions that are not executable in terms of nonce, which go into a separate layer in memory. Starting from this, I also created a special first layer where we keep the transactions ordered by effective priority fee. This layer is very small and limited by number of transactions, 2,000, which should be enough to fit any mainnet block with the current gas limits. After that, the other layers are limited by size, which is also relevant overall, because it gives us a transaction pool that is bounded by memory consumption. I tried to explain it better in the PR.
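The layering just described, with executable transactions in a small fully ordered top layer and nonce-gapped transactions parked in a size-limited lower layer, can be sketched roughly as follows. This is an editorial illustration with hypothetical names and thresholds, not Besu's actual implementation.

```java
import java.util.*;

// Editorial sketch of the layering idea; names and caps are hypothetical.
// Transactions carrying the sender's expected nonce land in a small, fully
// ordered "prioritized" layer capped by count; transactions with nonce gaps
// are parked in a lower layer capped by memory size.
public final class LayeredPoolSketch {
  record Tx(String sender, long nonce, long priorityFee, int sizeBytes) {}

  static final int PRIORITIZED_CAP = 2_000;        // count cap for the top layer
  static final long GAP_LAYER_CAP_BYTES = 1 << 20; // memory cap for the lower layer

  final Map<String, Long> expectedNonce = new HashMap<>();
  // Top layer: every transaction kept ordered by effective priority fee.
  final PriorityQueue<Tx> prioritized =
      new PriorityQueue<>(Comparator.comparingLong(Tx::priorityFee).reversed());
  // Lower layer: mostly unsorted, so adding stays cheap even at 100,000 entries.
  final List<Tx> gapLayer = new ArrayList<>();
  long gapLayerBytes = 0;

  boolean add(Tx tx) {
    long expected = expectedNonce.getOrDefault(tx.sender(), 0L);
    if (tx.nonce() == expected && prioritized.size() < PRIORITIZED_CAP) {
      prioritized.add(tx);                       // executable: next-block candidate
      expectedNonce.put(tx.sender(), expected + 1);
      return true;
    }
    if (gapLayerBytes + tx.sizeBytes() <= GAP_LAYER_CAP_BYTES) {
      gapLayer.add(tx);                          // nonce gap: park it, barely sorted
      gapLayerBytes += tx.sizeBytes();
      return true;
    }
    return false;                                // both layers full: drop
  }
}
```

The key design point is that full ordering is only maintained for the small top layer, which is why block creation stays fast even when the overall pool grows very deep.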
There I also reported the tests I have done, comparing block creation with the current transaction pool versus the layered transaction pool. The results are good: in the average block, I get 12% more transactions. It is also possible to scale easily to 100,000 transactions with the new pool because, as I said before, transactions are split across different layers in terms of their priority. Every layer has its own ordering, and only in the first layer, the prioritized transaction layer, is every transaction ordered. In the following layers, only the first transaction of each sender is ordered, in order to avoid sorting 100,000 transactions. The idea is also to make these layers extensible: it would be possible, for example, to add another layer that persists to disk if we want, or a specific layer that handles certain transactions in a special way. There are also a lot more metrics in the transaction pool, and I built a preview of a dashboard graph to make use of those new metrics. Note that this is experimental and can only be enabled on demand using a flag. So, for more information, please refer to the PR. Yeah, that's pretty exciting, that we could support a 100,000-transaction-deep pool without significant overhead. Yeah, the time for block creation is better; it's fair to say that it's better probably because the number of transactions that we always sort is 2,000 instead of the 4,000 default of the current implementation. Sorry, I was muted. Any other work updates, or anything anybody wants to talk about? I can talk quickly. I'm currently trying to fill the flat database after the snap sync, and also after fast sync is completed. I started the PR, and we are currently testing on Goerli, and maybe soon on mainnet. So, just to share that, yes, we are working on that.
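The flat-database work just mentioned targets the read path where a miss in the flat database currently falls back to walking the trie. A minimal editorial sketch of that pattern, with plain maps standing in for the real stores and all names hypothetical:

```java
import java.util.*;

// Editorial sketch of a flat-database read with trie fallback (hypothetical
// names, not Besu's API): try the fast flat store first, and only fall back
// to the slower trie walk when the flat entry is missing. Fully healing the
// flat database after sync is what would let the fallback be removed.
public final class FlatDbFallbackSketch {
  final Map<String, String> flatDb = new HashMap<>(); // fast flat key-value store
  final Map<String, String> trie = new HashMap<>();   // stand-in for the trie walk

  Optional<String> get(String key) {
    String flat = flatDb.get(key);
    if (flat != null) {
      return Optional.of(flat);                 // fast path: flat database hit
    }
    return Optional.ofNullable(trie.get(key));  // slow path: fall back to the trie
  }
}
```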
We are hoping that this modification, if it works, will improve performance, because we won't have to keep the fallback mechanism anymore to read the trie when something is missing in the flat database. That'll be great. I noticed under other business, and I think this might have been from Simon originally, that there's a proposal to replace the quarterly and bi-weekly releases with a monthly release cadence. Has anybody had a chance to review this? It looks like there might have been some feedback already. That was on Discord; we'd just need a deprecation process if we change it. So is the proposal basically that we don't do quarterly large breaking-change releases? Because I doubt we'd be able to do a monthly breaking release every month. It sounds like espousing a Teku-style approach: just release on a CalVer cadence, without quarterly release candidates for breaking changes. So, for a bit of color: Hyperledger initially required everyone to be on SemVer, which was a total pain in the neck, because it's supposed to enforce backwards compatibility, but everyone always had broken stuff. Unless you're going to mechanically enforce it, with a mechanical check that breaks when the APIs you declare as public change, it's marketing speak anyway. CalVer also solved an important problem with calling it Besu 2.0 while Eth2 was being developed. But one of the things the other members of the TSC were concerned about was figuring out when major incompatibilities might start coming in. Quarterly didn't seem too frequent, and having the CalVer releases land on quarterly numbers solved that problem with the numbering itself. I think some of the people on the TSC that cared a lot about that aren't on the TSC anymore; it wasn't me, it was some of the other Fabric people doing their Fabric things.
So if we change it, there's a risk that the TSC might come back and ask for a deprecation policy, or for some formal way of communicating that a release might include a breaking change at some point, even if it might not actually break you, which is not necessarily a good situation. The only thing to be careful of, if you go forward with that, is to have a plan in case the TSC asks those questions, and to know what's in place. Other than that, I really don't have any opinion on what should be done. What do we think the benefit is going to be? Simon's not here to defend the proposal, but what would the benefit of a non-quarterly CalVer release be? We're not tied to a schedule. That's fair; we're not that tied to a schedule currently, but yes, it is more flexible, because one of the concerns raised was whether it's worth doing an RC release for this. So we'd keep the patches and just not bump the month number: we don't do breaking changes more than four times a year, and if we're just doing minor feature additions, we keep the first two numbers the same and just bump the last, so it's not truly a release-date version number, but every time you break things you bump the CalVer parts of it. That's another option, but it's whatever Simon wants to do with it; if he wants to go to battle and push for it, I'm not going to stop him. One of the things we need to worry about is how we're going to reliably communicate to users that, by the way, stuff's going to break if you upgrade to this version. Anybody else have a strong opinion one way or the other? Maybe we can have the same conversation on the APAC call, where Simon will be on and can make his case. The next APAC call is canceled because of an Australian holiday. Sounds like we take it to Discord.
I'll go on record as saying it seems a bit like rearranging deck chairs; I don't see a lot of value coming from this amount of change. I do kind of like having a breaking change quarterly, where there's already a built-in expectation that things might break between quarterly releases. From my perspective, the current process isn't broken in a way that needs fixing, and just moving to CalVer doesn't seem like it's going to bring us anything except the work to change it. That's my personal opinion. Well, with that said, it doesn't sound like anybody else has any feedback, so maybe we just take this to Discord. Metrics review: I think there's a cut-and-paste from the last quarterly. If you want to review the metrics, we can look at that. Contributor strength has increased by 321% and is still on its way up. Do we know how we're quantifying contributor strength? I have no idea how that's measured, but the charts look nice. This was more of an issue two years ago; the real value comes when comparing it with the other Hyperledger projects, and that was when they were debating whether or not to deprecate some projects. I think they've come to a basically good place on the TSC: some stuff like Burrow has been deprecated, Quilt has been deprecated, a couple of the struggling projects have been deprecated, so they're end-of-life now. This gives them the ammunition they need to have a discussion with Sawtooth about its future in Hyperledger compared to the other projects. The strongest projects are Besu, Fabric, and the identity projects, and the identity projects are really the dark horse, the stealth projects; there's a lot of attention going on there that hasn't gone big and majorly public yet.
The numbers are hard to explain on their own, but when you can see them relative to the other projects, they confirm or deny hypotheses the TSC has when dealing with other projects. I lost my place... there we go. Roadmap review: I think we can skip that. Does anybody have anything else for open forum? Otherwise we can give back 20 minutes and be done a little early if nobody has anything else they want to discuss. Okay, thanks everybody, and we'll see you on Discord. Thank you. Thanks.