Good morning, everyone. Hello. Give it a few minutes and I'll get started. I'll share the link to the agenda here. If there's anything you want to discuss that's not on the agenda, that's totally okay. It's a little bit of a rehash from last time, plus some other things. It's five after, so we can go through the agenda. I'm just going to share my screen for the sake of the recording. Can folks read this, by chance? Too small? I'm going to call it big enough. Okay. As usual, there's an antitrust notice, and we're recording the meeting. If you have questions, you can use the hand-raise feature, or you can just come off mute. I don't think we're going to have issues talking over each other today.

For announcements: I submitted the Q2 report for Hyperledger Besu, so if you're interested in reading that report, it's here. The key points they look to pull out are contributor activity, new contributor diversity, health of the project, direction of the project, all that good stuff. So if you're interested, check it out. There's more forthcoming information around what's next for the second half, at least from the Consensys maintainers, so stay tuned; when I have more of that I'll share it. I think it's going to be this week that we share what our roadmap view looks like. I definitely want folks' input and ideas. If there's anything you're interested in working on, or anything you see missing from the roadmap (again, this is just our view of the roadmap; it's a collaborative process), let me know and we can see what that looks like. But yeah, the Q2 report is here. I don't know why this got indented. So check that out.

On release updates: we had a little bit of a snafu with 23.4.3, so we scrubbed that release and followed it with 23.4.4.
This was primarily a bug-fix point release on top of what we already had, with some fixes for the transaction pool, some other items around sync and Bonsai, and RPC. So, a nice big bug-fix release. Upcoming we have 23.7.0, our July quarterly release. We are discussing, though we don't necessarily have strong opinions yet, setting some new defaults alongside our deprecation items. So let's chat about these really quickly. We're thinking about making the new layered transaction pool the default for mainnet. I'll let Fabio discuss the results of the tests and the bug fixes.

Yes. So, the new layered transaction pool has now been running for months on our production validators, and it's doing fine. There is just one issue that I have found since then: enabling the layered transaction pool on a new node returned an error. That is fixed, so it should be fine for the new quarterly release. The new pool also addresses some of the potential attacks on the transaction pool. So it's worth making the layered transaction pool the default, and keeping the old one as something that can be enabled by a property, so that in case of some issue that is unknown at the moment, we can always fall back to the old one. But as I said, we have been running this for at least a couple of months on many different validator instances, including production, without having any issue. And as I said on a previous call, the comparison with the existing transaction pool is better on every metric. It also creates better blocks, with more value, compared to the old one. So for this, I propose to make it the default in the new quarter.

Have we had anybody testing this on private networks? I know there were some flags that needed to be set in order to work with it.

Hi there. Yeah, so we've been running with the new pool.
In some of our performance tests, we did have to increase the maximum number of transactions per sender from the default of 200, but once we did that, the pool was behaving fine. It was stable, and once we increased that number, the performance was exactly as it had been before we turned on the new pool. So we've been exercising it a bit, though probably only for a week or two; that's the amount of trial we've given it.

That's good, though. At least we know it doesn't impact private networks too much, as long as you make note of that flag. What did you have to bump that value to, from 200?

Well, I went from 200 to 500. We didn't spend ages binary-chopping to find the exact sweet spot, but putting it to 500, I think, meant the bottleneck was back in our court, because we've been exercising our own platform. The 200 default knocked off maybe a quarter of our performance, though obviously this was all with a single sender in our performance tests. On the previous pool you have to do a bit of math to work it out, but I think the equivalent default was maybe 400, when you set the maximum transaction percentage per sender to 0.1. So my guess is 400 would probably have been fine as well.

Yes. There is no issue in raising this value. It's actually there not for performance, but for the possibility of mitigating some kinds of attacks on mainnet. On a private network, if there is enough trust between the peers, it's fine to raise it even higher. I'm also thinking of changing this once other mitigations have been implemented in the future; we could raise it to a higher value when we trust that certain attacks are not possible, or are low risk.
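For readers following along, the per-sender cap being discussed can be sketched roughly as below. This is a hypothetical illustration, not Besu's actual transaction-pool code; the class and method names are invented, and the cap value (200 by default on mainnet, raised to 500 in the tests described above) would come from configuration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a per-sender transaction cap: each sender may hold
// at most maxPerSender pending transactions in the pool. Names are
// illustrative, not taken from Besu's real implementation.
class PerSenderLimiter {
    private final int maxPerSender;
    private final Map<String, Integer> pendingBySender = new HashMap<>();

    PerSenderLimiter(int maxPerSender) {
        this.maxPerSender = maxPerSender;
    }

    /** Returns true if the sender's transaction may enter the pool. */
    boolean tryAdmit(String sender) {
        int current = pendingBySender.getOrDefault(sender, 0);
        if (current >= maxPerSender) {
            return false; // cap reached: the transaction is dropped or deferred
        }
        pendingBySender.put(sender, current + 1);
        return true;
    }
}
```

On a trusted private network this cap mostly just throttles single-sender load tests, which is why raising it restored the previous throughput.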
That's useful extra information that I don't think exists anywhere yet. Thanks, Matt. Appreciate that.

And sorry, on this: there is also a very good suggestion from Adrian to avoid having this limit for local transactions at all. Local transactions could then fill the pool without any issues, because you should basically trust local transaction senders. This is a feature we could add, and it would remove all the issues we have had with private network users reporting that the transaction pool is dropping some of their local transactions.

Yeah, I could certainly see an argument for something in between, where you might still want to limit the pool size for local transactions but use a different limit than for remote ones. Even with a local node you might still have multiple senders, so leaving it completely limitless for them to fill the transaction pool might still be problematic. But having two different numbers to tune would give you back a bit of control for the local and remote cases.

Okay, good point. Yeah, I think it's definitely possible to add the flexibility of treating local and remote differently. Thanks for the suggestion. Any other comments here? Nope. Okay, let's keep moving.

The other thing we toyed with was Bonsai as a default. I think this one needs a little more thought; we need to figure out what it looks like in terms of existing nodes and changing things around. But the goal is to move away from Forest on mainnet. So we might need a way to flexibly choose the storage format based on the named network you use, or do something else if you use a custom genesis. The problem is that when you run Besu by default on mainnet or Goerli or the like, you start up with fast sync and Forest, which is really bad UX.
I'm not proposing that we change this in 23.7 necessarily, but we're going to do some exploration on the Consensys Software side, where we'll do some digging and see what the impacts might be, and I'll probably share some kind of document. I presume that if we change the default it won't matter too much, because the folks using private networks are likely using custom configs or setting the config manually anyway. But it might cause a little bit of friction, so we're just going to look at what that involves. Okay. So that was that part of the call.

Next: last time we discussed the consensus mechanisms, plugins, and modularity. The links I have here are similar to the ones from last time. We have the modularity review; this is work that Justin had done previously on what we would need to get to a more modular client, and it has a lot of detail as far as overview, potential approaches, and whatnot. This Miro board is what Gary and I worked on regarding which components we would potentially need to lift out. And the new page that I created last week is around our approach and a working group. My suggestion is that if we want to start working on this at some point soon, we'll need a separate working group rather than using these contributor calls for it. And again, our suggestion is to start with the proof-of-work module, just because it's the most straightforward as far as validation rules and the areas where it touches different components throughout the code base. Diego, you were not available for the last call where we went over these materials.
The overview we did, again, was that we walked through this Miro board, listed all the components we think would be impacted by this kind of work, and then looked at two potential approaches for how we could do it. We're looking at the plugin system to drive a lot of this modularity work. I don't know your familiarity with that system. Would you want to join this kind of working group? Again, our thought process is to start with proof of work because it seems the simplest, though that could be a naive assumption. Do you have any comments on this?

Well, yeah, absolutely. I think it's a great idea to start with a working group. I didn't watch the previous call, but I can catch up on what you discussed. And yeah, building on the plugin system sounds great.

I think you can potentially skip the recording and go straight to the Miro board and this document here; we just walked through them. I know we also had interest from Matt Whitehead and Michelle on this one. So if we were to do something with a working group in the next couple of weeks, would you be able to join and help plan? We can lay out a plan. I don't think there would be immediate action on pretty much anyone; we would, like I said, piece out how we're going to do the work, and then use that as a template for the other consensus mechanisms. I think, frankly, proof of authority will be the most difficult. Matt?

I certainly have availability in that sort of timeframe.

Yes. And also, George and I from the labs are interested in getting involved in this.

Cool. So maybe I'll set something on the Hyperledger calendar for a week or two from now, just to get us coordinated.
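To give the working group a concrete starting point, here is a rough sketch of the shape this could take. These types are hypothetical (the real Besu plugin API is different, and nothing here is settled); the point is simply that each consensus mechanism implements one shared contract, with proof of stake as the default and the others selectable at configuration time.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of pluggable consensus. Each mechanism (PoS via the
// engine API, PoW, PoA) implements one common contract and is selected at
// configuration time. Hypothetical types, not Besu's actual plugin API.
interface ConsensusMechanism {
    String name();

    /** Validate a sealed block header under this mechanism's rules. */
    boolean validateHeader(byte[] sealedHeader);
}

class ProofOfWorkPlugin implements ConsensusMechanism {
    public String name() { return "pow"; }

    public boolean validateHeader(byte[] sealedHeader) {
        // Real PoW validation would check difficulty, mix hash, nonce, etc.
        return sealedHeader != null && sealedHeader.length > 0;
    }
}

class ConsensusRegistry {
    private final Map<String, ConsensusMechanism> mechanisms = new HashMap<>();

    ConsensusRegistry register(ConsensusMechanism mechanism) {
        mechanisms.put(mechanism.name(), mechanism);
        return this;
    }

    /** Default to proof of stake unless the config names another mechanism. */
    ConsensusMechanism select(String configured) {
        return mechanisms.getOrDefault(configured, mechanisms.get("pos"));
    }
}
```

Starting with the proof-of-work module, as suggested, would exercise this registry pattern with the mechanism that has the simplest validation rules.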
I'll also share materials on the plugin system in general. We have a few workshops on how the plugins work, how to use them to extend the client, and all that good stuff. It'll get us most of the way there. And then, again, we can use that working group to lay out some action items, but I think it's just going to be a matter of hacking away at these consensus mechanisms. As for us, Consensys Software is more than happy to support this initiative. But yeah, I definitely recommend reviewing these materials. I think something closer towards the end of July will probably be better for everybody, just knowing some timelines.

Justin, is there anything you want to add from your modularity review, or the work you did previously, that might be useful context going into those meetings?

You know, I'm not sure there's anything in particular that needs to be covered. This is an idea that's been around for probably two years now and has made some progress, but not a ton. I would encourage anybody that's interested to just start a discussion; I'm happy to participate, since this is an area of interest for me. I did a lot of work on introducing Dagger into the code base, and I think that's probably going to be our preferred mechanism for achieving interoperability between the modules as we define them. So I don't know that I have anything in particular. I'm trusting that everybody here is a seasoned software developer and understands what we mean by inversion of control, modularity, et cetera. If we might be interpreting those concepts differently, just reach out to me and we can discuss it and make sure we're all on the same page. I think that's about it. Diego, do you have a question?

Yeah. Thank you, Justin.
I just want to know if you were also thinking of moving proof of stake to be a plugin, or something like that.

I mean, everything would be a plugin; every consensus mechanism would be a plugin. And we're kind of already there, honestly, with the way the engine API works. Proof of stake is a notion that doesn't appear much in the Besu code base. So that's maybe something we clean up a little bit, but I don't imagine there's going to be a ton of work, and I think the engine API is going to end up proving pretty useful for other types of consensus as well.

Yeah. And one thing we discussed last time was basically having the default be proof of stake, and still shipping it with the same template process that we use for the other consensus mechanisms. By default you get the proof-of-stake module; we'd also include the others, but you'd have to swap them in, essentially. Our goal was to make it so that all of them use the same interface, but by default they're pointed at proof of stake. Then we have the two or three other plugin modules, and as you specify proof of work or proof of authority, it just plugs in at configuration time and at runtime.

Okay. Thank you.

Awesome. I'll write something in the contributor channel, and I'll set a calendar invite through the Hyperledger Besu list, the thing that creates these calendar invites, to set up the working group. I'll pick an arbitrary date, or maybe do some kind of poll in the Discord, but I'm thinking basically the last week of July, if folks are around. If not, we can try to find another time; there's no tremendous rush on this. But I'll try to work around
people's vacations and summertime plans and whatnot. Matt, I will share this in the contributors channel. Awesome. Sweet.

The next item is around technical documentation requests. We don't have Dan on the call, but one ask to put out there is to work with us on documenting the EVM library and some of its features. We did have some recent documentation improvements there, so I think we're getting started on that one, which is great.

As we use the plugin system, and if we need to modify it, I want to make sure we're documenting pain points, challenges, and the general approach from a developer perspective. I'd like to use this exercise with the consensus mechanisms to, one, identify pain points with the plugins, and two, establish a working process so that we can revitalize the docs pages. I'm going to put my name next to this, since I'll be following the working group on consensus mechanisms. Again, the goal is to have a new approach around plugins in the second half: to encourage folks to look at the plugins for client modifications, for other chains, other paradigms, whatever is needed, as opposed to forking Besu or doing some crazy stuff. In reality, I think we need better documentation around that. So that's part of a plugin revitalization strategy, which isn't so much a strategy as: let's start using the plugins ourselves, dogfooding them, and then putting more detail back out into the world. Anyway, I'll probably follow up with all the folks who use the plugins to get some more notes going forward. And if you're on the call and have feedback, maybe what I'll do is create a new page on the Besu wiki where we can start to collect feedback around the plugins.
That's actually a great idea: plugin system feedback, pain points, and details. Again, the goal is to get more people to use this as a modification point for Besu, as opposed to using another client or forking the code base.

On the trie log shipping stuff: I think this is not necessarily premature, but one thing we should start to think about, as part of our documentation around the plugin upgrades, is what we can now do with the trie log shipping work. Gary, this is more around you and Kareem's work; maybe putting some of that back into the documentation.

And I think one of the things we should consider when we're developing plugins, or extending the plugin API, is documenting specifically what we've moved into the plugin API out of the main code base: the motivation, what can be done with it, just to act as a point of entry for the plugin. Maybe we can do that directly in the code base that defines the plugin API, via markdown, or maybe we need some external repo documentation. One thing that stands out is that we could arbitrarily move anything into the plugin API, so we probably need some design goals and documentation around that, so we can all be on the same page about what belongs in a plugin and what the API should look like. That way it doesn't end up as a random collection of interfaces we wanted to export.

Maybe also how the plugin communicates with Besu, so we don't have one way for the consensus part and another way for trie log shipping, or something like that.

Exactly. Yeah. Sorry, can you repeat that? Just the way the actual API interfaces with Besu, we need to make sure we're documenting? Yeah.
We should specify the interface between the plugin and Besu: how the two components share information and communicate. Just so we don't have one plugin using some specific library and another plugin exchanging data in another way, which would leave us with random code for each plugin.

Yeah. Maybe we can start with the trie log shipping as an example of what we should be doing as far as updates to the interface: the design goals of expanding the interface to the trie logs, and the corresponding documentation changes. I can also start some of this and get the technical details from y'all. Okay, that's some notes there. Any other questions here? Cool. More to come on this; I think it's just going to be a moving process, kind of a moving target.

Now on to deprecation. These don't necessarily all have to go into 23.7, but we're looking to deprecate three things, one of which is already committed. In 23.7.0 we're already committed to removing the GoQuorum-compatible privacy modes. These aren't even technically documented in the Besu docs, so we're hoping no users are actually using them, but we'll be removing the GoQuorum-compatible privacy interfaces in 23.7.

I have the PR link for that.

Oh, perfect. I'll throw it down there. Is it 5607? Amazing, thank you very much.

Also, database version zero. We currently have versions one and two, one being Forest and two being Bonsai, if I have it correct, and database version zero is a really old legacy version that we no longer use. This is a PR to clean up that version zero, alongside some other stuff.

There are also other modifications in this PR, so maybe I will split it in two, in order to have the version-zero deprecation in its own PR. It will be cleaner.

Gotcha.
We're also looking to remove world state pruning on Forest nodes. This one might be of more interest to folks on the call on the private network side. This feature was never thoroughly tested and never worked all that well. But are people using it on their networks? From what I've gathered, you can't really prune the QBFT and IBFT 2.0 stuff anyway. So I don't think any of you are using this, but I'm not sure.

No, I can certainly say we're not.

Awesome.

This is also connected to making Bonsai the default, because if your use case is to save space, then Bonsai is the preferred solution, while if you use Forest, you presumably want to keep the history, so pruning should not be an option for you. And I haven't seen any recent discussion about this feature, so I hope we can deprecate it, though maybe someone is using it and we're not aware.

What do you think? Can we put up a warning, or be more aggressive and say that if you want to enable pruning you have to pass the option twice, something like that?

We already have a deprecation notice. I think it's okay to be aggressive. If there's agreement, in the next release we can log a warning when this option is enabled, notifying the user that it is deprecated and will be removed in a future quarterly release.

Okay, I can propose a PR for that. I can also add the warning and add the deprecated annotation to the feature.

Sounds good. Looks like we have agreement on pushing these all into 23.7. And again, we'll have release candidates, so as you test them, let us know if there's any weirdness.
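As a concrete sketch of the warning being proposed here: if the deprecated pruning option is still enabled, emit a startup warning saying it will be removed in a future quarterly release. The flag name and wording below are illustrative, not the actual Besu option text.

```java
// Minimal sketch of a deprecation warning for a still-enabled option.
// Flag name and message text are placeholders, not Besu's real CLI output.
class DeprecationCheck {
    static String warnIfPruningEnabled(boolean pruningEnabled) {
        if (!pruningEnabled) {
            return null; // nothing to warn about
        }
        return "WARN: --pruning-enabled is deprecated and will be removed "
             + "in a future quarterly release; consider Bonsai storage "
             + "if your goal is to save disk space.";
    }
}
```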
If there's any confusion or anything else, just let us know, and we can always revert, or have a discussion and figure it out.

Could you remind me what the tentative schedule is for 23.7.0?

Yeah. Since we just had 23.4.4 last week, our goal would be to start burn-in probably the Friday of next week. The timing is awkward with EthCC and some other stuff, but again, it's a release candidate, so we can just put it out there and see. My presumption is that we would cut something next Friday, Gary.

Perfect.

Just going to put this in here: burn-in, Friday the 21st. What's this Friday, the 14th? Yes, so the 21st. Awesome.

Okay. The last thing is the Checker Framework. Justin, this is all you.

Okay, thank you. I don't know if folks have been following along on the very large, very long-lived feature branch (thank you, Matt), but we're introducing a whole lot of new stuff for the new feature, and one of the things we now have is data gas: a new type of gas, which is typically represented with a 64-bit integer. Dan makes the point in the PR that he has gotten a lot of performance improvements in the past by switching to primitives from more strongly typed things that extend Bytes via Tuweni. So he raised this as a performance concern. The counter I raised is a readability concern: when developers see a long, they assume that it's a signed long, right? And there's no good way to communicate to them that they shouldn't just add these together, that they should be using the unsigned operations instead, et cetera. He suggested as a compromise this thing called the Checker Framework. I don't know if anybody has experience with it.
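To make the signed-versus-unsigned concern concrete, here is a small self-contained illustration using only the JDK's built-in unsigned helper methods. The Checker Framework's contribution, as described here, would be an annotation (for example `@Unsigned` from its signedness checker) that makes the compiler reject the unsafe signed usages up front; that annotation is not shown, since the snippet below is a sketch of the underlying pitfall, not the branch's actual code.

```java
// Why treating a long's 64 bits as unsigned needs care: the usual signed
// operators give wrong answers once the top bit is set. The JDK's unsigned
// helper methods handle this correctly at runtime; the Checker Framework
// would catch the signed misuses at (pre-)compile time instead.
class UnsignedDemo {
    // 2^63 as an unsigned value, i.e. a long whose top bit is set
    static final long BIG = Long.MIN_VALUE;

    static boolean signedSaysBigIsSmall() {
        return BIG < 1L; // signed comparison says true, wrong for unsigned data
    }

    static boolean unsignedComparesCorrectly() {
        return Long.compareUnsigned(BIG, 1L) > 0; // unsigned: BIG > 1
    }

    static String unsignedToString() {
        return Long.toUnsignedString(BIG); // "9223372036854775808"
    }
}
```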
It's been around for over 10 years and it's active, so I don't really see a problem with introducing it to the code base. I just wanted to bring it up on a call so everybody else had a chance to weigh in, maybe share any past experiences with it, et cetera. What this would allow us to do, basically, is treat certain longs as unsigned integers, and use annotations on them to make sure at pre-compile time that we're not making any unsafe assumptions about what's in those 64 bits. So: questions, comments, concerns, open discussion, et cetera.

Is that a static-analysis-type tool? How would we integrate it with the existing CI and tests?

I think it's like Error Prone; it's a set of annotations.

Sounds good. I mean, it sounds uncontroversial. Are there any downsides to it?

I've only been reading about it, so I'm not sure, but I haven't read of any.

Would it be possible to have a PR against the current state of the EVM, just to see what that would look like, before introducing this?

Absolutely, but I would not do it on this PR, because this PR is massive. I would definitely want to isolate it somewhere else and not include it here.

So we're talking about using the Checker Framework on the work Dan has already committed using primitives in the EVM, right?

Exactly. Yeah.

Awesome. That part is a little scary to me, so the Checker Framework sounds great. As ever, feel free to reach out to me with questions, comments, or concerns. I think that's all I needed.

Sweet. I think we're ahead of time, which means we have open discussion. Any topics for this open forum? We could review the metrics, but I don't really want to; you can review them at any time, and they just show that people are using Besu, which is crazy. It's some fun data. Any open topics?

Yeah.
So, we are working on benchmarking the performance of Besu using Caliper. This is through a mentorship program; one of our mentees is working on it. Basically, we needed to add a few workloads into Caliper and then benchmark Besu with it. We did that, and the performance we're getting right now is not that great. So I just wanted to ask: if you're doing this kind of performance testing regularly, is there a suggested setup or something we should stick to? We're focusing on QBFT and IBFT.

We didn't do this kind of load testing, especially not on private networks. At some point we tried to implement the Caliper tests, but we had some issues and gave up. But I'm definitely interested in the results you got, and in seeing if I can help. Our focus over the last months has been on mainnet, so if you have some results on private networks, I can take a look and see if we can find something.

Okay, yeah, sure. So with the resulting report, should we send a mail to the Besu mailing list, or whatever you prefer?

I would suggest Discord.

Yeah, I think Discord is better.

Okay, yeah.

I think there are probably some good guidelines around hardware specs for performance: basic things like whether or not you're using NVMe, for example, and the different JVMs and memory-management options that are available. I'm not sure if you have looked into the performance tuning guide that we have, I believe on the wiki currently, but that might be a good place to start for the baseline test harness for Caliper.

Okay, yeah, we'll look into that.
Also, we found something very interesting, but we haven't had enough time to dig into the details and do a real implementation. If I remember correctly it was on QBFT, and it relates to block processing. We found that each imported block is actually processed three times: we execute exactly the same code, the same transactions, three times over. So I did a proof of concept that caches some data to avoid part of that. And we got a good improvement: instead of processing three times, the block was processed only twice, and we saw about a 33% improvement. So if you have performance issues, especially in block processing, I can share some data and insight on that part.

Yeah, that would be great, and a good point to work from. If you could share a PR, or the PoC, or some documentation, that would be huge. I know we did a lot of investigation around performance that definitely trickles over to the proof-of-authority side, but, like Ameziane said, we didn't have time to focus on those improvements. If you could take those PRs and run with them, that would be great, and I think we can definitely collaborate on that stuff.

Yep. Sweet, I'll include the PR when I get it. Any other questions, folks? Comments, concerns?

Awesome. All right, I think we can call it there. Thank you very much. I will share the notes in the contributor channel, and if you have any follow-ups, post them there. There was a lot of good stuff that came out of this. To reiterate: I will send out, not really a poll, but some suggested times for the working group meeting around modular consensus.
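Going back to the triple-execution finding for a moment: the proof of concept described above boils down to memoizing block execution, which can be sketched roughly as follows. The types are simplified stand-ins, not Besu's actual block-processing classes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the PoC: if the import pipeline asks for the same block to be
// executed more than once, serve the result from a cache instead of
// re-running every transaction. Simplified stand-in types only.
class CachingBlockProcessor {
    private final Function<String, String> execute; // blockHash -> result
    private final Map<String, String> resultCache = new HashMap<>();
    int executions = 0; // visible for measurement, as in the PoC

    CachingBlockProcessor(Function<String, String> execute) {
        this.execute = execute;
    }

    String process(String blockHash) {
        // computeIfAbsent runs the expensive execution only on a cache miss
        return resultCache.computeIfAbsent(blockHash, hash -> {
            executions++;
            return execute.apply(hash);
        });
    }
}
```

Cutting three executions of the same block down to two is consistent with the roughly 33% improvement reported.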
Again, I'm looking at the last week of July as our likely starting point. If that doesn't work for you all, maybe we can push into August; there's no giant rush. We'll do that in Discord. And please review the materials we've shared; they're a good starting point. I'll also dig up the plugin system workshop that's been done and share that as well. Awesome. Thanks, everyone. Thank you.