Hello. Hey, Matt. How are you? Not bad. I put some comments on your document this morning. Yeah, thanks, I just saw the notifications. Cheers, I'll have a look at those and send some pings and replies over. Cool. It's mostly agreeing with what you're saying, and agreeing that there are questions we need to answer, but it's good food for thought. Absolutely. Hello, everyone. Hello, Diego. Hi, we should be getting some folks from my team shortly. One sec. While we wait for the last folks, I'll post a few links in the chat. I'm sure a lot of you have seen these, but if not: the first link is the overview of this effort. The second is a component diagram of some of the things that might need to become modules. And the third is a potential approach on the first half, a potential first approach on some of the things we want to do. They've all been shared earlier in the Besu modularity channel. I think we have a critical mass to get started. Gary is the last one outstanding, and there he is. All right, sweet. So we have everybody here. Like I said, I shared a few links; all this material is in the Besu modularity channel. The purpose of this meeting is to align ourselves around the goals, the language we use when discussing this, and some potential approaches; Justin has kindly helped write up some potential ways we can do a first implementation. I think we need to start at a high level around our goals, and again, there are some opinions in that page that don't necessarily need to be exhaustive. The way we on the consensus team have viewed the goals is basically: resolution of tech debt, better distribution of the client, and better client modification.
Those are broken down a little further, but the one that probably needs more explanation is client modification. We currently have the plugin API, which allows modifying a variety of things and adding components within Besu. But that only allows for things that fall within the API; there's a certain specification, and it doesn't necessarily touch every area of the client. On the other end of that is module modification: if we create modules within Besu, with module boundaries and an inversion-of-control kind of approach, we can replace entire modules with other things. So, looking at other clients, if you wanted to take a different client's approach to storage or sync or something like that, or independent modules, that's the differentiator there. There's also the remote API approach, where we can use things like the engine API to drive the client itself. So I encourage you to read those goals; I bet most of you have. Are there any discussion points folks want to get into before we proceed, or any other goals that weren't mentioned? Matt, sorry, could we talk about the middle of those three options, just for clarity, for a moment, so I have an understanding? I know the plugin API a little and have written a dummy hello-world plugin to understand it better, and I think I have a feel for the remote API one with the engine API. Can you describe a bit more what you think the modules approach looks like on the ground? Absolutely. Presumably we have functional pieces within Besu. If you look at the other links I shared, there are the diagrams, and there are the vertical slices that Justin has called out on the other page.
So you have functional areas of Besu that handle things like peer-to-peer, transaction validation, the EVM, the consensus mechanisms; all these things are currently wired together, for the most part, in a monolithic way. Instead of using plugins to modify the ways those work in Besu today, we could take an approach where we create very clean module boundaries, with expected interfaces and understood moving parts, so that you have modules that can work, in theory, in isolation. That also allows them to be swapped out for other modules that exist elsewhere and adhere to the same interface, or that with small modification can be made to work with the other Besu components. That's what I'm getting at, unless anyone else has a characterization they'd like to bring up. The way I think about it is also that you have an entry point that is custom per project, and the entry point glues modules together. So if you want a different distribution, you just create it by assembling different modules. A plugin sits at the bottom and customizes some kind of feature, but you can't really drop a module: if you don't want, for example, the RPC stuff or the transaction pool, you still need everything in your distribution; you can only customize it. Whereas the way I understand modularization, you could just create an entry point that doesn't include a given module. Okay, yeah, I guess I was trying to understand which of these fall into the category of things that would start to live outside the core Besu code base. I see that the plugins fall into that: if I'm a third party and I want to build a custom plugin,
then I'm going to have that code in my own repo and drop it into a Besu install. The remote API is probably also that case. The modules one sounds more like an approach to the monolithic Besu code base that makes it more modular, but still tends toward maintaining a lot of that code in the same repo. Am I oversimplifying that last one? Do you think there's more scope in the modularization approach to have things hosted in different repos? That could be an option. For me, I also see a mix of the two approaches: the modules are independent and also developed independently, and each module could have its own plugin interface and be extended that way. Okay, yeah, thanks. I don't want to distract; I think a lot of things are options, and this meeting is for discussing how to move forward. Yeah, absolutely. We don't have any settled solution yet about how to organize the code. More than just moving things into different repos, I think some of these stronger module boundaries will help centralize certain code. For example, handling of the privacy precompiles is sprinkled about the code base, and this would force us to centralize that and rationalize how it interacts with the rest of the system; it's a little bit the same with proof of work and some of the mining integrations. If we require them to interact with the module, then we have more surety that the privacy code has a limited impact, rather than requiring special handling. That's the big advantage I see, health-wise, for the code base going forward, and it feeds into the tech debt goal. Those don't have to be done with separate repos, but it makes it possible. That's a tangential mono-repo versus multi-repo debate,
and the mono repo isn't going to win it; it's just a question of how much you put in the main repo. Yeah, I think that helps clarify, thank you. We had some discussions of goals internally, like I said. Is there anything we're missing here that people think would be valuable to include, or that would help guide our thinking? I think having the expectation that this is going to be an incremental effort rather than a big-bang effort is good to have; otherwise this could end up being a long-lived massive feature branch or a forked repo in order to make this work. I think that needs to be mentioned as a goal. Or is that really more of a vibe? Yeah, it's kind of a vibe, but I think it's worth framing in the context of a goal: incrementally mergeable. I'll put that here. Because we can't do it all at once. No big bangs. Yep, incrementally mergeable, no big bangs. That seems fine. Yeah, maybe digestible reviews; incremental was the word I was looking for. Yeah, digestible reviews warrant inclusion; that's a big one, reviewable cycles warrant inclusion. Okay. So, anything else on the goals? I think we've used enough time to level-set. Anything else about the language as well? Are people comfortable with the way we're discussing this? Because I want to lock it in so we can continue to discuss it in the right context. Diego, go ahead, please. Yeah, thank you. I'm seeing a bit of overlap, and maybe it's something we're going to discuss in this meeting, between plugins and modules. I see plugins as something more runtime, something the core will discover, maybe using something like SPI. While modules are more like the building blocks of what we are building, I mean the clients,
so I don't know if we are going to use both concepts in the modularization. What do you think about this? I think it's a good distinction to bring out: modules are a compile-time concern, plugins are a runtime concern, maybe. I don't want to say they're orthogonal to each other; I think they're just different tools in the toolbox. And we probably all need to be on the same page as to the rubric for making that division, like: the following three symptoms indicate this is a plugin need versus a module need. That's something we'll clarify as we more sharply define the language. TL;DR, I think we need both, and we're going to have to come to consensus as to when. Organizationally, it would be really nice to have plugins implemented, or at least organized, in a module or sub-module format, where we have these bounded contexts, or at least sub-projects that categorize the functionality, and our plugin API is just a single one of those sub-projects. We're going to end up with the plugin API sub-project as a dumping ground if we just dump everything there. So I think we should leverage module boundaries when we're defining these plugins; that way we at least have them someplace that is not just a pile of code in some sub-project that is complete dependency hell. And looking at the protocol schedule stuff and how to extend it, my biggest concern is how to organize the number of classes and data types that will be available to the plugin API, to avoid making a mess and moving without any clear organization. So yes, I think this is a very open question. I like the approach of resembling, more or less, the code model that we have for the plugins; maybe this could also apply in the modularization. Yeah. Did we cover your concern? Yeah, I think so, thank you.
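As a rough illustration of the compile-time module idea discussed here, the following is a minimal Java sketch; the interface and class names (`StorageModule`, `DistributionEntryPoint`) are hypothetical and not Besu's actual APIs. A module is a clean interface boundary, and the distribution's entry point is the glue that assembles a concrete set of modules:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical module boundary: an interface the rest of the client
// depends on, with no knowledge of the concrete implementation.
interface StorageModule {
    void put(String key, String value);
    String get(String key);
}

// One compile-time implementation of the boundary. A different
// distribution could swap in, say, a RocksDB-backed module instead.
class InMemoryStorageModule implements StorageModule {
    private final Map<String, String> store = new HashMap<>();

    @Override
    public void put(final String key, final String value) {
        store.put(key, value);
    }

    @Override
    public String get(final String key) {
        return store.get(key);
    }
}

// The entry point glues modules together. Creating a different
// distribution means writing a different entry point that assembles
// a different set of modules; a plugin, by contrast, customizes
// behavior inside a module the distribution already includes.
final class DistributionEntryPoint {
    static StorageModule assembleStorage() {
        return new InMemoryStorageModule();
    }
}
```

Under this split, dropping a module from a distribution is just a matter of not wiring it in the entry point, which is exactly the thing the plugin API, as described above, cannot express today.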
Let's move on to the next page. This is the second document I mentioned. Again, it's only one potential approach to what we could do, and the components Justin has broken down here are pretty large. So, as of today, we have kind of a storage module that's run through the plugins for providing pluggable storage mechanisms; in this case it's used for RocksDB. We don't necessarily have plugins or specific modules for any of these other areas, like sync or transaction management. The EVM is a little bit of a special case, and maybe we can have Danno discuss what that looks like in the context of this discussion. And then we have P2P and consensus, as I mentioned. On the right-hand side we have components that are useful in a lot of contexts. To Justin's point up here somewhere, we don't necessarily need these to be modules, but they should be accessible in a way that lets the core provide them to other areas of the code base. On the cross-cutting concerns: I see some of those things, like cryptography, serialization, observability, and inversion of control, as being used in all the different modules, cross-cutting in that way. But the APIs and RPC, and to a lesser degree configuration, seem like they're meta as opposed to intra; I don't know how to put it, but RPC is not usually a consideration inside an execution path. RPC is a weird one for me as well. Yeah, I think you're right; there are a couple of things that don't quite fit as well as the others. It's imperfect; what do you do? We could break out a third category, but I wouldn't know what to call it. That difference in cross-cutting versus business logic: do you think it would have implications for how we would modularize those?
Yeah, I think it does, actually. For a lot of those cross-cutting things, you're probably safer considering the library as the unit of delivery, as opposed to the business logic, where you need something a little more sophisticated, because you're going to be composing them, providing them to each other, building an object graph out of them, and so on. So I do think there's a real, tangible difference in how they're implemented. Any more comments on this? Do you think they'd use the same mechanism, or is there a different method of extension for something that's meta as opposed to intra-module? Maybe not. A lot of those things are functional, and a lot of them don't have heavy dependencies; the dependencies of a crypto library are pretty light, since it's working on raw values and primitives, and I think serialization is kind of like that too. There's nothing to stop you from using the same mechanism that connects all the business logic to also provide some of the cross-cutting concerns; nothing would prevent you from doing that, you totally could, I just don't see it as necessary. And if I were prioritizing the list of things to modularize, those would be much lower on the list, because I don't see a lot of lift there. They're kind of fine the way they are, and the web of dependencies that runs through them is not as problematic. I'm just wondering, and Danno, you might have been one of the original architects, or the original architect, of the plugin system: it feels like there's a dividing line, a very fuzzy dividing line, between what should be pluggable and modular versus something that would be implemented as a distribution, an alternate implementation that's leveraging portions of Besu, and we're wrapping it up differently.
I'm wondering if there's... yeah, go ahead. The original intent of the plugin system dates back to the PegaSys days, when we were looking to build value-adds that we would charge for. So we weren't really looking to modularize the core; we were looking to add features that might produce revenue streams. That was the foundation of, and the limit of, what it needed to enable. So I didn't go too deep, and we didn't focus on hardcore modularization of the code. Do you think it could be made suitable for that? Or do you think... Yeah, okay: it's got a startup lifecycle to load things. Rebuilding the core loop to bring those components in from the plugins will be the tricky part. I think some of that's already been done with RocksDB. And, to name another feature, there's an encrypted RocksDB that somebody said they'd pay for. I don't know what the ultimate outcome of that repo was, but it substituted the storage layer under RocksDB, and I used some of it when I was experimenting with other databases. They all sucked; none of them stuck. That's why we're still with RocksDB. Got it. We've been doubling down on the plugin API lately, but without an overall view; we've just been reactive to what needs to be pluggable, and it's one-off here, one-off there. There's the trie log shipping feature and the RPC extension feature. Yeah, it seems like we're at a point where we need a top-down approach, or if not a top-down approach, then at least a paved path or a notion of what belongs as a plugin and what belongs as a separate distribution leveraging the core Besu components. So Gary, from what I'm gathering, there are kind of two ways that's being viewed, right?
There's the option where we essentially level up the plugin system, like Danno says, to incorporate a bunch of these modules at startup, and then basically turn the client-modification portion of the plugin API into something else. That is, we make the ability to replace, load, and use these modules a core component of the Besu client, and then we consider the plugin API something different: just that modification portion, or for when you need to create new modules, and so on. Are you following what I'm saying? Because if we ignore those module boundaries that were mentioned in the previous section, like when we're talking about privacy and PoW code, I think we'll lose out on the tech debt resolution we're looking for in certain instances, unless we modify the plugins too, because there's also a lot of complexity as we're bringing in these modules. Am I making sense, or is this hard to follow? Yeah, I think it makes sense. It's almost the spirit of the thing: we need clear intent for plugins, and we need to find a way to distinguish that from plugging together a bunch of these Besu modules, where we'd still need to define what those module boundaries are, make those interfaces clean, and so on, in order to, like Danno said, have predictable testing and predictable impact of code changes, which is one of the primary reasons for doing this. So I think, to Diego's previous point, it's kind of a mix of the two approaches, but it would change the way we think of plugins today as a means to modify the client. I see we've got a hand up. Yeah, it might be a naive question, but what's the value of the plugin system nowadays?
Okay, I can speak to that with current context. Besu is being used for the Linea L2, largely because most L2s are just doing their own bespoke forks of Geth. It's not that much of a nightmare, but it's certainly a pain to be reactive to a code base that doesn't want to incorporate your changes. Basically, what we don't want to do is pollute Besu with layer-2 concerns that are specific to certain chains, but we also want to support that use case. So, like I was saying, there's a trie log shipping Bonsai feature that's been made into a plugin, and there's some specific zkEVM tracing that has been implemented as plugins. That's really the goal: support the modular blockchain use case, but still leave Besu primarily a mainnet-compatible Ethereum client, without turning it into the kind of nightmare of special cases that currently exists with privacy, in my opinion. It's really just a way of keeping the code base clean and separating out these concerns, which are usually network-specific. Okay, but maybe those could become modules eventually? I don't know, maybe; it sounds like we were using the plugin API because that's what we already had. I mean, it was there, it was something to use. And it allows us to do multi-repo without having everything in the mono repo. It also allows for some degree of, I don't want to say privacy, but you don't have to get something merged into main to have specific pluggable behavior. I think it's empowering for developers who want to not necessarily fork Besu but use Besu for different use cases. Okay, thank you. That's just my opinion. And I think distributions are an interesting aspect as well for supporting these different chains.
So I think these are two approaches we can embrace, and the question in my mind is when it's appropriate to do one versus the other. From your experience, could the individual consensus algorithms be implemented as plugins today, from what you've done recently? Do you think that's feasible? I don't think so, and I'm pretty confident in that. The engine API really is just a stripped-down way to interact with an execution engine, right? So my campaign is to treat that as the entry point for consensus-related things. I don't expect it to be complete, but I think it could get us, you know, 90 to 95% of the way. To entertain the concept, though: there would be a lot of things that would have to be added and exposed through the plugin API to enable the consensus mechanism to live there. There's a lot more work in taking that path than in adjusting the engine API, extending or expanding it, and allowing everything to go through there. Okay, thanks, that's useful. What do we think, then? I think we're trying to do a one-size-fits-all approach when we have a few different tools in our toolbox. The one thing we don't necessarily have figured out is the module approach. I think those goals are stated a little oddly, but think about it this way: one of our big goals is client modification, in order to get the other two goals, better distribution and a reduction of tech debt, hopefully via whatever we've done.
So the three tools we've outlined are the engine API, the plugin API, and this third so-called module thing, which we don't necessarily have a tool for. Can we discuss what it would look like to not force a one-size-fits-all tool? How can we use all these tools together to accomplish what we want against these different business-logic areas? We still have to flesh out that third one: we'd have to do a lot of work to define all these module interfaces and what makes sense for these abstractions, and then decide how we accomplish that. Is it via plugins? Is it via something new? Do we modify the plugin API? Do we have the option of a compile-time modular approach, where we keep the runtime modifications more like what we think of as the existing plugin API and retool how it works? Any thoughts on any of that? I think we haven't really fleshed out that third notion of what it would take to get to that clean-interface, module-boundary, compile-time loading approach. Do we want to make module boundaries plugin boundaries? Because I think an easy way to do that would be to have an interface project, so that we don't end up with a plugin project that has a spider web of dependencies and a bunch of unrelated code. I don't know if we want module boundaries to be plugin boundaries, though. Right. I think I've written a lot on the subject, so I'm going to hold my comments here for others. You should go ahead, Justin. Okay. So, when we talk about modules and things like that, I'm thinking of this as a software architecture problem, a compile-time concern.
And I don't think all is lost: a lot of these divisions are already there in Besu. We have a lot of those things; they just need to be updated and sharpened a little, maybe clarifying the interfaces and doing a bit more thinking about them. And, to discuss packaging, there are the Gradle modules; there are a lot of them, and they're reasonable divisions. They might need to be updated, rethought, and revisited, but they're reasonable. So I do think of this as a refactoring problem. And I think it's mostly orthogonal to exposing the results of that through the engine API, or the plugin API, or maybe some other thing; the underlying work to sharpen those interfaces and make those implementations less coupled is a big chunk of work that we're going to do iteratively. I think I've written a lot about how inversion of control is the approach we need to take to reduce the coupling. If I were to treat this as an ongoing project, a death march if you will, I'd find a way to measure that coupling over time and make sure it's going in the right direction, that the things we're building become less and less coupled. Once we accomplish that, we can expose them through a plugin API, or a REST API, or an RPC API, or whatever it becomes; it's then not as big of an impact to move those things around and use them. So I'm planning on taking a pass at this in small ways, where I find these problems, and specifically addressing the coupling. The document I've been updating is very rough, and I'm smoothing it out as I go. I have a number of examples where, if you want to add a metrics capability, for instance, to a deeply nested part of the code,
you may have to go through and make nine, ten, fifteen different classes aware of what a metrics system is, just so it can be passed along to the place where it's eventually used. This is a textbook inversion-of-control problem. I've already introduced Dagger in the EVM tool, and I've introduced Dagger in a very small fashion into Besu itself. I don't want to start a whole conversation on which tool we use, but we need something, and that's kind of there and I've started to adopt it. So for next steps, I'm looking at: find a place where Besu hurts you, try to fix it using an inversion-of-control pattern, bring it back to everybody else, discuss its merits, strengths, and weaknesses, and make a small PR that we could move forward. That was a condensed description of the things I've been writing; I hope it wasn't too rambly. Broad thoughts, questions, concerns? I know I've personally hit the kind of scenario you describe; the metrics one is a classic. You suddenly decide you need to populate a metric from some new bit of the code base, and I've certainly seen that recently, though not in Besu. I think I generally buy into that approach from what you've described, and I'd be very interested to see a first place to introduce it and have a look at that. So, with the inversion-of-control mechanism, Dagger in this case, how are we going to make that work with runtime discoverability via plugins? Can we use Dagger, can we use that subsystem, to allow for runtime extensibility? I don't know, and yes. I don't know exactly how we'd go about doing it, but I don't think there's anything about this that constrains us or prevents us from doing it: you can have Dagger wiring done at compile time.
You know, here's a collection of implementations specific to Ethereum Classic, and here's a different, competing collection of implementations specific to mainnet; now, at runtime, choose the ones you want, or a third thing to provide them. So you have these modules, and in your version it's a service-locator pattern: you can extend the service locator and say, before you start choosing the services you want to locate, go down this runtime-specified branch, and now choose from those. The specifics of how we'd implement that I don't really need to prescribe at the moment; I don't see any reason why we wouldn't be able to implement it with Dagger. Yeah, but until we have that, there's going to be a clash of runtime discoverability versus compile-time discoverability. So what I think we should prioritize is making sure we know how we can use Dagger specifically, since that's our IoC of choice, and how we can do that at runtime. Well, I don't disagree, and I don't really see any risk. I guess the only question would be: if you build the runtime mechanism first, you've got nothing to actually expose, so maybe it goes third, I don't know. But that's a scheduling implementation detail, and the notion that if we can't make that choice at runtime anyway, we'd want to know that sooner, is compelling. What do other people think? That sounds like try-and-see. Yeah, I think try-and-see is a good approach. Do we have an identified area to target, or do we do it wholesale? Selfishly, the interfaces necessary for a Linea scheduler would be timely. I think you're nominating yourself in that case, Gary. I'm nominating Fabio, actually. I think we should frankly go for something that is maximally useful as a demonstration.
If we're trying to get a collaborative effort where a bunch of different people work on this so we can chip away at the different use cases, what is a single slice that would be maximally useful in demonstrating that kind of approach? What do you think, Justin? Can you rephrase that? Say we pick some part of the code base as a candidate for an inversion-of-control pattern, and you want to demonstrate, like you said, a small PR that shows the change. If our goal is to catalyze everybody on this call and have them understand what we're talking about, so we can all go off and work on other modules, I'd pick something that maybe touches privacy and goes back to mainnet, multiple use cases, so we can use it as a demonstration point for this working group, come back together, and talk about the goals, what we accomplished, and the impact on the code base, essentially. I wouldn't do it that way. If you think of it as solving an object-graph problem, breadth is the enemy; you actually want depth, I think. So I'd take the opposite approach: find one thing that goes deep into the code and instrument that. Maybe metrics: provide metrics to something that doesn't already have them. In the document I'm working on, there's a PR that was merged, and you can see it; the thing that needed to be instrumented was fairly deep in the object graph. If I imagine us implementing this incrementally, that's the more analogous demonstration. If we demonstrated doing it broadly and it touched a lot of things, it wouldn't help us be confident that we could iterate on it in small PRs that we keep getting into main, incrementally improving the situation. Okay, I won't claim to choose.
It's a more ambitious target, but we talked about maybe making proof of work pluggable, and we have enough examples of consensus mechanisms: proof of work, Clique, IBFT, QBFT. If the long-term goal is to transform those into users of, manipulators of, an engine API, that seems like it may be a large first go, but at least it's well defined, and there are multiple existing implementations, so it's not like we'd be going off on a tangent with no other examples and coupling the interface too tightly. Hmm. Making Clique use the engine API, that'd be pretty dope. That would be harder than proof of work, I think, don't you? Yeah, just because of the protocol modifications and the P2P stuff that Clique makes. Okay, you have a question. It was also a suggestion about another possible thing to use as a starting point, something that is not big but is used in many different places: the genesis JSON file, the parsing of it, and the querying of its options. For example, in it we have custom data that is relevant only to privacy, IBFT, and so on. It seems to be a kind of prerequisite for modularizing consensus, and Clique also has its own entry in the genesis JSON, right? From what I saw, this is not something as big as the protocol schedule; it seems more limited and quite widely used inside the code. What about the discussions that were happening in Discord yesterday? There was some back and forth around the protocol schedule. Yeah, that's because I'm interested in moving forward with customizing the transaction validator and block validator, and the main challenge that I'm finding in reusing the current plugin API is how to manage the growth of the plugin API in a clean way, because maybe the goal at the moment is very wide.
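The genesis-options suggestion above could be sketched as a small typed facade over the genesis file's config section, so that plugins can query consensus-specific options without pulling in a full protocol schedule. This is purely illustrative: the interface and class names are hypothetical, not Besu's real API, and the map here stands in for a parsed genesis JSON.

```java
import java.util.Map;
import java.util.Optional;

// Typed view over consensus-specific entries in the genesis "config" section.
interface GenesisConfigOptions {
  Optional<Long> cliqueBlockPeriodSeconds();
  Optional<Long> ibftEpochLength();
}

// Backed by a plain map standing in for the parsed JSON; absent sections
// or keys simply yield Optional.empty() rather than errors.
class MapBackedGenesisOptions implements GenesisConfigOptions {
  private final Map<String, Map<String, Long>> config;
  MapBackedGenesisOptions(Map<String, Map<String, Long>> config) { this.config = config; }

  private Optional<Long> option(String section, String key) {
    Map<String, Long> s = config.get(section);
    return s == null ? Optional.empty() : Optional.ofNullable(s.get(key));
  }

  public Optional<Long> cliqueBlockPeriodSeconds() { return option("clique", "blockperiodseconds"); }
  public Optional<Long> ibftEpochLength() { return option("ibft2", "epochlength"); }
}

public class GenesisOptionsDemo {
  public static void main(String[] args) {
    // Stand-in for a genesis file that configures only Clique.
    GenesisConfigOptions options = new MapBackedGenesisOptions(
        Map.of("clique", Map.of("blockperiodseconds", 15L)));
    System.out.println(options.cliqueBlockPeriodSeconds().orElse(-1L)); // 15
    System.out.println(options.ibftEpochLength().isPresent());         // false
  }
}
```

A facade like this is small enough to expose through a plugin API on its own, which is what makes it attractive as a first decoupling target.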
So: make the whole protocol schedule a plugin. It's also possible to reduce the scope and focus on the transaction validator and block validator, but all of those need to be scheduled according to milestones, so some kind of protocol schedule functionality needs to be exposed in the plugin API. That means a lot of interfaces and types could be moved there. But how do we do that without creating just a mess of different interfaces and objects? At the moment I don't think we have a clear organization there, or maybe I'm just missing the history. Usually a plugin touched a limited surface, so you only needed to move a few interfaces and data types, and that was more or less simple and trivial. But if we need to move much more stuff there, how do we do it without, as I said before, creating confusion and making it, let's say, not clean? That's my main point. What I'm thinking to do is try to reduce the scope, following some of the approaches that have been shared in the channel, and share a proof of concept, something we can review further to see whether there are limits to that kind of approach. Then I'll let others report anything they feel is relevant for the discussion. So, can we summarize what we think the candidates are for a path forward? It sounded to me like we discussed runtime discoverability via Dagger, and also a first candidate, or maybe not a first candidate, for modularization. And Justin, did you want to use the cached Merkle trie loader as the example for the second thing? For the second, yes. For the runtime discoverability in choosing the object graph, I don't think that's appropriate, and I'm not quite sure what would be yet, so we'll probably need to keep thinking about that. What configuration option could a user pass in at startup to choose a different implementation of an interface?
I think that's the question we need to ask, and then sort the answers by smallest, or narrowest, however you want to think about it, right? Yeah, currently one of those isn't even a runtime option so much as a directory that's scanned for additional jar files, additional code. If this is a proof of concept we can always fake it. Say, transaction validator: there's a command-line option now to use transaction validator foo. Then we demonstrate that they're all wired up correctly at compile time, but when you provide that via the service locator pattern, you get the one the configuration specified. We could contrive something, I guess. Something we should consider adding to the plugin context: right now you can register one service with one interface. I think we need something where you can register multiple items, like for the milestone stuff. I don't know if service is the right word, maybe multi-service or component, where you can register via a class that says "this represents a milestone definition", and then you just add to it. That way you could make all the ETC forks a plugin, and you could make your customized enterprise forks a plugin; if you want a fork that has a custom transaction processor, that's where it would come in. So I agree that the protocol schedule is a great opportunity for this. I think you looked at this recently and it turned into a very broad graph to resolve, because there are so many things. Yeah, this is exactly my point: how to organize the plugin API around this. If we can narrow that down, I think that's great. So, talking about the transaction validator.
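The multi-service idea above might look something like the sketch below: a context where plugins register several implementations under one interface, and a configuration value (imagine a command-line flag) picks one at startup via a service-locator lookup. All names are hypothetical; Besu's actual plugin context registers a single service per interface, which is exactly the limitation being discussed.

```java
import java.util.HashMap;
import java.util.Map;

// The interface plugins provide implementations of.
interface TransactionValidator {
  boolean validate(String tx);
}

// Hypothetical "multi-service" registry: many named implementations per interface.
class MultiServiceContext {
  private final Map<Class<?>, Map<String, Object>> registry = new HashMap<>();

  <T> void register(Class<T> type, String name, T impl) {
    registry.computeIfAbsent(type, k -> new HashMap<>()).put(name, impl);
  }

  // Service-locator lookup: return the implementation the configuration named.
  <T> T lookup(Class<T> type, String configuredName) {
    Object impl = registry.getOrDefault(type, Map.of()).get(configuredName);
    if (impl == null) {
      throw new IllegalArgumentException(
          "No " + type.getSimpleName() + " registered as " + configuredName);
    }
    return type.cast(impl);
  }
}

public class MultiServiceDemo {
  public static void main(String[] args) {
    MultiServiceContext context = new MultiServiceContext();
    context.register(TransactionValidator.class, "default", tx -> !tx.isEmpty());
    context.register(TransactionValidator.class, "foo", tx -> tx.startsWith("0x"));

    // Imagine "--tx-validator=foo" was passed on the command line.
    TransactionValidator chosen = context.lookup(TransactionValidator.class, "foo");
    System.out.println(chosen.validate("0xabc")); // true
    System.out.println(chosen.validate("abc"));   // false
  }
}
```

Everything is still wired at compile time; only the lookup key comes from configuration, which matches the "fake it for the proof of concept" suggestion.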
The point here is that you need a way to define a protocol schedule for that transaction validator in a plugin, because you can't simply say "at this time, use this transaction validator instead"; the transaction validator is linked to a milestone in a protocol schedule, so it has this kind of lifecycle. Unfortunately, that means this sort of thing requires some kind of protocol schedule implementation to be exposed as a service, even in a simple proof of concept. On the other hand, even just the act of designing that would give you a really good menu of things that you need to implement, right? Yeah, exactly. For example, the first dependency is that you need to be able to query at least the genesis file options from the plugin, because the milestones are defined there, so that's a clear dependency. As I said before, the genesis JSON file could be a little project to start with, since it should require less than a full-blown protocol schedule. Yes, Diego? Yeah, I have another open question here. Would it be possible to override any Dagger module at runtime? Because if that's the case, I see that as a bit risky. You might download a distribution, and all of a sudden something gets into some directory that's loaded at runtime, and it may break things, or someone could maliciously replace the whole behavior of something important. Yeah, classpath injection, things like that. Yeah. Mm-hmm. I think we would be very intentional and very careful about things that we expose like that. Yeah, cool. I'm trying to think what the first approach would be, Justin, based on all these things we've just said. I think Fabio and Justin both have a modularization candidate that they're looking at.
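The milestone lifecycle described above can be illustrated with a toy protocol schedule: validators are keyed by the fork block at which they activate, so a plugin replacing a validator must say from which milestone its version applies. This is a sketch under assumed, illustrative names (ProtocolSchedule, TxValidator), not Besu's real protocol schedule types.

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal stand-in for a milestone-scoped transaction validator.
interface TxValidator {
  String name();
}

class ProtocolSchedule {
  // Milestone block number -> validator active from that block onward.
  private final TreeMap<Long, TxValidator> milestones = new TreeMap<>();

  void putMilestone(long block, TxValidator validator) {
    milestones.put(block, validator);
  }

  // The validator for a block is the one registered at the latest milestone <= block.
  TxValidator validatorFor(long block) {
    Map.Entry<Long, TxValidator> entry = milestones.floorEntry(block);
    if (entry == null) {
      throw new IllegalStateException("no milestone covers block " + block);
    }
    return entry.getValue();
  }
}

public class ProtocolScheduleDemo {
  public static void main(String[] args) {
    ProtocolSchedule schedule = new ProtocolSchedule();
    schedule.putMilestone(0L, () -> "base-rules");
    // e.g. a plugin registering its custom validator from block 1000 onward
    schedule.putMilestone(1_000L, () -> "custom-plugin-rules");
    System.out.println(schedule.validatorFor(999).name());  // base-rules
    System.out.println(schedule.validatorFor(1000).name()); // custom-plugin-rules
  }
}
```

This also shows why querying genesis options is the first dependency: the milestone block numbers used as keys here come from the genesis file.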
We still have this notion that there are two classes of action items following this in terms of development efforts. Yeah, I would call this a decoupling exercise, and try to convey that this is a very small foundational thing; modularization, pluggability, remote APIs, all those other things that we want, we'll eventually build on top of that. Just a marketing point there. It's a refactoring exercise. Yeah, exactly. So, sorry, go ahead, Fabio. Yes, I would also like to use these starting activities to talk about how to organize the plugin API and how to move things there in a way that is clean and future-proof. Exactly that. Gotcha. Okay. So it seems like we'll nominate some folks on the consensus engineering team to work on some of these as an initial pass, and then we'll share the results and design of anything we produce in a follow-on session within a couple of weeks. Then we can go through it as a group to discuss what was changed, lessons learned, and outcomes; there will probably be a very specific PR. At that point it would be a good time to discuss how we want to continue down that route. From what I'm gathering, Justin, we're starting with a decoupling exercise, and once we have decoupled groupings of modules, we can decide how to organize and continue the work against the modularity goal, correct? Yeah. I want to take a second to phrase it a little bit differently again, for a lot of people.
One of the major goals we have here is to decouple consensus specifically. There are customers out there who want to be able to do that: they want to run private networks, and they want to be able to iterate on them apart from what we're doing for mainnet. So yes, I agree with the way you outlined that, but I want to make sure those people are contributing and making their voices heard as well. I'm writing documents in the wiki and having discussions in Discord, in the modularity channel specifically, and I'm hoping that the people who care about the consensus decoupling part of it are throwing out ideas, and we can start using those as some of the building blocks, if we can find small enough ones. It's easy to say "you should totally decouple the entire consensus stack", but I can't write a PR that anybody would approve to do that. So, just try to be inclusive of the major use case we have, which is decoupling out the consensus. To add to that, I don't think we want to block on the consensus engineering results; there are parallel paths that could be taken. The refactor of proof of work into a driver of the execution engine is an entirely parallel track that could inform and make progress on this goal at the same time, without having to block on results. And I think it'll allow the conversation to be decentralized, with more contributors, rather than everyone waiting on consensus engineering to tell them how it's going to be done. Perfect. Yeah, that wasn't my intention, but I see how it reads that way. That's good. And like I said, we have a menu of options here and a bunch of different tools.
So perhaps we'll schedule a follow-up get-together in maybe three or four weeks, and, as Justin said, in the meantime use Discord to go through this topic, and potentially even share code snippets, PRs, and branches that folks can look through in the short term. Anybody who has an idea to contribute in a separate subject area within the modularity discussions, just pop it into the Hyperledger modularity channel and run with it, I think. Anything else? Okay, I'll gather some notes from this and share them in the wiki and on Discord. Thanks for that; I think this is a great approach. Also check your calendars for, like I said, probably three or four weeks out, just so we have enough time to make progress, what with summer holidays in August and all the normal August stuff. Cool. Alright, thanks everyone for the hour today. It's good to see everybody on this call. Thank you. Thank you. Yeah, thanks everyone.