All right, welcome to the February 6th, 2024 Aries Cloud Agent Python User Group meeting. Lots on the agenda. Again, we've got a pile of things in progress, one or two of which will probably come off the list this week as they complete. We'll go over those, talk about ACA-Py 1.0, hopefully cover the state of ACA-Py plugins, and there's at least one or two PRs we want to go over as part of this. So that's the agenda. Reminder: this is a Linux Foundation / Hyperledger Foundation meeting, so the Linux Foundation antitrust policy is in effect, which is on the screen, as is the Hyperledger Code of Conduct. Welcome all to this meeting, glad to have you here. If there's anyone that's new and wants to introduce themselves, please do so, and if anyone wants to make announcements or request additions to the agenda, please do so. Jump up to the mic.

Okay. The Aries annual report: the PR for that is posted here, and I'll post it again in the chat. That will be discussed soon. I'm not sure which day, whether it will be this week or next, but it's likely to be an hour before this meeting on a Thursday, either this coming Thursday or the following, most likely. So we'll see what happens. Second reminder is the decentralized identity interoperability webinar this Thursday at this time, so 48 hours from now. There's a link here where you can register, and I highly recommend it; I think it's going to have lots of interesting topics. I put the registration link in the chat as well. I did want to mention that as of yesterday, the Hyperledger mentorship program is accepting proposals. We had really good results last year with the mentorship program in Hyperledger AnonCreds.
Mike Lodder and I mentored a student who did an excellent job of adding all of the cryptography text into the AnonCreds spec, which included going through it, deriving it from the implementation, understanding it, and then writing it up. At one point the mentee came to Mike and me and said, I think I've discovered a flaw in AnonCreds. It was actually a flaw that had been identified and rectified recently, but he had gone through the paper deeply enough to find and document what he thought was a pretty significant flaw. So really successful for us. It does take time, and you do have to have an idea of what you want to accomplish, but the time commitment wasn't that bad. Our mentee really did a good job of driving it: we would meet every second week, and there was a fairly active Discord channel. So highly recommended if an organization has the bandwidth, both because you're helping somebody by providing mentorship, and because you get great results and some really good interactions out of it. Can't recommend it enough.

With that, we'll move on to the agenda. Status updates. I don't see Jamie on the call. Ian, do you want to give a brief update on where Jamie is on the anoncreds-rs work? Yeah, so we're working on the endorser stuff right now. The endorser functionality is being updated for the schema; Jamie's been working on that, and Daniel's been mainly doing the reviews on that one. The cred def and the rest of the endorser work is in progress right now.

Okay. AnonCreds in W3C format. On the Monday AnonCreds meeting we got a demo from Credo. The team from Animo that's working on the Credo update had a full end-to-end demo of AnonCreds in W3C format being issued, a presentation request coming in via DIF Presentation Exchange, and then a presentation being presented. So really good stuff.
That also gives the folks on the ACA-Py side the ability to use that for interoperability. Ian, do you want to mention how the folks doing the implementation in ACA-Py are getting on? Yeah, it's going really well. We had a status meeting yesterday. They opened up a PR yesterday, actually, which I haven't had a chance to look at yet, but it looks like stuff is in progress and they're well on track to finish by the end of March. Excellent. Okay, Patrick. You just mentioned being on track to finish by the end of March. Are we expecting more changes to the core anoncreds-rs fundamentals of AnonCreds in W3C format, or is that part pretty much done and all we're doing now is integrating into the respective frameworks? That part has been done. There have been tweaks: for example, we had three different cryptosuite identifiers being used, and we're down to one, plus detection of it. That was completed last week. I think there's a PR in this week that Timo did, if I remember correctly. So there are little things happening. Whoops, that didn't work. I just wanted to check that PR he's got in. Oh, it's already been merged; it was merged yesterday. Oh no, this is the refactor. Maybe he closed his PR, I don't know. No, it would have shown up as closed anyway. The last one was, oh, sorry, this is it: only one issuer ID for legacy. So a slight refactor here, done to enable the finalization of the W3C format. Basically, as the Animo team did their work, they found a couple of things they wanted cleaned up. Other than that, we're done from an anoncreds-rs perspective. Okay, interesting. When I get a chance, I'll go find the link to the YouTube video.
There's a six or seven minute demo of the exchange of the credentials and everything, aligned with the test vectors that Animo had put together. So that's good.

Update on the did:peer and AFJ interop. I know we've got AATH tests. There's been some tweaking that caused a little bit of disruption in the normal AATH flow. I'm doing some final tests of some corrections that Sheldon's made, and then we should have a series of did:peer tests in AATH. Daniel, any further updates you want to give on this? Let's see. I've got a PR open, it's been open for a little while now, a little more than a week, with the fixes necessary to get AFJ/Credo and ACA-Py talking over out-of-band and DID exchange using did:peer:4. That's working. There's some follow-up I need to do on that PR to get some tests passing; I did stuff that was a little more destructive than I intended in terms of backwards compatibility support, so I need to fix that up. But it's really close; I've just been distracted with a few other things. That's this one. Yes. Okay, excellent. The one thing I didn't understand was: are there Credo changes being made or not? For that PR, I did make a PR to Credo to update their implementation to support Multikey as a verification method type, but the PR as it stands does not require any changes to Credo. Okay.

A key point that came up in the comments, which I'll repeat for those on the call who haven't been participating in those discussions: before AFJ/Credo 0.5.0 is released, there is actually no released version of that framework that supports did:peer:2 or did:peer:4. So even though ACA-Py is gaining the ability to interop with Credo 0.5.0, that unfortunately does not get us interop with AFJ versions before that. I might end up picking up did:peer:1 implementation support inside ACA-Py in order to achieve that, or I might not.
So we might just stick with what we had before and look forward to interop with the new versions as we go forward. Okay. All right. We still have the issue with the backchannel controller for DID exchange and out-of-band, and resetting AFJ in the way the ACA-Py backchannel does, so that still has to be looked at; not sure where that's going to go. Okay. I just want to mention that I've started looking at expanding the AFJ backchannel for DID exchange. On the thing that's going to be merged today to support that: I had never worked on the JavaScript backchannel before, and I didn't have a development environment for AFJ on my local machine. So what's coming for the Aries Agent Test Harness is a set of dev containers for every backchannel and for the test harness itself. Anybody who wants to get involved in the future, whether it's backchannel development or test development, will not need to know how to configure a development and debug environment for these things: just load the dev container and start developing against the backchannels or the tests themselves. That should be done today, and then I'm going to continue expanding the AFJ backchannel. Excellent. Okay, that's good to hear. Thanks, Sheldon. Wade? No problem. I just wanted to add to what Sheldon was saying. The way he's done it, you can actually pick the dev container when you start the project, so depending on which component you're developing, you pick the associated dev container. It might be a pattern that's cleaner to use in something like the ACA-Py plugins repository, rather than having dev containers buried in each of the subdirectories or plugins. Yeah. Okay.

Sorry about that. Okay, load generator testing. We've been doing load generator testing at BC Gov and getting really good results.
Lucas O'Neill, who I believe is on the call, created a new script that allows us to do load generator testing not just directly against an ACA-Py agent, but against a controller that is itself controlling an ACA-Py agent. Stepping back, this allows us to do end-to-end testing of the entire process: not just the ACA-Py part, but the actual business-logic controller software for an issuer. In the latest tests we ran yesterday, which probably get us to good enough, we saw 300 to 310 credentials issued per minute under sustained load with no errors. That's going to be good enough for what we need to do. We're going to look at possibly doing some other testing, which might involve both issuing and verifying in the same test sequence, but 310 credentials per minute, about five a second, is good enough for what we needed. That does include a mediator, the business logic, an ACA-Py instance, obviously an ACA-Py database, all configured exactly the same way our production environment works; it's a pre-prod environment we used. I don't know if anyone has any questions on that, but we were super happy with it, and it meets the needs we have. I'll type this in, and if anyone wants to say anything... All right. Just one quick comment. It might be worth, and I think we've talked about this, but all the changes are just kind of off to the side right now. We might want to see how we want to generalize them to put them into the framework. I don't know if this is a really specific use case, using the URL-controller style set up in the pre-made invitation, or if it's something that would be useful as a generalized case in the framework. I think there are a few things we could do that would make it more aligned, and a little easier, but I get what you're saying. The actual script itself is probably not needed.
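As an aside, a load driver of the sort described above can be sketched in a few lines. This is only an illustration, not the actual script: the `issue` callable stands in for the HTTP POST an agent-controller load test would make to the controller's issuance endpoint, and all names here are hypothetical.

```python
# Minimal sketch of a sustained-load driver: fan issuance calls out
# across worker threads and report the achieved rate. `issue` is a
# stand-in for an HTTP call to the issuer's controller endpoint.
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(issue, total, workers=8):
    """Run `total` issuances through `issue`; return (successes, credentials/minute)."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(issue, range(total)))
    elapsed = max(time.monotonic() - start, 1e-9)
    ok = sum(1 for r in results if r)
    return ok, ok / elapsed * 60.0
```

A real run would point `issue` at the controller rather than a stub, and count any error responses against the sustained-rate figure quoted above.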
It would be interesting to see it as another plugin in the plugins repo as an example. Okay. Well, we'll talk about it. The changes are fairly minor; Lucas did a good job of nailing down exactly what we needed to do to enable it, so it worked nicely. Very pleased with it. I've been running it on a couple of spare Unix boxes I had hanging around, which also worked nicely; we didn't have to get into creating a whole cloud environment and so on, because we definitely wanted to test our own production setup, which involves the BC data center and so on. Okay.

DRPC support is almost done. We're waiting on an approval from Ian that acknowledges that he did some documentation, and hopefully enough documentation. Otherwise, go ahead. Yeah, there is documentation now. If you check the PR, I've updated the description, I've updated the README, and I've added a mermaid diagram or two to show what the flows look like. So please take a look, and I will add more documentation, maybe outlining the structure of the objects and things like that, and maybe linking a type diagram. But at least for now, that should cover how DRPC works, the endpoints that are available, what they do, and how the flows should work between two agents. Excellent. The implementation is done. I think it has sufficient test coverage; it's got integration tests. I've tested it interactively, and it seems to be working; it integrates into ACA-Py quite well, I think. I haven't had any issues so far. I'll probably work on putting together a small demonstration to show how it works between two controllers, and then I'll add that into the repo as well, maybe as a follow-up. Awesome. And then from our perspective, once we turn that over, it will be added to the environment the BC government calls Traction, in the dev environment.
So it will be added as a plugin, and that will be turned over to be used in an initial use case, which, as mentioned previously, is attestation.

Okay, and the last one we want to talk about is one Ian's been working on, ACA-Py issue 2000, which is a race condition. It felt worthwhile because, while it's talked about in the context of issue credential 1.0, what Ian found is that it's a more general problem. Ian, do you want to go over it, how it affects all protocols, and what you're proposing to do about it? Yeah, so the problem is in BaseRecord, which is the low-level record that all of the wallet records use. When a record is saved, it emits an event which triggers a webhook. But this happens during the save, as the records are being written, and the commit, if it's a transactional database, happens later on. What the people who opened this bug found was that when they were doing load testing, about 1% of their credential issuances were failing: when the webhook was called, they were getting an error that the credential exchange was in the wrong state. What we figured was happening was that the webhook was getting emitted, and because they were running a load test, the webhook handler was actually reading the record before the transaction was committed. So the handler was getting the wrong version of the record from the database: the previously committed version, as opposed to the version that triggered the webhook. Stephen, if you want to just scroll down to the bottom, there's a PR that I've linked there. If you open the PR, go to the Files changed tab, and scroll down to BaseRecord, you can see the line that's changed: there's a session profile notify call, which is what emits the event that triggers the webhook. And I've changed that.
I put a method on the session itself called emit_event, which, rather than emitting the event right away, stores it in an internal queue; then, after the transaction is committed, it emits all the events. What this should do is fire the webhooks only after the database transaction is committed. Anyway, I've updated the code and done all the testing, and it doesn't break anything; what I'm trying to do right now is replicate the original problem, just to confirm that this actually fixes it. That's the challenge I'm having right now. The PR is there if anyone's interested in taking a look, but I'm going to spend a bit more time on testing to make sure this is actually fixing the problem, as opposed to just being a change. Yeah, to Daniel's comment, I was surprised, actually, how minimal the changes were; that's one of the reasons I'm a little bit worried about it. And it's not affecting every single notify, only the ones related to these record changes. So that's it. Any other comments from those deep in the code? If we're in an active transaction, queue the event; otherwise, emit it. All right. Queuing scares me. Should I be scared about that? Patrick. Might be a dumb question, but does ACA-Py have a built-in queuing system, or does this require something like a Redis plugin to be set up for it to work? It's just an internal array; it's not really a queue. And in this case I think a real queue would be overkill, because if the application crashes, the transaction is going to roll back anyway. If the transaction gets committed, it'll emit all the events; if the transaction gets rolled back, it'll discard them, so the events won't get emitted and the webhooks won't get called. Okay. So I think this is all we need; I don't think we need anything more complicated. Okay, good.
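The deferred-emit approach Ian describes can be illustrated with a rough sketch. The class and method names here are illustrative stand-ins, not ACA-Py's actual code: events raised during a transaction are parked on the session and only emitted once the commit succeeds, while a rollback discards them, so webhooks never fire for uncommitted state.

```python
# Sketch of deferring event emission until after commit. In ACA-Py the
# real session wraps a database transaction; here commit/rollback are
# modeled directly so the ordering is visible.
class Session:
    def __init__(self):
        self._pending_events = []

    def emit_event(self, topic, payload):
        # Instead of notifying immediately (which could fire a webhook
        # before the DB commit lands), park the event until commit time.
        self._pending_events.append((topic, payload))

    def commit(self, notify):
        # Commit succeeds first, then queued events are emitted, so any
        # webhook consumer reads the committed state of the record.
        for topic, payload in self._pending_events:
            notify(topic, payload)
        self._pending_events.clear()

    def rollback(self):
        # Rolled-back transactions never announce their events.
        self._pending_events.clear()
```

The key property is visible in use: nothing is emitted between `emit_event` and `commit`, and a rollback silently drops the pending events.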
That's what came to mind for me, so thank you for alleviating my concerns. And I'll go back to writing documentation. Not that there's anything wrong with that. Okay.

Release 1.0. I've got a checklist issue that I've created, and I've flagged PRs and issues with the 1.0 label. These are the flagged items that are done. This one we will be able to mark as not-going-to-do: 587, encryption envelope support, is DIDComm v2 encryption envelope support. I asked Andrew to do a brief assessment, and he said that until we actually do DIDComm v2, doing this makes no sense. So it will come off the list as something we're not going to do yet; we will do it when we do DIDComm v2. I will update the supported RFCs to say we're not supporting it yet; we still support AIP 2. This next one is what we just talked about. This is Daniel's DID exchange and did:peer related fixes. A couple of others: partial revert of the connection record schema change, which is the issue related to Credo DID exchange v1. Making the emission of the different did:peer types tenant-specific is a trivial change, based on what I've evaluated, and I'll probably just do that one in the next little while. Reorganize the documentation files: I am doing this as we speak, and there is a PR that I'm preparing. I've claimed the issue and I'm working on it, and I'll probably have it completed by end of day; I got most of it done yesterday, with a couple of small things left that I thought were important. Per-tenant setting of preserve exchange records when sending requests through a public DID: this one should be easy enough to do as well. This is the issue Ian just talked about, so that's being completed. Oh, this is another tenant-level one; sorry, per-tenant setting. So again, I put a few of the per-tenant-setting ones in there.
That's the total of what I see for 1.0. Any other issues, Patrick? I just wanted to touch on the pydid work. What I would like to do, once they release a package, is just do the pydid version upgrade, simple as that. It seems like they are on it and will release a patch, so that's just a note I wanted to include. I had a PR open with an alternative fix, but I don't really want to go down that route, and it seems like we'll be able to get a release of the pydid package. Sean. Yeah, I missed the last couple of ACA-Py meetings, apologies. The Hyperledger Foundation staff would definitely want to support the 1.0 release and get the word out. Do you have timing? I'm sort of looking at this and saying, can we do this in two weeks, or by the end of the month? Cool. All right, let me talk to some folks internally. Yeah, based on the remaining technical work, I think the effort is fairly low. This is probably the biggest one, and I probably didn't do enough preparation for this part of the discussion, but what we do need to do is look at LTS, long-term support, and what we want to be able to say about it. Thanks. These are just quick PRs. This one is now already done. This was the last thing on the not-completed list, and we're now saying not completed and not going to do, and that's okay. Update AIP 2, which I hope to do today as a PR to the Aries RFCs. And then there's what you're talking about: blog post, press release and so on, to get the word out that 1.0 has been accomplished, and it's long overdue. For the maintainers, is there anything else you see in here that needs to be done? I think that covers almost all of the pull requests that are there.
Sorry, the one thing that stands out to me as notably absent for a 1.0 release is the AnonCreds updates, like anoncreds-rs and getting that integrated. I don't feel strongly one way or the other right now, but I'm curious what your thoughts are on that. Yeah. My thought is that I got to the point of saying it's just another thing along the path. If we had talked about going to 1.0 a year ago, it wouldn't have been the thing on a required list, and that's where I started thinking that it really doesn't need to be on a required list now either. It would be really good to have it done as soon as possible; I really want to get it done and out there so that we can deprecate the other things and move to anoncreds-rs only. But I don't think it needs to hold up 1.0. That was my thought. That's fair. I think it's also arguable, and I think you'd probably agree with this, that ACA-Py has already essentially been in a near-1.0 state for a long time, so it makes sense to just make it official. Then we can continue to move forward, and when the anoncreds-rs stuff is ready, it'll get in at that point. Yeah, part of that was exactly that thinking: we always seem to say, well, let's get this done first, and then that takes a long time, and we forget about actually changing the release to 1.0. That's what I'm trying to avoid; let's just set a reasonable target, because it's long overdue. Okay. All right. With that, I'm not going to go into long-term support. Jamie, are you good to talk about plugins? Yes. Excellent. First, a quick update on the AnonCreds endorsement stuff.
I was just working through it. There were a few integration tests that weren't running with AnonCreds, and they do everything manually, so they don't set the author role and such, which meant I wasn't actually testing doing every transaction manually. I think I'll be done today, but I'm not exactly sure we want to support all of that in AnonCreds, and we would have to change the integration tests. I think I'll get them supported for now, and then we can decide whether to take that stuff out or not, because it does add a lot of complexity. Anyway. Okay, let's have a chat about that, Jamie; I do want to hear more and understand a little better what you're saying there. Okay.

Yeah, I'll just quickly go over some of the stuff with plugins. So, to introduce this section: we now have a plugins repo; we've talked about it a fair amount previously, and I just wanted to give Jamie a chance to show the state the plugins repo is in and give an overview of it. With that, I turn it over to Jamie. Yeah, so we have a few plugins now, and I'm just going to go through them. Some are super basic, like basic message storage, which just saves messages between agents in the wallet database. There are more advanced ones, like the Redis events and Kafka events plugins; I can't speak a lot on that one right now, but I think it's a little bit more of a work in progress in the test folder. All of them have unit testing. And then recently, Akiff did a plugin for the DRPC stuff out of this repo using the script. There's this repo manager script, and a new developer can use it to create a new plugin. You only have a couple of options right now, but you can create a new plugin, put the name here, and it will install all the scaffolding for integration tests and your dev container, which we're really pushing for. It'll give you a base configuration file. It follows the same type of format as a protocol in ACA-Py. Akiff was the first one to actually use this; there were a couple of hiccups, but it worked decently well. After you run it, you can just open your dev container, and you'll have all your integration tests, all your scaffolding, and all your linting set up, and then you just have to start adding a routes file or whatever you want, based on what you're trying to do. It works, and we want to start pushing plugins to be built this way; I think it will work pretty well. One of the other things going on is this globals file, which has the scaffolding in it and its own configuration file. If you want to change something for every plugin, you can change any of these configurations and it will merge them with each plugin's own. So if you want different libraries in one plugin, you can add them there and they won't get overwritten, but if you have a common dependency, it will be taken from the globals file. It's a way we want to keep everything on the same versions when upgrading; we don't want plugins to get left behind. You can still update one at a time, but if you use the script, it will update all of them: you run the update-all-plugins command, it merges the configurations, deletes each lock file, and upgrades them. So that's a way you can upgrade all the plugins at once, and when you push it, it will run all the integration and unit tests, and if something's broken you can hopefully fix it easily. That's the way we're going to try to manage this as it keeps growing and more and more plugins get added. That was basically all I wanted to show. It was kind of cool to see somebody actually use it from scratch.
Like Akiff did recently; I think it went pretty well. I agree. Patrick. Quick question. On the OpenID4VC plugin, from what I understand there was a new endpoint you could set with new routes. Is that a complicated thing to do? I don't know if here is really the place we should talk about this, but when you talk about these routes, they would be added to the admin interface, as I understand it. A whole bunch of these do that, actually. One is basic message storage: without any of the scaffolding you'd be kind of in this state and this file would be empty, but then all you do is add the routes file, and if you use the plugin, all these routes get loaded into the admin interface. I can't speak to how ACA-Py does that internally, but it loads all the routes from each plugin that defines them. And what I'd want is, when you run this plugin with ACA-Py, to have a new endpoint that's publicly accessible; I wouldn't want that to be on the admin interface. Is that fairly straightforward? I see what you're saying. I haven't actually done that, so I can't speak to how you'd do it right now. Okay, thank you. Traction has public endpoints in the innkeeper plugin, right? Like making a reservation. But now that I'm talking it through, I think they're on the admin interface, and the only reason they're public is because they go through our proxy. So you'd still need the x-api-key if you weren't doing that Traction setup. Maybe we can add a decorator or something to indicate it. Well, there are public endpoints in the admin ACA-Py API, right? Like ready and live. So yeah, I'm sure you could do something, but I'm not sure what right now. Yeah, let's put together a design doc or something. I just wanted to comment that it was really easy to develop the plugin.
I mean, mind you, I don't think the protocol I was working on was that difficult. Just in case, Pat, if you're looking for other examples, the DRPC plugin also has examples of how to register endpoints. It's not public, but at least it's another example of how we're using plugins to integrate new protocols into ACA-Py. Yeah, I think it's a good example of how to implement a protocol as a plugin, as opposed to it being a core part of ACA-Py, so it's another example we can include. And I think the ACA-Py documentation has a section on creating a plugin and how to register routes and message handlers. In that README? I think so. Okay, thank you. Sorry, did you want to finish, Jamie? No, I was just going to say that I think this is really important going forward for ACA-Py in general, because if we didn't have this and we wanted plugins to be supported, they'd either be scattered across other repos or all in the ACA-Py repo, and it would just get out of control. That's the main reason we're trying to do this. Yeah, it's a great idea. I had a quick question about the pyproject file, about the aries-cloudagent dependency: the version pinning is greater than 0.10 and less than 1.0, right? Yeah, I think that's what they are. I was just wondering, unless I'm mistaken about how that versioning works, that means if I'm pulling in a plugin, like we're doing in Traction or anyone could be doing, and an ACA-Py 0.12.0 gets published to PyPI, your app could build with an updated ACA-Py that could have breaking changes, without you having updated that yourself, just by including the plugin. Is that right? Like, if aries-cloudagent 0.13 came out and I built after it went live, this would pull in the new one, right?
Maybe that's not a problem, because the plugin should be supporting that. The way the dependency on aries-cloudagent has been defined, it's an optional dependency. So if you're pulling the plugin into some other project, it's not going to impact the version of ACA-Py that's installed; it won't even try to install it. The reason we made it an extra dependency is that when you're developing the plugin itself, if you install it with its dependencies, it will automatically install aries-cloudagent, and we've defined it so it installs between the two versions specified there. So it's only if you install the optional dependencies that it will do that, and then it basically works like any other versioned dependency: when it installs ACA-Py, it will install the newest version up to version 1.0. That is something we have a ticket for, well, kind of have a ticket for: currently you only know for sure that these plugins are working on the versions in the lock file, and it would be nice to be able to know, if I have an old version of ACA-Py and I'm not using the optional dependency, whether the plugin is going to work. There's definitely some work to be done in that respect. I think that's what this ticket is, and there's a bit of a conversation, but it hasn't really gone anywhere. It's a hard problem to solve, so if anybody has any ideas, there's this ticket here. Yeah, I guess that's kind of what my comment was related to, which is the auto-upgrading of all the lock files, the pyproject files and then the lock files. If you get to a point where all of the plugins upgrade cleanly but one, and it's not the one you are interested in or know about, you could in theory back out the changes in that particular plugin, merge the rest of them, and leave it behind. Yeah, so right now, if you use that script, that won't just happen automatically.
It will upgrade all of them, but you could take the changes for the plugin that doesn't work, where the integration tests fail or whatever, and not commit that one; it's just not exactly streamlined. Yeah, you would just undo it. But what that gets to, and I'm sure it's coming with this question of tagging the repo, is that eventually some plugins will not get maintained by their maintainers, and that's going to be the biggest risk: being able to know exactly the state of things. It might be nice to have a part of that manage script that reports the status of all of the plugins: how do they compare to the global configuration? That might be a nice way to automate publishing information that says here's where the various plugins are and here's what's needed, before anything starts to fall behind. I'm just wondering if that manage script could be extended to do something like that. I think it could. I think that's a good idea. All right, any other questions or comments for Jamie? This is awesome work, Jamie; thank you, and thanks for presenting it. No problem. Yeah, I didn't know about this meeting, sorry about that. That's okay. I assume there is documentation about all of the things you talked about? Well, I'll stop there. Is there? There's some, yeah, in the README file, but only a little bit. All right. One thing I can do is extract your section from the recording and put a link in; we'll put it on the Hyperledger YouTube channel and add it to the README. The key documentation seems to be what you talked about today, but anything you can add that cleans this up based on your experience would be good. And anyone who does a new plugin, goes through this, and finds things that were less clear, please update the documentation.
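The status report floated in this discussion, comparing each plugin's pinned ACA-Py version against the global target, could be sketched as below. This is purely hypothetical (the function and data shapes are not part of the current manage script), just to show how little the comparison itself would take.

```python
# Hypothetical manage-script extension: given the global target version
# and each plugin's pinned version, report which plugins have fallen
# behind and would need attention before a bulk upgrade.
def plugin_status(global_version, plugin_versions):
    """Return {plugin_name: "current" | "behind"} from dotted version strings."""
    def parse(v):
        return tuple(int(part) for part in v.split("."))
    target = parse(global_version)
    return {
        name: "current" if parse(version) >= target else "behind"
        for name, version in plugin_versions.items()
    }
```

In practice the versions would be read from each plugin's lock file and the globals file, and the result published alongside the repo so unmaintained plugins are visible at a glance.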
This is definitely a community repo, and we need the community contributing to it. It might be worthwhile to add a link to that ACA-Py documentation about how to register routes and things like that, at least from here, if not copied over. Okay, excellent. All right, any other topics people want to raise before we end the call? No hands up. All right, have a great day, everyone. Thanks.