Alright, welcome to the August 8 Aries Cloud Agent Python user group meeting. We'll talk about the Hyperledger AnonCreds Rust project PRs and other issues; I've got a few, nightly builds being one of them. Then open discussion. This is a Linux Foundation Hyperledger meeting, so the antitrust policy is in effect, which is on your screen, and the Hyperledger Code of Conduct is in effect as well; let's be good to one another, and there's a link to that on the screen if you haven't looked at it lately. Welcome to any newcomers. If anyone new wants to introduce themselves, talk about what they're doing, or say what their interest is in ACA-Py, feel free to grab the mic now, and likewise anyone who wants to raise issues they want discussed at today's meeting. All right. For announcements: the ACA-Py documentation site is out and available. We'll probably do some updates soon to get the documentation aligned between what's in the repo and what's on the aca-py.org site, but I definitely recommend using aca-py.org as your source of documentation. It has all of the Markdown files, is much easier to navigate, and has full search; it's much better than browsing GitHub for that, so if you're looking for documentation, that's the place to go. Okay, the Hyperledger AnonCreds Rust project: we've got that pretty well organized now, and I wanted to go through how we see it rolling out. This is all Daniel Bluhm and Jason Sherman's work; they're doing the organizing, so I'm just the speaker here, and they can jump in at any time to point out anything I get wrong.
The big activity right now: a PR went in yesterday, which I'll move forward, that gets the integration tests running on the anoncreds-rs branch, including the major new test for the AnonCreds implementation on anoncreds-rs. This work implements ledger-agnostic AnonCreds in ACA-Py, transitioning from the Indy-based AnonCreds implementation to the anoncreds-rs implementation. In doing that, we're going to eliminate or adjust some of the endpoints. The original thought was to remove the old ones in favor of the new /anoncreds endpoints; we've now decided to keep the ones that are easy to support. So for anything where the inputs and outputs are more or less the same, we might do some tweaks to the inputs and outputs, but then just replace the processing with the processing in the AnonCreds path. Those endpoints will be converted: they will still exist, to a large extent, but they'll just invoke the common code from the /anoncreds endpoints. We are adding /anoncreds as an endpoint prefix. Where you will see a bunch of changes is in revocation handling. To underline something we've talked about before: in the existing AnonCreds support, ACA-Py can either do all the revocation work for the controller, or the controller can manage all of the revocation registries and the like for itself. We are going to drop that latter part. As far as we know, nobody actually did that, and it's really difficult, almost impossible, for a controller to do. So we're going to drop it and have ACA-Py handle all revocation registries. The controller will just say, when creating a cred def, hey, I want to use revocation; and when a credential is to be revoked, it will say revoke credential X.
When it wants to publish a set of revocations, it says publish revocations for this credential definition, and all of the processing for those actions is done by AnonCreds. When a revocation registry runs out of credentials, a new one gets created, and so on; all of the revocation registry handling will be outside the controller. We also get some nice updates from the latest changes to AnonCreds, or rather to the CL signatures work that Andrew Whitehead has been working on. For example, the tails file is no longer needed by the issuer once the registry is created, so the local copy of the tails file can be deleted automatically after it has been successfully published. That was something pointed out a long time ago: we were winding up with lots of tails files that just weren't needed. The endpoints for creating a revocation registry definition and creating a revocation registry entry will be removed, and those are the only few that are actually removed. Then there are the endpoints to patch a revocation registry's set-state and to fix the revocation entry state. I'm not sure those are needed anymore; the controller might need to continue to do those, so they may continue to exist. Those are for when the local state of a revocation registry gets out of sync with the ledger state, which happens once in a while (we discovered that the painful way). They allow a controller to repair that situation when, for example, publishing fails. That is now mostly handled automatically within ACA-Py, so I'm not sure these controller-based calls are actually needed, or whether it is all handled automatically; we'll be looking at those, and that's why I marked them as "will investigate".
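To make the simplified controller interaction above concrete, here is a hedged sketch of what the controller-side calls could look like. The endpoint paths and payload field names are assumptions for illustration based on the discussion, not the finalized ACA-Py admin API.

```python
"""Sketch of the simplified controller-side revocation calls against the
planned /anoncreds admin endpoints. Paths and payload fields are assumed."""
import json
import urllib.request


def revoke_request(cred_ex_id: str, publish: bool = False) -> tuple[str, dict]:
    """Build the (assumed) request to revoke a single credential.

    ACA-Py is expected to locate the registry and handle the entry itself;
    the controller only identifies the credential and whether to publish now.
    """
    return ("/anoncreds/revocation/revoke",
            {"cred_ex_id": cred_ex_id, "publish": publish})


def publish_revocations_request(rev_reg_id: str,
                                cred_rev_ids: list[str]) -> tuple[str, dict]:
    """Build the (assumed) request to publish pending revocations."""
    return ("/anoncreds/revocation/publish-revocations",
            {"rrid2crid": {rev_reg_id: cred_rev_ids}})


def post(admin_url: str, path: str, body: dict) -> None:
    """POST a payload to the admin API (network call, shown for completeness)."""
    req = urllib.request.Request(
        admin_url + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

The point of the sketch is the shape of the interface: the controller never touches registry definitions or entries directly.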
So that's where we're going with that. Jason, I believe you are getting these tests running and basically adapting the old endpoints; is that correct? Yeah, so the PR yesterday was simply to get it to a point where it could go through the GitHub Actions without failing. That included, at this point, commenting out a bunch of tests that were broken until we replace the background logic for schemas and cred defs and so on. The next step is to actually put the new BDD tests in, based on the scripts from Daniel Bluhm. That'll be next, and then I just want to get that PR merged into the branch so we can start pulling in more of the stuff from main; right now the anoncreds-rs branch is quite far behind, and there's been a lot of work in the last few weeks. So I just want to get it passing all the gates in GitHub, and then we can start moving in code. We're a few days away, I guess, is the easiest way to say it, but once that initial test is in, we can start backfilling and re-enabling tests. Good. Okay, and that re-enabling will involve reactivating the endpoints or adjusting tests as needed? Hopefully we won't have to change the logic in the tests; it'll just be getting each endpoint to work as it did before, but with the new AnonCreds backend. That's the hope. Excellent, good. Basically, that is to reduce the pain of migrating to this new version. It will be a breaking change, we're pretty confident of that, if only because we're removing the controller handling of revocation registries; we just don't want to reimplement that logic in the new code. But I don't think anyone has ever done that, so I think we'll be fine. We'll try to reduce the other breakage: the typical things people do, creating schemas, creating cred defs, should still work and be pretty straightforward to support.
I categorized these this morning; I went through and looked at how things group. There's a category of things left to do. I passed one item on to Andrew Whitehead to take a look at and try to get something in place that works for all of the clients of the AnonCreds library. Then there's the next big thing that will probably need to be done, which is the updates to revocation. These got skipped over because of the complexity of verification, but I think the decision to turn off controller access to revocation registries should make them simpler, at least I hope so. Those are all related to revocation. Next, and I'm not sure of this ordering, are basically data model changes and updates to where things live. I'm not quite sure whether that should be the next thing done, or whether we jump into revocation. Daniel, did you have a thought on that? It seems to me this would be the next thing we would do, or these two things could be needed in parallel; it could be either. Sorry, I was only partially following along; which two components are you saying would come next? Either revocation can be worked on after what Jason's working on now, or these two data model ones. Yes. Yeah, I would say revocation is probably the next one in line. The data model ones are more of a nice-to-have: it would be nice if things were in the, quote unquote, correct locations, but they aren't severely negatively impacting anything. Okay, so those might actually move down to here. I assume they would improve things for the other registry methods? Yeah, I think it would be more that it would be cleaner if the Indy stuff was in Indy locations and the did:web stuff was in the newly created did:web locations, along those lines. Okay, so that's that grouping.
After revocation come the endorser updates; there are a few things there. And again, this next one could also move up in the order: updating the v1 issue-credential and present-proof protocols to use the AnonCreds interface. We do want to continue to support those with this new code, so we do want to update them; once the old endpoints have been adapted, these become much easier to implement, so they slot in there as far as I can see. Finally, we get into the did:web interface, and then the last step will be all of the upgrades. When we put this out, we can upgrade the storage as an automated step. Basically, we would use the upgrade capability so that when you deploy over an existing installation, the upgrade triggers and all of the entries get appropriately updated. I put this last one at the very bottom, and it's probably not something we want to do as part of the AnonCreds work; I think it's a separate issue. Daniel, any particular reason to have the AnonCreds flag on this one? It's something we're handling basically the same way in the new code added for AnonCreds, so it applies to some changes made in the AnonCreds work, but it's not strictly AnonCreds-related, I suppose. Okay. My tendency would be to take that flag off, but we'll see how it goes. That is the project. Anyone willing and able to help with any of these, please let us know, and we'll coordinate assigning and grouping the tasks. That's where we're going with this project. It is a chunk of work, as you can see, the biggest set of tasks in this long list. I finally went through all of them and got a good handle on the grouping of this work.
The big thing is getting revocation dealt with; revocation verification really is at least as complicated as verifiable credentials themselves, so that's the work going on there. Any questions or comments from anyone? Excellent. Okay. Next, I wanted to go through a few PRs. We do have some new PRs coming in, so we wanted to talk about those and about what's coming. This is the AnonCreds BDD test preparation that Jason Sherman reported on; that's in progress. Problem reports: Daniel, where are we on this one? It looks ready to go. Yep, that just needs some review and then it should be good. Good. This one, I think we are still where we were after the last call. Whoa, lots on this one. I posted the thoughts from our last maintainer meeting, and Moritz said he would take a look at it, so that's something to expect soon. I suggested maybe seeing how much of the performance cost is simply in the test code versus the non-test code; hopefully there's a way to measure that, but we'll see. Syro (he's not on this call) has been working on converting unqualified DIDs to did:peer:2 and did:peer:3. He's made excellent progress and is close to wrapping it up. He and I are going to get together later today to go over the last steps, make sure everything is in place, and figure out what's missing. Basically, this will bring us up to spec on the DID spec and did:peer:2 and did:peer:3, and enable upgrading unqualified DIDs to qualified DIDs, used efficiently, so that we're not always sending the massive did:peer:2 document but using did:peer:3 instead. Daniel, does this report go away with 2394? Yeah, I don't think it's likely that we'll merge those changes.
Some of the discussion there is useful, but the discussion can still exist on a closed PR, so I think we're probably going to close that one. The verification key one: we're in the review process; any latest comments? Yeah, I will update the branch. Take a look at this, Daniel and Jason, do you mind if I assign these to you two? Yeah, I've been intending to look at this in greater detail but just haven't gotten around to it. And this one, Daniel? Again, this is ready to go and just needs a review. Yes. Anyone want to volunteer for this one? Okay. All right. These last three I'm going to take another look at over the next little while to see what we can do; my guess is this one can be looked at as part of the revocation work. You're on the call, right? I am. You looked at this one a long time back; as I recall it had to do with OpenShift and so on. There was never, I think, any great agreement on whether it should be done. Should we just close it? 1837: I'll have another look at it. I'll assign it to you. Obviously the author has been away for a long time, and I don't think he's been updating it, so read through the arguments and make a call on whether we should try to keep it moving. Sure. And this one I don't know what to do with. Daniel Bluhm, I think you are the best one to take a quick glance and decide if we just close it. I think what I got was to close it. Okay, I can take a look; it hasn't been on my radar, but I can take a look. Don't spend a lot of time on it: just either decide to close it, or encourage the author to update it, because it has conflicts and so on. It's quite old. Yeah, and that's everything other than our active work.
So I had a quick comment about a recent PR that I put in and that has been merged; it was additional tweaks for did:web support, I think. Yeah, 2392 there on the list. It came to my attention that those changes didn't go as far as I thought in terms of adding support, so I've got some things I'm following up on. As I've been digging into it, I'm finding that I might need to mess about in the DID Doc implementation, and I've been looking at Jason's changes for the did:peer work. I might have some recommendations there, perhaps a slightly different approach to supporting did:web compatibility as well. I'm very actively in the middle of that at this point, so I don't have anything specific yet, but I wanted to call it out, and I'll be vocal about differences of opinion if there are any. Excellent, good. The Marshmallow warnings issue was fixed yesterday. This was a massive change by number of files, 162 files, but with no change in functionality other than getting rid of almost all of our Marshmallow warnings, so that's good. And there's a path to fixing the rest of them that he was working on. This one was just a dependency update, but in the issues a recommendation has come in to replace flake8 and the pytest plugin with Ruff. I have no idea about this one; any comments? Daniel, I know you commented that we want to encourage this. I think it's very possible that this is just the latest tool in a long series of tools for Python, but I was impressed by what I saw. I went and looked at Ruff, and it seems to be pretty widely used; some projects that have been around for a long time in the Python ecosystem are adopting it. And on the flip side, there's pytest-flake8, with its deprecation and it no longer being maintained over there.
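For reference, adopting Ruff would only take a small amount of configuration. This is a hypothetical pyproject.toml fragment, not something the project has decided on; the rule selection and line length shown are illustrative defaults.

```toml
# Hypothetical fragment: rule selection and line length are illustrative.
[tool.ruff]
line-length = 88
# "E" and "F" cover the pycodestyle and pyflakes checks that flake8
# runs by default, so this is roughly a drop-in starting point.
select = ["E", "F"]
```

Running `ruff check .` against a tree with this configuration replaces the flake8 invocation, and Ruff can read existing `# noqa` comments, which eases migration.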
And then, generally with flake8, I've been frustrated in the past, ideologically, with the maintainers; there have been some differences of opinion there, so that's been fun. So I've been in favor of looking for alternatives to flake8 for a little while now. I wasn't aware of Ruff, but I liked what I saw, so I'm interested. The only thing that was a little strange to me was the version numbering; I don't even know how to interpret having 282 releases without a major or minor version number in there, but yeah. Well, conceivably those are nightly builds, and they're building something that's supposed to be compatible with long-existing stuff; who knows. Okay. That one's coming. This is the last piece, I believe, that was going to be looked at; we've got the set of AnonCreds ones. Oh, nightly builds, that was the one I wanted to look at, and I had it linked in here: 2250. Jason put this in quite a while ago. We're now thinking it's a higher priority than we thought, so chances are we're going to get it done much sooner than later, in the next few days we hope. The idea is that we generate a nightly build that can be used, for instance, in AATH, which will make it go a lot faster and still be relatively up to date. We can use it in other places where we've got downstream consumers: basically, a dev build gets created, including all of the artifacts, including a container image. That's the idea. Any comments on that? No, you summarized it very well. I can just add that one of the reasons we're looking at accelerating this is that we're developing a few new features, namely the multi-ledger support in Traction, and having the nightly builds would help a lot in moving that forward, rather than having to build custom images. So, a quick follow-up question to that about the images.
I think there's a really obvious way for us to do that, since we have a container registry and that goes pretty smoothly, I would say. Are we also planning on publishing nightly builds to PyPI for the actual Python package itself, or what were we thinking there, out of curiosity? That is a good question. I'm not sure what the best way forward would be, because while I see the image as a way of testing things quicker, if it isn't a supported or official release, publishing to PyPI might just be overhead. But it would keep things consistent, so I don't know; I don't have strong opinions. Wait, any thoughts on that? Sorry, can we just review real quick: so the idea is, this is about nightly builds, and what Emiliano clearly wants is publishing a container image to a container registry. Yeah, I think we actually already have an issue for a nightly container image. We have that, and we're planning on implementing it; the question is whether we also want to publish to PyPI at the same time, or just go with an image. I'm not a huge fan of nightly packages. The question is, would they be useful? Daniel, would you use a nightly package, ever? So, I'm aware of at least a couple of instances where, rather than building directly from our images, people have had requirements to build their own images and pull ACA-Py in from PyPI. And the default right now, if there's a feature or change that someone needs that isn't released yet, is to reference the GitHub repo at main, which works, and probably isn't too different from pulling a nightly package from PyPI in terms of overhead and changing build configuration. But I don't know; I have mixed feelings, and I could see it going either way.
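The reference-the-repo-at-main workaround mentioned here is usually just a pip requirement pointed at a git ref. A hypothetical requirements.txt fragment (the package name and branch are as they exist in the repo today; pinning a commit SHA instead of `main` would make builds reproducible):

```text
# Install ACA-Py from the tip of main instead of a released package.
git+https://github.com/hyperledger/aries-cloudagent-python@main#egg=aries-cloudagent
```

A nightly PyPI package would replace this with an ordinary versioned requirement, which is the consistency argument raised above.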
I don't have too strong an opinion on that, I guess. I mean, it would be simple enough to do; my only concern is the extra cruft that ends up in PyPI afterward. Yeah, I was actually reading about that recently, and I think there's a maximum number of published releases permitted per package or something like that, so we would have to go back and clean up old nightly builds for sure, which, again, might not be much fun. Well, let's create a ticket for it, and then we can discuss it further and see what we need to do. Okay, so for now we'll just go with the container image, leave it at that, and decide from there if anyone else wants more. Yeah, one step forward. Okay, so here are all of the things that have gone in since we last met. An update for ARM issues and a workaround; we've got someone walking through all of our repos to get those done. Jason's preserving-exchange-records change: this is the big breaking change where, unless you set that option, presentation exchange records will be removed from ACA-Py once the interaction is complete, so the controller needs to take care of whatever it needs to keep. A few other changes; the other big one I wanted to highlight was the change that completed selectable write ledgers in multi-tenant mode. A multi-tenant instance can now have one tenant writing to one ledger and another tenant writing to a different ledger; all of the ledgers have to be supported by the overall deployment, but each tenant can select their own, so that's all positive. Those are the latest updates that have gone in. I think those are the topics I wanted to go over, so we're ending with lots of time available; I didn't have any other topics, so is there any other discussion? And as I say that, I think of one that I wanted to bring up.
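The preserving-exchange-records change above means a controller that wants to keep presentation exchange records must persist them itself, typically from the webhook, before ACA-Py deletes them. A minimal sketch, where the webhook topic name and payload fields are assumptions for illustration and the dict stands in for the controller's real datastore:

```python
"""Sketch of a controller-side webhook handler that saves a presentation
exchange record before ACA-Py removes it. Topic and field names assumed."""

# In-memory stand-in for the controller's own datastore.
SAVED_RECORDS: dict[str, dict] = {}


def handle_webhook(topic: str, payload: dict) -> bool:
    """Save terminal presentation exchange records; return True if saved."""
    if topic != "present_proof_v2_0":
        return False
    if payload.get("state") not in ("done", "abandoned"):
        return False  # not terminal yet; ACA-Py still holds the record
    SAVED_RECORDS[payload["pres_ex_id"]] = payload
    return True
```

The design point is simply that the terminal-state webhook is the controller's last chance to capture the record.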
So, do we have anyone on who's also been playing with the AFJ library? Is it time to think about adding OpenID for VC support to ACA-Py? I know they've been putting some of it into AFJ, and we could sort of copy what they're doing. Is there interest from any groups in having that functionality available? I guess my personal opinion is that it would expand the feature set for ACA-Py. I don't know of anybody who's actively using it personally, but I don't think there would be any harm in being able to say, hey, ACA-Py also supports OpenID for VC; I think it would make it a much more rounded-out product. I'd be in favor of including it. So, we've actually done some investigation into adding this stack to ACA-Py, or adding support in some way. We've been wondering if it would make sense for this to be deployed as a companion to ACA-Py, as opposed to something implemented directly within it, at least when it comes to the server-side support for the OpenID for VCI protocol. I think it could make sense as something directly added, but OpenID for VCI adds a set of endpoints which need to be publicly accessible, and ACA-Py doesn't really have a good way of exposing extensible endpoints beyond the admin API, and those aren't endpoints that would generally be publicly accessible. So it seems to us that one way we could do it is to have a companion service that calls into ACA-Py for things like preparing a JWT credential or a JSON-LD credential or whatever; ACA-Py just gives it the payload, and the remote service passes that back over the OpenID for VCI protocol. Like a plugin? More or less, but I think it probably wouldn't be as close to ACA-Py as a plugin would be, in my opinion, because it would interact remotely over the API. I don't know; I could see it going either way.
Yeah, it's funny you say that, though, because that's basically the architecture of vc-authn-oidc, right? Right, exactly. You're doing the same thing, where vc-authn-oidc uses what it needs from ACA-Py but handles all of its own stuff. Yeah. I like the idea of being able to maintain that kind of functionality separate from ACA-Py. ACA-Py's main shtick, I guess, is DIDComm protocols; it also has a secure storage element and verifiable credential support, but the bulk of the code within ACA-Py itself is DIDComm. Keeping that focus on DIDComm within the ACA-Py code base, and having additional protocols we want to implement be a companion to ACA-Py, whether a plugin or a separate service entirely, would enable us to adapt and develop those other protocols in a more organic way, I think. I should also preface my comment by saying I'm not fully aware of what the technical implications are, so I appreciate the discussion of it; just from an outside perspective, it would be kind of cool to see more features, and it's always nice to see more features. Yeah. So, if we did it as a separate service, then I think there is going to be an increased number of credential preparation endpoints needed within ACA-Py's admin API, because we would want to expose an endpoint to, say, prepare a JSON-LD credential without all the additional steps involved in doing that over the issue-credential DIDComm protocol. We do have some existing endpoints that go in that direction; we recently merged the JWT sign and verify endpoints. So if you're willing to manage all the verifiable credential creation outside of ACA-Py, you have those endpoints available at least. It's a possibility, I guess, for doing JWT VCs, but not a really batteries-included way of doing it.
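As a hedged sketch of what a companion service's use of the JWT sign and verify endpoints just mentioned could look like: the paths match the endpoints discussed, but the payload field names and the DID value are assumptions for illustration, not the authoritative admin API contract.

```python
"""Sketch of building requests for ACA-Py's admin JWT sign/verify
endpoints. Payload field names and the example DID are assumed."""


def jwt_sign_request(did: str, headers: dict, payload: dict) -> tuple[str, dict]:
    """Build the (assumed) body for POST /wallet/jwt/sign."""
    return ("/wallet/jwt/sign",
            {"did": did, "headers": headers, "payload": payload})


def jwt_verify_request(jwt: str) -> tuple[str, dict]:
    """Build the (assumed) body for POST /wallet/jwt/verify."""
    return ("/wallet/jwt/verify", {"jwt": jwt})
```

A companion OpenID4VCI service would call the sign endpoint to get the signed credential payload and hand it back to the wallet over its own public endpoints, which is the separation of concerns described above.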
And we have similar endpoints for JSON-LD documents as well: a sign endpoint and a verify endpoint. So if we went with a remote service, we would need to flesh out the capabilities of those types of endpoints. Sorry, just a second. Okay, sorry, I just had a distraction come in. The other side of it is that if we did it as a plugin, we wouldn't have to expose the credential preparation pieces as admin API endpoints; we could use the existing code within ACA-Py. I'm not sure which of those two is the better approach, but those are the things we've been considering, at least; I don't know that we've gotten too deep into any of them yet. Excellent, appreciate that. Thank you both for the comments. Anyone else on that topic? Otherwise, any other topics people want to talk about? So, there's one thing I was hoping to bring up at the maintainers meeting last week, but we ran out of time, and that was code coverage reports for ACA-Py. We used to have some service running; I don't even remember what it was called at this point. But on each PR we had a report of whether coverage increased or decreased significantly, which was handy for giving us a good view of where we'd added stuff we hadn't adequately covered with tests yet, so there was more work to do. I would be interested in getting code coverage reports available again, in some form, to accompany PRs. Anyone recall what we were using? I can visualize the logo, it was an umbrella, but I don't remember the name. Yeah, the umbrella is Codecov. So if you remember those reports, we haven't seen them for a while. I think we ended up moving away from it because of the Hyperledger account associated with it, but surely we would have replaced it with something, I thought. When the tests run in the Actions,
a code coverage report is emitted for the whole code base, but it's not very consumable, and it doesn't show up as a comment on the PR itself. You'd have to go in and look at the test run, and you just get a whole dump of coverage information, so it's hard to evaluate whether coverage is increasing or decreasing as a result of the changes in a PR. Yeah. I was thinking of finding quick ways to see whether we surface any sort of coverage at all, though even that, I don't know. So it would be on the PR? Yeah, in the PR test section, instead of the integration test one; I think you would see it on the PR page. I didn't see it on the PR page when I looked. Yeah, if you expand the tests, at the bottom there is a coverage report; it's a long way down, and if I could accurately use a mouse, it would be easier. It says it reports coverage.xml. Is it just a matter of adding some additional steps to expose the report? So there is a coverage.xml; where does that go? Anyone know how to get to it? Okay, this is something to look at, for sure. Yeah, I think we'd have to explicitly upload the artifact from the action; otherwise it's probably not preserved. Okay, so: finding out why, and what to do about that one. Okay, I'll add that to the tickets. So it is being collected; it's just disappearing into the ether. Okay. I have one more topic, since we've got some space, but I see Preeti has her hand up as well, so I can defer and I'm happy to bring it up after. Go ahead, Preeti. Yeah, so I have a requirement for integration of SSI with some identity and access management tools, the traditional ones like Okta. Has it been done already, or are there any resources available for this? Has anyone here, Daniel, or anyone, looked into this before? So, I assume Okta integration would be similar to Keycloak integration; is that accurate? Yes, I guess so.
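On the coverage question above, preserving the coverage.xml would likely be a small workflow addition. This is a hypothetical GitHub Actions fragment, not the project's actual workflow; the step names and coverage target are illustrative.

```yaml
# Hypothetical workflow steps: names and paths are illustrative.
- name: Run unit tests with coverage
  run: pytest --cov=aries_cloudagent --cov-report=xml

- name: Upload coverage report
  uses: actions/upload-artifact@v3
  with:
    name: coverage-report
    path: coverage.xml
```

With the artifact preserved, a follow-up step or a service like Codecov can diff coverage against the base branch and comment on the PR, which is the behavior being missed.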
Yeah, so vc-authn-oidc: have you looked at that at all? I'm looking into this, so OIDC for VC and vc-authn, they are similar, right? Yes. So vc-authn-oidc, basically, what is it? It's a controller, a separate component from ACA-Py, that uses an ACA-Py instance and allows you to specify a presentation request and then use it in an authentication interaction. On the wallet side, the holder gets a QR code and interacts with their wallet. On the enterprise side, a relying party that knows how to use Keycloak, or some other OIDC-capable system, is able to just call this service; it knows nothing about DIDComm or SSI or anything, it just knows about OIDC. It gets back a JWT that contains the attributes that were included in the presentation request. There is a 2.0 branch in that repo that you definitely want to look at; you don't want to use the 1.0, because it was based on a component that is no longer open source, but the 2.0 will be coming out soon. I'll put the link in the chat. Yeah, it's in the chat. So take a look at that, and feel free to ask questions about it; you've actually got several people on this call doing active development in that repo to get the 2.0 branch to completion. Sounds good, thank you. Awesome. Daniel, back to you. Yes, let me remind myself. So, a point of discussion, I suppose, is: when do we stop running tests on Indy wallets? Yeah. For instance, we run a PR test for an Indy-specific build, and then we have one without. And our BDD integration tests are being built with Indy included in the image; I'm not sure if the Indy wallet type is actually getting used, or if the Askar wallet type is being used in those. I assume Askar, but I haven't looked deeply enough to know for sure. But if we did decide to stop running the BDD tests on Indy, period,
We could actually significantly improve the action runtime on that, because the BDD tests go through an image build and everything. So, yeah. I think so. I would be happy to see that removed — people should not be using the Indy SDK, bottom line. It's too old and not being maintained, and people have to migrate off it, so we have announced it's deprecated. In 0.9.0 we said it was deprecated; I would be more than happy to stop testing on it.

So, as a kind of introspection question here: if there was something that needed to be fixed on the Indy SDK, what are the chances that we would actually address it and put out a bug fix release for the Indy SDK? Considering the entire build and test pipeline for the Indy SDK has been shut down — probably very rare. It would have to be something pretty serious, and even then it would probably be faster for people to migrate to Aries. Okay. So, even if the change was scoped to ACA-Py's usage of existing Indy SDK builds? That's the main question when you ask that. You know, if we wanted to do extra work, we could switch the Indy tests to be nightly tests as opposed to PR tests — segregate them off and run them nightly versus not at all — but definitely change the PR ones to be Askar only. But any sort of fixes that we would do from an ACA-Py perspective — I mean, again, it's an open source project, anyone can do anything — our move would always be to correct by switching. There's just no working test and deployment pipeline for the Indy SDK at this point in time; even the existing one that was being hosted by the Sovrin Foundation has not been run in — oh geez, maybe a year. So even if we were to spin that back up, chances are it's not going to function and will require some care and feeding to get it working again.
The GitHub pipeline — the GitHub workflows on it — were never completed, so they only touch bits and pieces of it. And I think the point Daniel was making was: what if the bug was in ACA-Py? Yeah. So, at most — you know, maybe the thing to do is to step back, and I assume this is possible: we basically duplicate the PR action to run nightly, and remove, or comment out, the Indy tests in one copy, and comment out everything but the Indy tests in the other. That might be a deprecation-type strategy. Do you think that's a good idea, or do we just drop them entirely?

I think — Daniel, are you talking about something we might run into in the future, or something someone is running into imminently? It's hypothetical; I don't have any specific instances. For the hypothetical, I think the best way to deal with it is to continue with the deprecation and get rid of it. If we need to fix something that happens to be critical — a fix inside ACA-Py that is necessary for the Indy SDK — then we would branch and tag at that point, and release a patch off — what are we at, 0.9.0? — that fixes that issue. But moving forward we wouldn't support it, so it would basically be a one-off patch fix. And in the process of doing a one-off patch fix we can do one-off type things, like invoking a workflow that runs specifically the Indy tests, right? We don't necessarily need the Indy stuff being run nightly for that. Correct. I think that makes a lot of sense, especially considering the level of support and ability to make any sort of updates to the Indy SDK. That way we're not bringing any baggage with us — we only create it when we need it. Yeah. Technical debt was the word I was actually looking for. Okay.
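The deprecation strategy discussed here — a duplicated test workflow that runs on a schedule rather than on PRs, with a manual trigger available for one-off patch work — could be sketched as a GitHub Actions fragment like the following. The workflow, job, and script names are assumptions for illustration, not ACA-Py's actual files:

```yaml
# Hypothetical nightly workflow for Indy-only tests.
name: Nightly Indy Tests

on:
  schedule:
    - cron: "0 8 * * *"   # once a day, 08:00 UTC
  workflow_dispatch: {}    # allows a one-off manual run, e.g. for a patch release

jobs:
  indy-bdd:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run BDD tests against the Indy wallet type
        run: ./run_bdd --wallet-type indy   # assumed invocation
```

The `workflow_dispatch` trigger is what makes the "invoke a workflow that runs specifically the Indy tests" option work even after the scheduled run is removed — the file can sit in the repo (or in a branch off the 0.9.0 tag) and only run when someone asks for it.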
I wouldn't say done to the 0.9.0 tag; I would say done based on the 0.9.0 tag. Yeah, that's what I meant. Okay. What about this idea of splitting the tests and keeping a separate nightly run — is anyone willing to do that? They're pretty well separated; Daniel did a really good job of that. But I think — I guess it'll be the 0.10 release that eliminates the Indy SDK, so I would say we just drop the nightly test for it. If we're planning on getting rid of it, I think we should remove the technical debt as we move forward. And like we said, if there is a critical issue that we need to fix, then we can reintroduce that technical debt at that point in time. Okay. That way it becomes very clear that we are no longer supporting the Indy SDK. Okay — is it comment out, or remove? Okay — source control is our friend; we can always bring something back. We can look at the revision history. And you said there is a nightly run of Indy tests — do we keep that for now? I think we can eliminate that. If we're going to eliminate technical debt as we move forward, and we need to reintroduce it to deal with a specific issue, then we do it at that time. Any objections to that from anyone? Okay. Thanks for raising that one, Daniel. I will put in the ticket for that. Excellent. Great.

And we're right on time. Thanks, all, for attending. Have a great one. On the off weeks we're having maintainers meetings, so anyone interested — that is on the calendar and you're welcome to join. Otherwise it's the maintainers getting together for quite a similar discussion to what we had today; we mostly had maintainer-type issues on this one. So, see you next week, maintainers. Take care, all. Bye. Thanks. Bye.