All right, welcome to the September 19th, 2023, Aries Cloud Agent Python (ACA-Py) user group meeting. Lots on the agenda today. We are recording the call as per usual and we'll post the recording afterward along with the chat. Thanks to the Hyperledger folks, like Sean, for taking care of that. This meeting is held by the Linux Foundation and the Hyperledger Foundation, so the antitrust policy of the Linux Foundation is in effect, as is the Hyperledger code of conduct. Let's all be good to one another. For introductions and announcements: if anyone new to the call wants to introduce themselves, or wants to talk about what they're doing in the community, grab the mic and jump in. All right. From an announcements perspective, the Internet Identity Workshop is coming up October 10th to 12th. A number of people from BC Gov will be there, and I'm sure a number of people on this community call are going to be there too. Registration has of course been open for a while, and we hope to see you there. The Hyperledger member summit is in San Francisco and Tokyo this year; I'll be attending the one on the 23rd of October in San Francisco. As well, the Linux Foundation member summit is the 24th through the 26th in Monterey, California, a few hours south of San Francisco, so that is also on; I'll add a link there. Right now in Europe, Rebooting the Web of Trust is on. That meeting is happening now and I've heard some things out of it. I've also heard some interesting announcements coming out of the OpenWallet Foundation, so you might want to check out their press releases. Any other announcements people have to share with the community? All right. On the agenda, we'll start with the release of 0.10.2. We are pending some final tests before we release it.
So far we have no reason not to proceed from RC0, which we've released, to the final version of 0.10.2. Reminder that 0.10.2 is a patch release, so it does not include all of main, but rather just two targeted issues that came up in release 0.10.1. We're trying to get that pushed out. There were side issues as a result of those issues in a couple of deployments, particularly in BC Gov, and that final testing is what we're waiting on. I don't know if Wade is on the call, or if anyone has any status from that testing. Daniel Bluhm gave a heads up that there was an issue raised last night with some additional testing in one of our deployments that looks to be a possible DID resolution issue, similar to the one that was found before. It was on 0.10.2 RC0, so it was the very latest code from a release perspective. So heads up on that issue. Cool, sounds good. Okay, my expectation is we'll hear today and then 0.10.2 will be published, but as I say, we're still waiting on final word. Okay, the next topic is a big PR that was published a week or so ago by Char Howland, PR 2487, which is about SD-JWTs. So I wanted to turn it over to Char to give an overview of the work she's done and how it gets used in ACA-Py. Do you want to share? Yeah, absolutely. Sure, I can screen share. All right, can you see my screen okay? Yes. Okay, great. So just as a brief summary of SD-JWTs to start off with: SD-JWTs define a way to selectively disclose individual elements of a JSON object used as the payload of a JWS structure. I can send out a link to the spec as well here. Let's see. So this is an important privacy-preserving feature for JWTs; otherwise, the receiver of an unencrypted JWT can view all of the claims within. One thing to note is that the issuer, in this case, decides which claims can be selectively disclosable.
So the holder can only selectively disclose the claims that the issuer has designated as selectively disclosable. For the claims that the issuer has not designated as selectively disclosable, the holder has no choice but to reveal those claims when they present the SD-JWT. So if the holder wants to do a selective disclosure and only reveal some claims in their SD-JWT, each claim in the payload is hashed so that the verifier can't view it. Here I'll show this example payload: here's what the payload would look like with the claims hashed and not visible to the verifier. And then for the claims that the holder would like to reveal, they include disclosures, which are the plain-text claims corresponding to the hashed claims in the payload. So these look like this. The plain text is there: if it's a dictionary object, you have the key and the value; if it's an array object, you just have the value. This is what the disclosure looks like, and this is what the hash of the disclosure looks like, and we can see that this same hash appears in the payload. So when the issuer or the verifier receives the JWT with the disclosures, they go through each disclosure and hash it to make sure that the result matches exactly one hashed claim in the payload. If it does, then they are convinced that the claim is in the original signed JWT, but they can't view any of the other claims in the JWT that don't have a corresponding disclosure provided by the holder. So for now, we're only invoking the SD-JWT methods through the admin API endpoints added in this PR. We have SD-JWT sign, which uses the JWT sign method that was previously added to ACA-Py, and then there's SD-JWT verify, which uses the JWT verify method that was previously added to ACA-Py, as you can see right there.
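The hash-and-match mechanism described here follows the SD-JWT draft: a disclosure is the base64url-encoded JSON array of a salt, the claim name, and the claim value, and the payload carries only its SHA-256 digest in an "_sd" array. A minimal standard-library sketch; the helper names and the salt value are illustrative, not from the PR:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used throughout the SD-JWT draft
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_disclosure(salt: str, key: str, value) -> str:
    # A disclosure for an object claim is base64url(JSON([salt, key, value]))
    return b64url(json.dumps([salt, key, value]).encode("utf-8"))

def digest_of(disclosure: str) -> str:
    # The digest placed in the payload's "_sd" array is the base64url-encoded
    # SHA-256 hash of the disclosure string itself
    return b64url(hashlib.sha256(disclosure.encode("ascii")).digest())

# Issuer side: hash the claim and put only the digest in the payload
disclosure = make_disclosure("_26bc4LT-ac6q2KI6cBW5es", "family_name", "Mobius")
payload = {"_sd": [digest_of(disclosure)], "iss": "https://issuer.example.com"}

# Verifier side: hash each received disclosure and check that it matches
# exactly one digest in the payload
assert digest_of(disclosure) in payload["_sd"]
```

Claims without a matching disclosure stay hidden behind their digests, which is the privacy property Char describes.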
So for now, the admin API endpoints are the only place where we are calling into those methods, but that will expand in the future as additional support is added for issuance and verification. And let's see, I added some documentation as well in the PR. In this implementation, by default, all claims at all levels of the payload are selectively disclosable unless indicated by the issuer, with the exception of essential verification data like iat (issued at), cnf, et cetera. So if the issuer does not want a claim to be selectively disclosable for whatever reason, they have to explicitly provide a JSON path to that claim. And I walk through how that works here as an issuer. The spec lays out three different options for how to handle nested structures. Say you have a dictionary, like this address claim. One option is to just treat it as a block that can either be disclosed completely or not at all, without considering that there are individual sub-claims. In this case, if you're the issuer and you wanted to go with that option, you would not mark address itself as non-selectively-disclosable, because you do want the block to be selectively disclosable, but you would mark all the sub-claims within it, because they're meant to travel as a block. A structured SD-JWT is another option: that is, if you want the outer address claim to always be visible, but the sub-claims to be individually disclosable. So here in the payload you can see address, but within it you can't view any of the sub-claims unless they are included with the disclosures. In that case, as the issuer, you only mark address itself as non-selectively-disclosable. The third option is a combination of the first two: if you want the holder to have maximum choice over their presentation, they can either disclose address as a block or the individual elements.
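The three nesting options Char describes can be sketched in Python. This is an illustrative reading of the SD-JWT draft's options, not code from the PR; the helper names (disclose, b64url) are made up for the example:

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def disclose(key: str, value):
    # Build (disclosure, digest) for one object claim, per the SD-JWT draft:
    # disclosure = base64url(JSON([salt, key, value])); digest = SHA-256 of it
    salt = b64url(secrets.token_bytes(16))
    disclosure = b64url(json.dumps([salt, key, value]).encode("utf-8"))
    digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())
    return disclosure, digest

address = {"street_address": "123 Main St", "locality": "Anytown"}

# Option 1: address is one block -- disclosed completely or not at all
block_disclosure, block_digest = disclose("address", address)
payload_option_1 = {"_sd": [block_digest]}

# Option 2 ("structured"): the outer "address" key stays visible, and each
# sub-claim is individually disclosable
sub = [disclose(k, v) for k, v in address.items()]
payload_option_2 = {"address": {"_sd": sorted(d for _, d in sub)}}

# Option 3 (recursive): the block disclosure's value is itself an object of
# digests, so the holder can reveal the block, individual sub-claims, or both
rec_disclosure, rec_digest = disclose("address", {"_sd": sorted(d for _, d in sub)})
payload_option_3 = {"_sd": [rec_digest]}
```

In option 3 the holder ships the block disclosure plus whichever sub-claim disclosures they choose, which is the "maximum choice" case mentioned above.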
So here the payload looks the same as option one, but there are disclosures for address as a whole, as you can see here, and there are also disclosures for each individual sub-claim. So that gives you the most options. If you have a list, you can selectively disclose one or more elements of the list just by using indexing and slicing. In terms of our plans for use, we are implementing this so that we can use it with the OpenID for VC protocols and take advantage of the DID resolution and secrets management from ACA-Py. So yeah, that's my brief overview of the PR. Are there any questions? Sure, this is Robinson. I have a question about the issuer deciding which attributes are disclosable or not. I'm just trying to think of some business example use cases where it's a good thing to have an issuer decide that they're going to issue attribute data, but the holder cannot disclose it to anybody. Yeah, so you're wondering why it is up to the issuer to determine which are selectively disclosable? That's a great question. I think that's probably more a question for the spec authors. I think it is too. Yeah. I just wondered if you knew, through your example. Yeah, the idea with making everything selectively disclosable by default is to make it easier to use the privacy-preserving option, giving more options for the holder to selectively disclose. But yeah, it's a good question in terms of business applications. Great, thank you. I think one case might be, and I'm not sure of this, but I seem to recall from previous discussions, revocation information: the issuer might insist that revocation status, or a way to get revocation status, be disclosed, but then allow a lot of options on the rest of the business data, or something like that.
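On the list point Char made just above: the SD-JWT draft gives array elements their own two-part disclosure form, with no claim name, and the payload replaces a disclosable element with a one-key object whose key is the literal string "...". A small illustrative sketch (the salt and values are made up):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# An array-element disclosure has only [salt, value] -- no key, unlike the
# three-element form used for dictionary claims
salt = "nPuoQnkRFq3BIeAm7AnXFA"
disclosure = b64url(json.dumps([salt, "DE"]).encode("utf-8"))
digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())

# In the payload, a selectively disclosable array element is replaced by an
# object with the single key "...", holding the digest; other elements can
# remain in the clear
nationalities = ["US", {"...": digest}]
```

So a holder can reveal the first element while keeping the second hidden unless they also send its disclosure.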
I think there's, as Sebastian says, a good one as well: expiry date is one where we want to say that no matter what, when presenting, the holder must disclose it. That does make sense. Yeah, maybe I misunderstood it, and again, this is probably more for the spec, but the difference is between an issuer forcing attributes to be disclosed versus saying that the holder cannot disclose attributes. So revocation data, I think, would be an example, Stephen, of something an issuer forces the holder to disclose. Yeah. Right. So things just work. Same with expiry dates. I was interpreting some of this as the issuer being able to force the holder to never disclose something. No, I don't think that's possible. Okay. I don't think so. Yeah. I agree with you that this is intended to specify what things cannot be disabled, if you will, by the holder: if this is going to be presented, then this information must be provided, and that can be controlled by the issuer. That's the way I understood it. Yeah, I think that's right. Thanks. And in this implementation, the issuer has no choice for the built-ins: the iss, iat, and expiration claims are always visible, so the issuer can't even override that and make them selectively disclosable. Okay. It's a good presentation, thank you. So you said something about your intent to use this with OpenID and to use ACA-Py for secrets management. Does that coincide with the other thing about issuance and verification not being part of this at this point, because you're going to do that independently? Or do those two things have nothing to do with each other? I'm kind of dreaming stuff up here. I think those pieces are a bit independent, but I don't know if Daniel or Adam wants to jump in on this question as well. Yeah, I can comment.
So the intent with this first chunk of work was more or less just to have the baseline and the crypto implemented, so we could start building on top of it in different ways. It's already at a point where, with the admin API endpoints, it is usable in the context of having a separate service that implements the OpenID for VCI protocols, and then just using ACA-Py for the signature and the verification over payloads received through OpenID for VCI or OpenID for VP. But even though it's usable already in that state for that purpose, we don't necessarily intend to only ever achieve that level of support for SD-JWTs within ACA-Py. There's just a lot of spec work that needs to happen first in order to define attachment formats for the issue credential and present proof protocols, the V2 and, I suppose, V3 versions of both, to do issuance and verification/presentation of SD-JWT VCs. But yeah, this is just the foundational work to start leading us in that direction, so we can have that support in ACA-Py. Great, thank you. Any other questions? I was going to ask that exact question: we need an RFC for, basically, JWT attachments to enable all that? Yeah. Okay, I was hoping that's what you were going to say. Have you given any thought, and this is one of the things that I've been meaning to look at but never have: when we looked at doing AnonCreds in the W3C JSON-LD format, we found it was pretty easy to do; it's just moving data around. Has anyone given any thought to whether we can do the same in an AnonCreds JWT, just like SD-JWT? We could have AnonCreds JWTs, if you will, with the same technique. The JWT format for VCs, I think, is unique from the JSON-LD representation of credentials in the sense that the fact that it's a JWT kind of implies a certain type of signature has been performed.
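For reference, the dot-delimited compact form being discussed, where the header announces the algorithm that the signature implies, can be shown with only the standard library. The header, payload, and placeholder signature here are illustrative, not output from ACA-Py:

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    # Base64url without padding, per the JWS spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(part: str) -> bytes:
    # Restore the stripped padding before decoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

# Compact serialization: three base64url segments joined by dots.
# The header names the signature algorithm the token carries.
header = {"alg": "EdDSA", "typ": "JWT"}
payload = {"sub": "did:example:123"}
token = ".".join([
    b64url_encode(json.dumps(header).encode("utf-8")),
    b64url_encode(json.dumps(payload).encode("utf-8")),
    b64url_encode(b"\x00" * 64),  # stand-in for a real 64-byte Ed25519 signature
])

decoded_header = json.loads(b64url_decode(token.split(".")[0]))

# An SD-JWT extends this by appending its disclosures, separated by "~"
sd_jwt = token + "~" + b64url_encode(b'["salt","key","value"]') + "~"
```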
It's possible that we could define a new JWT type for the signing algorithm and then have the attached signature be melded into a dot-delimited thing just like in JWTs. It doesn't fit quite as nicely as it does into a JSON-LD object, I would say, but I think it's possible. Okay. There must be, though, different signature types: you must be able to specify, here's the signature type and here's the signature. Right. So for instance, for our SD-JWT implementation, and the JWT implementation that was submitted to ACA-Py previously, by default it's using EdDSA, so Ed25519 signatures, because that's kind of our native crypto language, so to speak, for the DIDs and keys owned by ACA-Py. But it's also possible to do secp256k1 or RSA signatures. Those are the algorithms that are supported and defined for JWTs, as outlined by the JWT spec itself, I think, or by the separate registry-style thing. So I think it's possible for us, again, to define one of those signature types and to serialize everything in a similar way to what we're doing with AnonCreds in W3C format and cram it into a JWT. At the same time, I'm not sure how much value we'd derive from it being formatted as a JWT versus a JSON-LD object, when I think a lot of the appeal of the JWT for others in the community is that it's simple and there's a well-known set of signatures and lots of libraries already out there for creating and verifying JWTs. Yeah, the main reason would be enabling OpenID for VCs to be used with AnonCreds. That would be the big value. Then you're simply using a JWT, just a JWT with a signature that has AnonCreds capabilities. So the OpenID for VCI protocol also defines a way to transmit JSON-LD formatted credentials, but the way the spec is defined, it's strictly expecting linked data proof signatures, I think, over those JSON-LD objects.
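To make those signing mechanics concrete without pulling in an external crypto library for EdDSA, secp256k1, or RSA, here is the same dot-delimited construction using HS256 (HMAC-SHA256), another algorithm registered for JWTs that the Python standard library can compute. This illustrates the JWS signing-input scheme only; it is not how ACA-Py signs:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_jwt_hs256(payload: dict, key: bytes) -> str:
    # The signing input is ASCII(base64url(header) + "." + base64url(payload));
    # the signature over it becomes the third dot-delimited segment
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode("utf-8"))
    body = b64url(json.dumps(payload).encode("utf-8"))
    signing_input = f"{header}.{body}".encode("ascii")
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt_hs256(token: str, key: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode("ascii"), hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt_hs256({"sub": "did:example:123"}, b"shared-secret")
```

Swapping in a different registered algorithm, such as the EdDSA default mentioned above, changes only how the signature bytes over that same signing input are produced.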
So there still might be some spec work required, but the general shape of JSON-LD credentials is supported by OpenID for VCI. Okay. So the path would be to stick with the W3C format, JSON-LD and data integrity proofs, and use that with OpenID for VCs. Yeah, I think so. Okay, thank you. Awesome. Cool. Thank you, Char, that was excellent. Great work. So the status of that PR is that it's ready for review. There's been at least one review done of it, and we need others, so I really encourage people, developers, maintainers, to take a look at it. I know Andrew went through it yesterday and gave a couple of comments. Daniel's gone through it, and I think you're pretty much happy with what it's doing. I would like to see a few more people review it, and then let's get it merged. One final comment on that: we're depending on a library that was published by the OpenWallet Foundation, and that library is currently only available as a Git dependency, pulling it in from the repository. We have reached out to them to see if they're planning to publish it to the Python Package Index, and they said they're not sure when that's going to happen. So we're good to merge, but that publication is happening eventually, and that just means we have to keep monitoring it and then do the upgrade as soon as possible afterward. Right. Yeah. Okay. Awesome. Okay, next up: Hyperledger AnonCreds, anoncreds-rs, and ACA-Py. Two things have happened since our last meeting. I'll hand it over to others, but I'll just highlight: we do have a 0.2.0-dev1 release of anoncreds-rs, which includes the latest CL signatures. Well, I was hoping that meant they'd actually changed the versioning and marked the version, but they haven't actually tagged and released 0.2.0-dev1. I'm not sure why this split, and I'm asking the developers there why they haven't done a release.
I think Andrew said I could go ahead and do that, which I haven't done on that repository, but I will if nobody else is moving. So I'll figure that out. But Daniel, I think that opens the way for you to merge into the anoncreds branch: the changes made recently in main to include the latest CL signatures code can now be merged into the anoncreds branch. Am I reading that right? Yes. Excellent. With that release available, we'll be able to make the same set of changes on the ledger-agnostic AnonCreds side that were made on the Indy credx side. Yeah, and the tails file handling is the main place that happens: you no longer, as an issuer, need to have the tails file available for issuing and revoking credentials; instead, just the keys are passed in to the various issuing and revoking routines. So that's good. The other work is what Jason Sherman has been working on. Jason, are you there and want to talk about what you're up to? I am here. So based on all the work that Daniel did getting anoncreds-rs in place, we're just kind of filling in the gaps there, slowly, unfortunately. I think I previously mentioned that we've got the issue credential API and present proof API implemented using AnonCreds in the background, so there would be no real changes for anybody using them programmatically; it'll just be AnonCreds on the back end. And I've been slowly, very slowly, working on the revocation API. I'm putting in some changes this morning, I think, which will knock that off, with a huge caveat: as part of deciding what to leave in the revocation API, we took out a lot of the maintenance-slash-repair kind of calls, which came in really handy last week for a certain use case. So we'll have to discuss later what we want to do with that. It's great to have everything automated and simple, but what do you do in such a case where things go awry?
Anyway, there are still quite a few tickets remaining around that. It's kind of a long-term project, but we're just knocking things down, so I think we will be getting into the stage where we're addressing things like how we're going to do migrations and such. From an API perspective, other than those repair-type endpoints that I just mentioned, I think we're looking pretty good, and now it's on to really fixing things up at the lower levels, like how we merge or move data into AnonCreds from existing systems. So at least the ball's rolling; we're making progress. For my tiny little brain it's quite complicated, but revocation is the hardest part, so getting past revocation, and when I saw your note yesterday with the couple of tweaks that Daniel suggested, which look like they wouldn't be a big deal, that is a big milestone, because revocation is where the changes happened the most in transitioning from credx to anoncreds-rs. So that's really good. The one thing I wanted to ask: is rotate still supported? So rotate came in after we specced all this stuff out, so currently it's not, but I'll have to put in a ticket. Again, we're going to have some cleanups in here, like I said, to address the manual repair stuff, and that should include rotate. So that is not in, but we will have to decide what we're doing with that whole section. I can either do it now or leave it as a separate ticket; I kind of want to close this ticket. Yeah, I agree, that's a separate ticket. Rotate will be needed. I think we're okay without the others, but rotate will be needed. Yeah, so I'll add that into our project list, just to make sure we don't miss it. I noticed that one actually wasn't in there, because we started this project before we did the rotate work.
So, just so people are aware of where the little story comes from: when I started this agenda, we talked about the 0.10.2 RC0 and the two patches that are in it from 0.10.1. Where we discovered the problems in 0.10.1 was in upgrading a dev environment to use 0.10.1, which wound up screwing up a couple of revocation registries. So when we got 0.10.2 RC0 and were able to properly do the upgrade, we repaired that environment, and in doing that we used the tools for repairing, which are just the endpoints for manually manipulating a revocation registry. That's the first time, I think, we've ever used the manual methods; Wade in fact found problems in the documentation, because it had never actually been done. So I'm less concerned about those repair capabilities, but rotate will be needed. Anyway, that's the story behind both where we are with 0.10.2 and the revocation pieces. Yeah, I kind of agree, but let's have a discussion, maybe also with Wade, about the repair capabilities. There are a few different scenarios, also for other unsafe operations that we were discussing, where we should put in a way of getting out of trouble. Yes. The main one is the ability to have an operation that takes effect in the wallet but doesn't take effect on the ledger, leaving a mismatch. That's the big one we've got to address. Yeah, exactly. As I say, the main thing for Jason right now is to get the core of it working, and working in an automated way. The idea of adding additional tasks for more capabilities is easy enough; the approach is, let's get the big picture working, and then we can add additional features. All right, any questions on that progress? Okay. These two issues: I've briefly looked at them but really haven't done anything, but we did talk about defining the proposed behavior, I believe. So these two PRs are
changing a default behavior. Currently there is an old behavior and a new behavior; the old behavior is the default, and the new behavior is invoked by adding two startup parameters: --emit-new-didcomm-prefix and --emit-new-didcomm-mime-type. These take effect on the message type prefix, which is just a string, nothing more than a string. It looks like a URL, but in fact it's just used as a string for matching. This was changed years ago in the community, and ACA-Py still defaults to the old way, so everyone's got to include the flag to use the new way. Media types are the same thing: the media types go into a DID Doc, and we should be using the updated ones, but not all implementations use the emit flag; they're just not aware of it. So the proposed behavior, and I just wanted to review this with folks on the call and document it, is: today we default to the old behavior, and we have a flag to emit each of the new behaviors, one flag for each of these two issues. We're going to change the default to emit the new behaviors, as if the flags were always set. We're going to leave the existing flags, so we don't error if they're used, but they'll effectively do nothing. And we are not going to add a new flag to enable the old behavior. The rationale for this, and this is what I wanted to check with the community, is that because all of the known Aries implementations can accept the new prefix and the new media types, there's not really a need to allow enabling the old behavior. It's been many, many versions that we've been accepting both, so we really don't need to go back. Does anyone have experience on whether this is a problem, or are we good with this? That's what I wanted to see: a thumbs up from Daniel. Anyone else? Good stuff. Okay, excellent, we'll go ahead with that one.
Community happenings: I wanted to raise awareness of a Hyperledger Labs proposal that Mike Lodder has put forward, called Agora Link; the link is in here. What this is, effectively, is all of the work that Mike did over the last couple of years in various communities related to what amounts to an AnonCreds v2. Basically, this would give access to open source code for building out an AnonCreds v2 from a pretty solid base, so we're starting to look at it. There were a few comments put in; let's see. We've got other pending approvals, and I'll ping them, because I'd really like to get this merged. What this does is really open the door for AnonCreds v2 work to start not from scratch, or from anoncreds-rs, but from a pretty solid foundation of existing code. So do take a look at that, those of you who are able or interested, and you'll hear more about this over time. I'm assuming this will be accepted: Tracy Kuhrt, the chair of the Technical Oversight Committee at Hyperledger, has approved it, and we're looking for the remaining Hyperledger Labs stewards, as they're called, to agree, and then we'll move it forward. Questions or comments on that? Okay. did:peer progress and discussion. Our developer on this, Jason Syrotuk, has been off for a week and also has some other work that has to get done, so we're sort of paused on where we are with did:peer. But in the community, a PR has been published, which I'm hoping gets merged soon, to the did:peer spec, that outlines the did:peer:4 method. And there's a reference implementation that Daniel Bluhm has built that's available. So this is pretty great stuff. Compared to the other DID methods, I think it's super clever thinking from Daniel and Sam, and compliments to you on really just looking at what's been done, looking at the implementation, looking at the effort, and saying there's a far easier way to do this. That is great stuff. And then the Rotate
DID PR in the Aries RFCs repo has been added by Sam. It's been reviewed by a few people; we do need a little more community review, but I imagine by tomorrow's Aries working group meeting it might well be ready to go. And Daniel Bluhm has put a work-in-progress PR into ACA-Py to implement the new protocol, which is awesome. Daniel, do you have any comments on what you experienced and your thoughts on it? Overall, things went pretty smoothly with the implementation. I had only a couple of minor suggestions for some additional error codes, for the event that you resolve the DID but are unable to find anything in the resolved document that enables DIDComm. Okay. But yeah, that was the only feedback I provided back to the protocol upstream. In terms of the protocol, my main goal with doing an early implementation was to fill out the protocol and make sure that there were no unexpected rough edges, and I think that turned out pretty well. From the ACA-Py side, there were some considerations in terms of caching the connection targets and how we go about invalidating that cache in a way that will actually propagate across a cluster. I put a bunch of detail in the work-in-progress PR if anybody's interested in that discussion, but overall it went pretty smoothly. Thank you for doing that; that's awesome. It's really important to get implementation experience in parallel with protocol definition. We've really found over the years that when we get on our high horse and define a protocol without actually having developers build it and go through what the spec says, we get into problems. So that work you did this week was key. Yeah, absolutely. Excellent. The AATH tests are still failing because of the Poetry commit. We've narrowed it down: AATH terminates the ACA-Py
instance and then restarts it between tests, and it's on the terminate that it's failing. We just don't know why yet, but we've narrowed it down. We did find that switching to Askar gave out more information, and we should have switched AATH to Askar long ago, so that will happen very soon. But in the meantime this is still an issue, and I've got to figure out how we can make progress on it. Ian Costanzo did take a look at it and basically came up with the same evaluation, but it's not clear why that restart is the issue. Jason? Yeah, I was just going to say that I haven't really been looking at it, because I've been trying to concentrate on wrapping this up, but maybe after this PR is complete I can hop over there and spend some time. My head was completely somewhere else, and that's why I wasn't really looking in on it. Gavin looked at it too; unfortunately both Gavin and Ian struggled to get their environments running the tests, which is weird, because I was able to run them trivially. We'll see if I have the same issue; as always, I'm a little worried you're going to have the same problem. Ian did get it to work. I'm not sure if it was Windows-environment related; I know Gavin was on Windows and using WSL. So I think after this PR, maybe I can just take a break from AnonCreds for a second and hop in on that. Yeah, I'm there. Ian's suggestion, just so you know, was simply to change the AATH ACA-Py backchannel to use Poetry instead of pip, and it wasn't clear why that would make a difference, so it's not a great suggestion, but it isn't a bad one either. It's something to look at, anyway. Okay. Sorry, I'm bringing something up for discussion. Yes, go for it. This is did:key related. Let's see, there is one other issue I was going to see if I could grab here real fast. It's not jumping out at me in the
issues list, though. Okay, I'll just describe the other issue. We recently added support for publishing endpoint information from ACA-Py when there's a mediator in front of ACA-Py, so that the service endpoint is properly set to the mediator's endpoint and routing keys are included. It recently came to our attention that we're still putting base58-encoded keys instead of did:keys in the value that we're publishing to the ledger. A related issue is that in the mediation grant webhook that gets published to controllers, the routing keys are base58-encoded instead of did:keys. And that's not because they weren't sent as did:keys from the mediators; it's because we're storing them in our wallet, in the route records and the mediation records, as base58-encoded values instead of did:keys. That was a transitionary step from what we were doing previously to supporting did:key within the protocols: normalizing to base58 and storing as base58, to minimize code changes. But now we're running into the limitations of that partial normalization, so we're coming back to it and adjusting things so that we're storing did:keys, which means we'll be able to publish did:keys as well as report did:keys in the webhooks. Alex on my team has a draft PR that we're working on; it's on Indicio's fork right now, so I'll just drop a link to it in the agenda. There it is. It basically goes through and swaps the normalization: still accepting base58-encoded keys for continued backward compatibility, but instead of normalizing did:keys back to base58, we're normalizing base58 to did:keys, and then making sure that we're always interacting with and storing did:key values instead of base58. The one challenge that we're actually experiencing right now,
The one challenge we're actually experiencing right now, and the question I wanted to raise, is this. For the sake of not having band-aid code around for the rest of forever, I think it makes sense for us to go through an upgrade step to update old records within the wallet, modifying those values to be did:key instead of base58. That way, when we pull those values out of the wallet, we can expect them to be did:key instead of having a translation step every time. So we were going through the upgrade process and trying to peel it apart to see where the right spot was to insert that logic. There's a resave-records routine that's built in, and there's also a define-your-own executable routine within the upgrades. Yep. Are there strong feelings on which way is right for that? It seemed like with the resave approach we would have to do the translation in the record itself anyway, because for the resave to have any effect we would need to deserialize from base58 to did:key and then serialize back out as did:key, and so the band-aid code I was hoping to avoid ends up sticking around in that scenario. Okay, so you're saying the script right now calls resave, but then we'd still have that conversion on every resave? Right. When we hydrate a route record or a mediation record, we would always have to perform a conversion from base58 to did:key, and then on the resave routine, when it saves the record back out, it dehydrates again, and that time it stores the value as did:key. But, and maybe this is a vain hope, I'm hoping not to have to anticipate values coming back as base58 and do that check within the route record or the mediation record for the rest of forever. So you're saying: in the update script, have it read in a record, do the transformation in the update script itself, and save it back.
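The approach being weighed here, rewriting the stored records once in an upgrade routine so the record classes never need base58 fallback logic afterward, might look roughly like the following. This is a hypothetical sketch: the record shape, the `migrate_routing_keys` routine, and the abbreviated `to_did_key` stand-in are illustrative, not ACA-Py's real storage API.

```python
# Hypothetical sketch of the "custom executable" upgrade routine discussed
# above: walk the stored route/mediation records once, rewrite any base58
# routing keys as did:key, and save them back.

def to_did_key(verkey_b58: str) -> str:
    # Stand-in for the real conversion (multicodec 0xed01 + multibase
    # base58btc); abbreviated here for the sake of the sketch.
    return "did:key:z" + verkey_b58


def migrate_routing_keys(records: list) -> int:
    """Rewrite base58 routing keys in place as did:key; return records changed."""
    changed = 0
    for record in records:
        keys = record.get("routing_keys", [])
        fixed = [k if k.startswith("did:key:") else to_did_key(k) for k in keys]
        if fixed != keys:
            record["routing_keys"] = fixed
            changed += 1  # in a real upgrade, the record would be saved back here
    return changed
```

Because the routine skips values already in did:key form, it is safe to run repeatedly, which matters for an upgrade step that is "filed away for the rest of forever."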
That way the load and save logic doesn't have to think about it; it just gets the right values. Yeah, so that would mean we write one of those custom executable routines in the upgrade script, and it would be filed away under the upgradable things for the rest of forever. I'm not sure that's the right approach, so I was hoping Shaanjot would have thoughts or opinions on how we should implement this for the upgrade, since I know Shaanjot has been involved there. So, if I understood correctly, simply resaving a particular record type won't solve the issue; you have to have a custom conditional to check the value and make the manipulation, right? Okay, in that case I think the custom executable is the way to do it. Okay, that's what I was looking for. Thank you, I appreciate it. So with that, you would basically do an invocation of it: you'd start up ACA-Py and say "fix the wallet," and that's how it would work? Right, yeah, as an upgrade script. Okay. So we're saying we now expect that all route records and all mediation records will always have did:key values in them, and to achieve that state we go through the upgrade step, to make sure that records on existing deployments stored prior to this change are consistent with our updated expectations for those values. How long have we handled did:key in routing? For at least a year, I think. That's a good answer, yeah. So we've accepted did:key in the protocol, in messages, but we've continued to store the values as base58. Okay, good. I'll put some notes into the agenda item for this one. That's great. Did you remember what your second issue was? I know what it is; I haven't spotted it yet, so I'll see if I can link that as well real fast. Wait, do you mean the ones I put in the chat? I put in two separate issues. Yeah, 2357 is the other one. That's right, yep. So this one was more about an inconsistency between what
the OpenAPI spec defines as expected to come back from, for example, the mediation record: it expects did:key, but we're still emitting the routing keys as base58. Yeah, so this will be solved by the same correction. Okay, so, same root issue as the other one. When we fix this, should we be putting a release out? I'll think about how the release coordination goes with this, because I'm thinking we want to update the version, since the version change is what would trigger the upgrade. Okay, I'll think about that. Good.

We're almost done. I think that brings me to the next topic, which is a plan for two weeks from now. I think we are very close: our goal was always that 1.0 was going to be AIP 2.0, and with these changes I think we're pretty much there, with the exception of please-ack. And I believe DSR possibly has a new developer who was going to work on that as a startup task, so they were looking at it. I'd like to use the meeting two weeks from now to go over the checklist of things we would need for a 1.0 release. So I will plan that; if anyone has thoughts and ideas on what that would mean, this ties into the long-term support conversations that were had at the last ACA-Py user group meeting and at last week's Aries Working Group call. I'll plan that for our next ACA-Py user group meeting, two weeks from now. And with that, we'll wrap up. Anyone have any other comments they wanted to raise before we close?

One final request: you mentioned that on the 0.10.2 RC0 you're having some DID resolution issues. I would not mind being tagged on those. Absolutely, yeah. The message came up on internal chat last night, and I've asked the developer to raise it and to flag you on it. Okay, cool. You'll see it, yeah. Great. None of the earlier issues had any hint of anything related to actual resolution; all of the others had to do with the revocation messing-around and so on. The revocation problems came about because of the two issues that we've patched, right, but this is the first one with a hint of a resolution problem, so we'll take a look at that today. Okay, cool. Excellent. Thanks, all. Have a great day, have a great week. Thanks, bye. Bye, thanks, everybody.