All right, welcome to the September 5th, 2023 Aries Cloud Agent Python (ACA-Py) user group community meeting. Lots to talk about on the agenda: releases published, tests failing, did:peer to discuss, other status reports, and open issues. We are recording the call, so we'll post that later. A reminder, this is a Linux Foundation / Hyperledger Foundation meeting, so the Linux Foundation antitrust policy is in effect, as is the Hyperledger Code of Conduct. If anyone is new to the call and wants to introduce themselves, feel free to do so now; just step up to the microphone and say who you are and why you're here. All right. Announcements: IIW is coming up October 10th to 12th in the San Francisco Bay Area. A number of us will be going; hope to see folks there in person. Over the next several weeks we'll probably start to think about specific presentations and sessions we want to call as part of IIW. The Hyperledger Member Summit is October 23rd in San Francisco and Tokyo, so there is one in the Bay Area. There's a link in the agenda to the Member Summit; there will be identity sessions and so on at that meeting. And then a reminder that there is a new Zoom link for the Aries Working Group call. For those who join us each Wednesday, the Zoom link has changed as of this week, so if you have the old link in your calendar, make sure you note that for tomorrow's meeting. Any other announcements from anyone that we should add? Anyone have anything they want to bring to the group's attention? All right. Release 0.10.1 was published. Notice it's 0.10.1, not 0.10.0. When we went to publish 0.10.0, PyPI gave us an error that the source code zip file already existed. Not sure how it could have existed already, but we immediately bumped the version number to 0.10.1 and republished so that PyPI would have all of the necessary artifacts.
So that's been done. 0.10.0 has been out for a bit. Then we merged a couple more PRs, and now the Aries Agent Test Harness (AATH) is failing. Almost certainly the issue is the poetry PR, which kind of makes sense, since AATH's Dockerfile only has the pip install part of it; that's what's causing us grief. Daniel, you mentioned being able to use the nightly build, and I think that's a good idea, and we could transition to that. The only issue I see is that it's really nice to be able to pull in a current commit and run the tests easily against an existing branch off somebody else's fork or anything else, and we can do that really easily today in AATH. That's the only thing we'd lose. I'd like to find a way to use the nightly for most of the AATH runs but still be able to grab a commit, so we might be able to do both. Probably the right thing to get past this immediate problem is to switch to the nightlies. Any comments or suggestions? That makes sense. Being able to pull in changes from wherever and run them in the Aries Agent Test Harness is definitely something I agree we'd want to retain. There might be some options for building an image from cloned code and then just trivially using that image, rather than pulling one within the AATH stuff. But yeah, that makes sense. Interestingly, it was doing exactly that that allowed me to realize it was the poetry PR causing all the grief. Interesting. So yeah, all I did was go in and tweak one file, and then you can run the test run in a GitHub Action and get it done. So I'd really hate to lose that feature, but on the other hand, the nightly would also be much, much faster and use way fewer resources. OK, we've got to figure out how to get that fixed.
We'll talk about it as a team at some point later today or tomorrow to figure out who can get that resolved. So we've got to figure that one out. The two commits are there, as I say, but it's clearly the poetry one that causes it, and it makes sense that it is. OK. Next up is did:peer progress and discussion. Jason Syrotuck wanted to talk today, but I'm not seeing him on the call, unfortunately. Daniel, the two things I wanted to mention, to make sure this community knew about them in case you missed last week's Aries Working Group meeting: did:peer:4 has been introduced, and some things are happening in the did:peer spec. First of all, the did:peer spec has been updated to remove all of the things that were, as I put them, aspirational when the spec was put together. Notably, the idea of being able to update a peer DID and things like that just never came to fruition: no protocols were created, there were no Aries ways to rotate keys in DIDs, and as DIDComm moved along it became clear it wasn't needed. So all of that baggage in the did:peer spec has been removed. The next thing we want to do is both introduce did:peer:4 and deprecate the others — did:peer:0, did:peer:1, did:peer:2, and did:peer:3. That's the intention for the next short while. And the last thing is to mention what did:peer:4 is. For those that missed it, it was an idea that Daniel Bluhm and Sam Curran put together, and it's basically did:peer:2 and did:peer:3 combined into a single mechanism. did:peer:2 is a way to express, within the DID identifier, an encoding of the DID doc on which it's based. did:peer:3 is a way to reference the did:peer:2 identifier in a short form.
So that every time you are referencing the DID, you're sending just a short version of the did:peer:2 that was already transmitted. did:peer:4 has the same concepts but combines a short form and a long form DID, and does that in the same manner — the same technique, I shouldn't say compatible — that did:ion and any Sidetree-based DID method use. Basically there's a short form of the DID and a long form of the DID; they are "also known as" for one another, and that allows us to do the same thing that did:peer:2 and :3 did: on the very first use you send the long form DID, and on any later use you can send the short form, with the assumption that the peer you're communicating with has already kept it. So that's what did:peer:4 is. Any questions on that? Any comments? Okay. That sounds pretty good. I think I was surprised to find out that the short form did:peer:3 isn't the same as the prefix of did:peer:2 — you could just strip it off, but I think it hashes it again. Yeah, it hashes the actual identifier. That was the plan. did:peer:4 also changes how the DID doc is encoded: it doesn't do any of the fancy handling that did:peer:2 does, it just says you can use any DID doc, and it encodes the DID doc. And then the short form is a hash. I believe — Daniel, what's the short form of did:peer:4? It is indeed a hash over the encoded document. Yeah, okay, sounds easy. So did:peer:2 encodes the document; short form did:peer:4 hashes the DID doc, and long form encodes the DID doc. Does the long form include the hash as a prefix? Yes. The full did:peer:4 is did:peer:4 followed by the hash in multibase encoding, then another colon, then the full encoded document.
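To make the long-form/short-form relationship just described concrete, here is a rough Python sketch. It is illustrative only and not spec-exact: the real did:peer:4 encoding uses multibase/multicodec and a multihash, while this sketch substitutes plain base64url and SHA-256 as stand-ins.

```python
import base64
import hashlib
import json

def _encode(data: bytes) -> str:
    # Stand-in for the spec's multibase/multicodec encoding -- illustrative only.
    return "z" + base64.urlsafe_b64encode(data).decode().rstrip("=")

def long_form_did(did_doc: dict) -> str:
    # Long form: "did:peer:4" + hash-of-encoded-doc + ":" + encoded doc.
    encoded_doc = _encode(json.dumps(did_doc, sort_keys=True).encode())
    doc_hash = _encode(hashlib.sha256(encoded_doc.encode()).digest())
    return f"did:peer:4{doc_hash}:{encoded_doc}"

def short_form_did(long_did: str) -> str:
    # Short form: drop the encoded document, keeping only the hash segment.
    return long_did.rpartition(":")[0]

doc = {"service": [{"type": "DIDCommMessaging"}]}  # toy DID doc for illustration
long_did = long_form_did(doc)
short_did = short_form_did(long_did)
```

The key property — mirroring the discussion above — is that the short form is derivable from the long form by anyone who has seen it once, so the long form only needs to be sent on first use.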
Yeah, there are links in that issue at the top to where that's all defined. Sounds good. Okay. The next thing that happened was the introduction — maybe Daniel or somebody could put the link to the RFC PR in the chat — of a new RFC called DID Rotate, for DIDComm v1, which has never had a way to rotate a DID. This is simply a message you send over that says "rotate my DID to this," and the expectation is that the other side would update their various connection-related data to use the rotated DID and send back an ack. And then, as a follow-on to the initial proposal, we added a way to hang up. The other message within DID Rotate is hangup, which basically says "I'm no longer going to respond to anything on this relationship" — something we had long talked about as the "it's not you, it's me" or breakup protocol; now it's called hangup. So that's all in there. Right now we're wrapping up work to move from unqualified to qualified DIDs. Jason has almost completed that work and wanted to get together to have a conversation about it; Daniel, Andrew, if you could coordinate with Jason as to when that meeting will happen today — he wanted to have it today. Well, Jason Syrotuck is here. Hello. Did you want to use this meeting to talk over the wrapping up of this work, or is it really just an Andrew-Daniel-you meeting? Given there are some decisions to make relatively quickly, I think a smaller forum, but I can certainly provide an update to the group. So yeah, I've been looking into moving to did:peer:2. There are multiple components to that once I got into it, so maybe I'll break it down. There's a pull request that's open — I don't see it linked there, but there's the issue, and the pull request is 2353.
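Going back to the DID Rotate RFC mentioned a moment ago, the two messages might look roughly like this in DIDComm v1 terms. The protocol URI and field names here are my assumptions from the discussion, not copied from the RFC, so check the actual RFC text before relying on them.

```python
import uuid

# Assumed protocol URI -- inferred from the discussion, not verified against the RFC.
DID_ROTATE = "https://didcomm.org/did-rotate/1.0/"

def rotate_message(to_did: str) -> dict:
    # Ask the other party to switch this relationship to a new DID.
    # The "to_did" field name is an assumption for illustration.
    return {"@type": DID_ROTATE + "rotate", "@id": str(uuid.uuid4()), "to_did": to_did}

def hangup_message() -> dict:
    # Declare that we will no longer respond on this relationship
    # (the "it's not you, it's me" message).
    return {"@type": DID_ROTATE + "hangup", "@id": str(uuid.uuid4())}
```

On receipt of a rotate, the other side would update its stored connection data to the new DID and reply with an ack; a hangup needs no reply at all.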
In that pull request there are multiple changes. A couple of components: one, we want to know how to resolve these things, and that's relatively simple — deconstructing did:peer:2s and converting them into did:peer:3s is relatively straightforward. The other component is that we obviously want to leverage that: we want to detect when we've received did:peer:2s and resolve them accordingly. We also want to make this part of the community coordinated updates. So by default it's going to keep its existing behavior — it's going to continue to send unqualified DIDs unless you've set a specific flag for testing purposes. If you receive did:peer:2s, we want to respond with did:peer:2s, because clearly the counterpart is compatible and understands them, and that's the preference. So that part is relatively straightforward. The complexity arises in how ACA-Py has handled existing connections and existing DID documents. It has a custom DID doc class that relies on, or expects, some old shapes. The context link for how the DID doc should work is the right link, but that spec's been updated, and the old document class is actually using values that aren't in its own context. So we've been discussing how to update ACA-Py to use pydid and its DID document specification: it's got much better typing, it's got enforcement, and it's up to date with the actual spec. But we have all these old documents lying around, so how do we manage them? Do we put in two streams that use old classes and old methods for the old docs, or do we try to upgrade the old documents into the new class? There are a couple of ways to do that, so there's some discussion on what exactly the best way forward is. And it's also kind of my first time diving into the unit tests, which has caused some hiccups.
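The did:peer:2 to did:peer:3 conversion mentioned above is deterministic, which is what lets both parties independently compute the same short identifier. A rough sketch — with the caveat that the real short form is a multihash of the identifier in multibase encoding, not the bare hex digest used here:

```python
import hashlib

def peer2_to_peer3(did_peer_2: str) -> str:
    # Illustrative only: the real did:peer:3 short form is a multihash of the
    # did:peer:2 identifier in multibase encoding, not a bare hex digest.
    if not did_peer_2.startswith("did:peer:2"):
        raise ValueError("not a did:peer:2")
    digest = hashlib.sha256(did_peer_2.encode()).hexdigest()
    return "did:peer:3z" + digest

peer2 = "did:peer:2.Ez6LSbysY2xFMRpGMhb7tFTLMpeu"  # made-up example identifier
peer3 = peer2_to_peer3(peer2)
```

Because the conversion is a pure function of the did:peer:2 string, a resolver can cache the mapping and answer did:peer:3 lookups for any did:peer:2 it has already seen.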
So yeah, sorry, not super prepared for this, but I've been working on it for a while, so hopefully that provides some context. If people are interested in the did:peer work, reach out to me on Discord, or you can obviously reply and comment on the PR. Part of the discussion is that the pull request has 45 changed files and does multiple things, so part of the discussion is to decompose it into more targeted, iterative changes so they can be better understood by the community, and with a little less risk, because we're talking about changing the way connections are established. We want to make sure we have really good confidence and take incremental steps without overhauling anything too dramatically. The usage of these things is the same — the out-of-band protocol — it's just the DIDs and the documents that are being exchanged, and how they get stored and managed. The workflows are the same; it's just the artifacts that are getting passed around. Stephen, anything you want to touch on, or are there any questions from the group? That sounds good. And you missed the section where we were outlining did:peer:4? Right, yes, I'll click on those links and maybe check them later. Yeah, I mean, it's going to be a very minor tweak from what's going on with did:peer:2 and :3 — it basically combines those, and most of those changes, the implementation, will presumably be in the peer DID library. Yeah, I hope so. There are a couple of things you'd need to add to it, but it's open and available, so making those updates should be pretty straightforward. Yeah, the concepts are totally the same as did:peer:2 and :3. Perfect. Okay. Any other questions or comments on the unqualified-to-qualified DID transition we're making? Good.
Okay, an update again — this time Jason Sherman, if you have anything to comment on AnonCreds Rust and ACA-Py. Anything you want to say, or does that work continue on and you're making progress? Yeah, that's really about it. Hopefully by next meeting I'll have something more to talk about — what is missing during the transition, or what things people have to be aware of when we publish this out. So that's about it; it's ongoing. Daniel. So I just wanted to check back in: are we still waiting on a new release of the anoncreds-rs library before we go through with merging main — is that what we're waiting on still? No, it's that the revocation API hasn't been updated. Regardless of the anoncreds-rs stuff — which we should have the latest of before we do it — the API as it sits isn't fully implemented or finished for what we had talked about in the issue on using AnonCreds. So hopefully I can finally wrap that up this week, and then we can do some polishing and get the latest Rust package in there. Yeah. The most recent issue I ran into was just the mapping of the different revocation items from the old /revocation endpoints to the /anoncreds/revocation endpoints, with the idea being that we would redirect the existing endpoint to the implementation of the new endpoint. But that was problematic with revocation just because so many things changed in the data structure. So now we've gotten to a point, I believe, Jason, where you know what you can safely ignore — what breaking changes are permitted, particularly in the response coming back — and therefore can update both the implementation of the endpoint and the unit test cases. Yeah. So that's where I'm at right now: I've done what I think I can do, and now I've got to go do what I probably should have started with — there are some BDD tests that cover the endpoints, but not all of them.
So I don't know if I'm going to have to go back and actually write my own stuff to understand exactly what the previous responses were and compare. It may be a little more work, but the problem with the unit tests themselves is that when you're testing a route, everything's mocked underneath the route. Basically the tests just check: are you passing in the right parameters, and does it blow up when we expect it to blow up? So they're not of any use when you're trying to say, "hey, I expect this actual response back." Exactly. So there are some in the BDD tests — which I just had blinders on about, because they're commented out and I forgot that I commented them out. So that's where I'm back to now. Hopefully that'll really push this forward so that I'm not coding blind, so to say. And then if there are still some gaps, I'll probably have to write some scripts to hit an actual instance, look at the responses, and compare them to what I'm actually doing now. But I think the ball is rolling now, and hopefully I'll be a little less frustrated and we'll have some good progress. But yeah, as it stands, we're not quite at the point where it's just the anoncreds-rs package we need to update — I've still got work to do on my side, yeah. Okay, that makes sense. But maybe to ask a little more pointedly on the anoncreds-rs side of things: has there been a release with the changes analogous to those that were made in the indy-credx 1.0 release? No. So there's a series of tickets still, right? I believe you and Andrew are going to have to work on the areas that were touched in both the anoncreds and the indy-credx stuff. Right. And sorry, I'm not being very precise with my questioning here — I'm asking more on the Rust side of things. Oh, okay, sorry. Yeah — whether there has been a release of our dependent library for us to pull in yet.
Andrew, so the question is a little odd, because credx pulls in — oh no, it pulls in CL signatures. Has anoncreds been updated to pull in the new anoncreds CL signatures? Yeah, I think that's all merged. Okay, that's what I thought; I had a couple of PRs updating to newer versions there. But the most recent release of the anoncreds-rs library is only 0.1.0, and that was released on June 2nd. We still need a release, yeah. Okay. So there has not been a release — PRs have been merged, but we don't have a release with those PRs yet. Oh, okay. That's huge. Andrew, is it the wrappers that are holding that up? There's not much lined up there — a couple of little fixes and a new Kotlin wrapper. Okay, so there's no reason the release hasn't been done; it just hasn't been done. I don't think so. I know the Animo folks have been updating everything to Node.js 18; I'm not sure if it was done for this crate, but it could be done in another version as well. Wait, that doesn't exist anymore? Oh well. Let's get that on your plate to push through that release — I assume you're the one that would do that release or trigger it. I mean, I haven't been doing the releases on anoncreds. Okay, the Animo folks have been doing that? Yeah, it looks like Timo did the last one. Okay. Although I think I just have to hit the release button. Do it. Well, there are a couple of little PRs. All right, well, we definitely need that done as soon as possible. Yeah, thank you, Daniel — I didn't realize that hadn't been done. I think when we have that — my time is still limited, unfortunately — I should be able to squeeze in some time to work on getting main merged into the anoncreds work. And I'm hoping that will be isolated enough from the work Jason's been doing that I should be able to do it in parallel. But I'll, of course, be communicative about that.
And if I run into any issues, I'll let you know. Okay, good conversation, yeah. Andrew, anything you can do to make that release happen sooner rather than later — if it's just hitting a button, go for it. Well, there are two PRs here. The JavaScript wrapper, I noticed, wasn't actually releasing any of the strings and things passed over the FFI. Right, I've been seeing those conversations going back and forth; I thought that was what was holding it up. I think that one is ready to go. However, I didn't realize that we have never done a release since the CL signatures change was merged, and that's huge — we definitely need that as soon as possible. Yeah, that's true. Okay, please push that one to the top. Okay, any other questions or comments on Hyperledger AnonCreds Rust? Okay. Next update — Daniel, you had these from last week. We wanted to talk about the message type prefix. Basically, there is a configuration option to use the https prefix. This is the switch for message type identifiers: way back in the day, message types had a prefix of a did:sov DID that Daniel Hardman created. In a meeting a long time ago we agreed it would be updated to https, but ACA-Py has never made that the default, so you still have to use a command line or configuration option to use the https prefix. I think we should have a breaking change that defaults it to https — and perhaps adds an option, dash-dash-really-old. The code is written so that it handles both on receipt regardless, so I think we're safe enough not even to put in an option to use the old one; we just switch to the new. Are we good with that? Sounds good to me. It has indeed been a really long time, so if there are any implementations still using the old stuff, they're probably the ones that should be making some changes at this point. Okay, I'll outline that change and get this one pushed to the top — let's make this happen.
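The prefix switch just described — accept both forms on receipt, emit only the https form — can be sketched in a few lines. The did:sov prefix shown is the historical one from the original Aries RFCs; this is an illustrative sketch, not ACA-Py's actual implementation.

```python
# Historical DIDComm v1 message type prefix (a did:sov DID) and its replacement.
OLD_PREFIX = "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/"
NEW_PREFIX = "https://didcomm.org/"

def normalize_msg_type(msg_type: str) -> str:
    # Accept either prefix on receipt; always emit the https form.
    if msg_type.startswith(OLD_PREFIX):
        return NEW_PREFIX + msg_type[len(OLD_PREFIX):]
    return msg_type

normalize_msg_type("did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/trust_ping/1.0/ping")
# → "https://didcomm.org/trust_ping/1.0/ping"
```

Normalizing on receipt is what makes the breaking change safe: an agent that flips its default to the https prefix can still interoperate with peers that send the old one.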
Likewise, I think this one — I'm less familiar; I'm not even sure where exactly this goes. Daniel, do you have "use and accept updated media types"? Is that in the service block? It's actually on when we're posting messages to the endpoint. Oh, right — the MIME type associated with that HTTP request, I believe, is what that's referring to. Andrew, do you have any sense of where we are with this one? I don't remember if there's a configuration flag or anything to change it. I mean, we've accepted the newer MIME type for a long time, I think. Yeah. I'm not sure we ever explicitly checked and failed regardless of what the MIME type was set to, actually. Yeah, probably true. So for this one, we have to put in this MIME type as we send messages, and then that's the change, right? Yeah — instead of that MIME type, we use the one from the spec. I love how this doesn't quite outline which one we're supposed to be using, with no link to where it should. Okay, I'll find that — it's in the envelope, right? Actually, there is a flag, --emit-new-didcomm-mime-type, so we can just change the default there. Okay, good. So the two changes are to those two emit flags — because I know the other one is an emit flag too — we just make those the default and don't offer an option not to use them. Yep. Okay, good. Yeah, we can leave the flags in for now and print a warning, potentially — do what we did for the Indy flag, the use of the Indy wallet, which is put a deprecation notice into the log. We can do that same sort of thing. Okay, that reminds me of another thing we've got to do in AATH — I just posted an issue for it: AATH is still using the Indy SDK, so we definitely want this one changed as soon as possible. That again should be a trivial change in AATH; the poetry one is more involved if we do the nightly build, but this is just changing a flag to remove it.
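For the media type change, a lenient-receive/strict-send check might look like the sketch below. The two media type strings are my reading of the legacy DIDComm v1 type and the spec'd replacement — treat them as assumptions and confirm against the RFC before using.

```python
# Legacy DIDComm v1 media type and the spec'd replacement (assumed values).
LEGACY_MIME = "application/ssi-agent-wire"
DIDCOMM_V1_MIME = "application/didcomm-envelope-enc"

def is_didcomm_v1(content_type: str) -> bool:
    # Accept both media types on receipt; only DIDCOMM_V1_MIME should be emitted.
    # Strip any parameters (e.g. "; charset=utf-8") before comparing.
    mime = content_type.split(";")[0].strip().lower()
    return mime in (LEGACY_MIME, DIDCOMM_V1_MIME)
```

As in the prefix discussion, being permissive on receipt while defaulting to the new value on send lets the flag flip without breaking older peers.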
Interestingly, we've had a test that has failed for a year or more that no one's ever looked into. I updated it to use a new 0.9.0 flag about using a public DID, and suddenly that test passes now. So that was kind of fun. Okay, the other thing I wanted to talk about: Sam Curran is going to be talking to some folks at Hitachi in Japan about long-term support for Aries. I'm not sure whether they're talking about ACA-Py itself, or Aries, or which libraries. The meeting's not until tomorrow, and Sam Curran is going to be on it — I'm not going to be able to make it — but from what I hear going in, Hitachi is offering to fund long-term support for Aries. So I started brainstorming about what that would mean: how would we implement long-term support for ACA-Py? I don't know if anyone else has experience with long-term support products and setting up what the baseline is. I assume we would pick a long-term-support version of Python and an operating system, have a version that runs using that Python and that OS, monitor the dependencies, update them on the branch as they evolve, and cherry-pick ACA-Py PRs from the main branch into that long-term-support branch. Is there more to it than that? Anyone with any experience on it, or any comments on whether that's the approach? I mean, usually when you're talking about long-term support, you're talking about a commitment from some organization — that's the only thing that comes to my mind right now. Yeah, that sounds right, Stephen. You need to decide what dependencies you want to pin against. Yeah. And which ones you want to float, right?
So do you want to be pinned to a particular LTS OS or not? And then there are two flavors of updates you need to consider: first and foremost, security updates, and second, bug fixes. So what's going to be eligible? In LTS, the idea is no features, right? Right, yeah. Maintaining an LTS — the longer the time period is, the harder it is, because you diverge further and further from your active code base. Yeah. And LTS releases can overlap as well — Node.js and Ubuntu both have LTS strategies, and they have LTSs that overlap. Anyway, not much more to say about it. So basically: pick the timeframe, get a commitment from the organization — two to three years, something like that — and plan for the next one. For ACA-Py, this is the big one: we're very close to having AIP 2.0 complete, with the peer DID and the encryption envelope things. As far as I know, the only thing we are not supporting is please-ack, which was probably a mistake to include in AIP 2.0, but that's the one that needs to be implemented to technically complete it. The OS, I realized as I was listening to this, sort of defines what version of Python ships with it. So it's really about picking an LTS OS, and that would define the LTS version of Python. And I guess we would target the version of Python associated with that OS, and the other dependencies, and update them on the branch, and then cherry-pick ACA-Py PRs suitable for LTS — security fixes, obviously; new features, no. Okay, that's helpful. Any other comments from anyone? I will share that with Sam as we go into that meeting tomorrow, and we'll see what comes out of it. It would be great if Hitachi does want to be the organization that commits to providing that support.
So, Stephen, one other thing that pops to mind is ensuring that the infrastructure required to support, test, and build the old versions is maintained. Yeah, I mean, that's a lot easier these days. Yeah, but it means you can't just evolve your infrastructure to support your new needs, right? Sometimes you have to fork your infrastructure and have multiple things running. So, yeah. One other thing I would say — and this has less to do with maintaining an LTS and more about what it is you want to promote — is: when you're talking about deploying something new, is the suggestion to the community going to be that you should be deploying on the LTS? What is the disposition of the new stream that is not yet LTS? Is it deemed experimental, or is it what you should use unless you need LTS? The whole positioning of those things needs to be thought about as well. Yeah. Stephen, does this also include interoperability with other agents and the maintenance and improvement of the test harness? That's a great one, and hopefully a good one to bring up. What that would do, I think — good, if I could spell "maintenance" — is we would want a test agent for each active LTS version, and those would run beside ACA-Py main, so that in AATH we wouldn't just run ACA-Py main but also the LTS versions. Oh man, there we go. Okay. Good, thank you. Any others? All of these are great. What would be the criteria for an LTS to no longer be an LTS? Meaning, for example, ACA-Py will have did:peer:4 or DID rotation — when something has been introduced on the main branch that the LTS does not have, and, like Warren was talking about, this is supposed to be the version to use, when does it become not the version to use? I mean, as far as I know, it's sort of what was mentioned here: we basically pick a timeframe for the LTS.
So is it aligned with the LTS OS, which is aligned in turn with things like the version of Python on it? So are we saying that the decision that an LTS is an LTS is based on time and Python support? Does it not have anything to do with what's in the ACA-Py main branch as development has progressed? Because let's say we've found something because of which staying on an old LTS branch is really not the best choice for people. This session right now is just brainstorming the questions to come up with, so I'm not saying anything absolutely. Hitachi presumably has much more experience in taking on long-term support for something, and what I'm trying to do is get my head, and Sam's head, around the ideas that come into play with ACA-Py, so that we go into the meeting ready to talk about these things. So in answer to your question: I don't know. Warren? Yeah, one other consideration is migration — ensuring that however long the LTS is, there is a migration path to the latest and greatest, and especially if you have overlapping old LTSs, how do you get from here to there? Yeah, we're already thinking about that and already have an implementation mechanism in ACA-Py, so I think we're doing that — that should hopefully be taken care of in ongoing releases, but yes, absolutely correct. This reminds me: one of the things we've got to do in AATH is a feature to not run tests when the test agents have not changed. We're getting an exponential number of test runs — the number of test agents squared — because we run all the tests against every other agent. We need to make sure that when we have a pair of agents that haven't changed, we don't rerun the tests, so we minimize the number of tests run.
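One possible approach to the pair-skipping idea: key a cache on the unordered pair of agent image digests, and only run the test set when that pair has no recorded green run. This is a sketch under assumptions — that each test agent can be identified by its Docker image digest, and the cache file name here is hypothetical.

```python
import hashlib
import json
import pathlib

CACHE_FILE = pathlib.Path("aath-pair-cache.json")  # hypothetical cache location

def pair_key(digest_a: str, digest_b: str) -> str:
    # Order-independent key for a pair of agent image digests.
    return hashlib.sha256("|".join(sorted((digest_a, digest_b))).encode()).hexdigest()

def should_run(digest_a: str, digest_b: str) -> bool:
    # Run the test set only if this exact pair of images has no recorded green run.
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    return not cache.get(pair_key(digest_a, digest_b), False)

def record_pass(digest_a: str, digest_b: str) -> None:
    # Remember that this pair passed, so the next identical run can be skipped.
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    cache[pair_key(digest_a, digest_b)] = True
    CACHE_FILE.write_text(json.dumps(cache))
```

Because the key is order-independent, A-vs-B and B-vs-A share one cache entry, and any change to either image produces a new digest and therefore a fresh run.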
So something to think of, if someone has an idea of how we can detect that a pair of agents has not changed, and therefore we shouldn't run the test set — because that will be very common with an LTS branch, and it's a big enough problem today that it would be nice to solve. I'll put an issue in for that. Okay, that's all — Stephen, sorry, I just thought of one other thing, which is to consider the effects there may be on community coordinated updates. Yeah — if you're going to have LTSs, then it may restrict what you can consider for those. Yeah, it's going to really change the whole interop story, I think. I've got to say, it's kind of exciting, though, that somebody is interested in this. Yeah, yep. Well, as I say, I've heard nothing more than that they want to talk about this, so I'm intrigued. All right, any other topics? Thank you for that — that was extremely helpful. Okay, well, that's all I had planned for this. Andrew, Daniel, Jason Syrotuck, you might want to jump on a call, or I could stop the recording and you guys could stay on, but other than that, that wraps up our meeting. Thanks all for attending, and see you in a week for the maintainers meeting, and in two weeks for the next ACA-Py user group meeting. All right, thanks everyone.