Welcome to the May 30th Aries Cloud Agent Python user group meeting, ACA-Pug. A couple of things on the agenda: we're going to talk about AnonCreds in ACA-Py and get a brief update from Indicio. I want to go through a few ACA-Py PRs and see if we can play a little "can we merge this?" — see if we can get some of those off the list and get them merged, or pushed back for other reasons. As there's time, I've got a little presentation on converting from unqualified peer DIDs to did:peer:3, and I would like to push on that. We are recording the call, so I'll post it afterwards, perhaps with excerpts grabbed for specific topics; we'll see how that's needed. A reminder that this is a Hyperledger / Linux Foundation meeting: the Linux Foundation antitrust policy, which is in front of you on the screen, is in effect, as is the Hyperledger code of conduct — be good to one another.

If there is anyone new to the community or to the meeting who wants to introduce themselves and talk about what they're doing, now would be the time. As well, if you have any announcements or want to suggest agenda items, please do so — grab the mic.

Wade Barnes, an announcement: Hyperledger wants to do a contributor/maintainer outreach for Indy and Aries, similar to what they do for the Fabric teams. So if anybody is interested, could you contact Sean Bohan — I'll put his address in the chat. What does that mean? I don't really know what it means. I think they want to get some more excitement going around the projects and various things, and get feedback from the contributors and maintainers on how the projects are going and what they'd like to see change. Okay. Also, coming out of the Open Source Summit in Vancouver, there are two surveys that I've noted: one at the Linux Foundation level on open source in general — the state of open source — and then there's also a Hyperledger survey that was sent out, where they're looking to understand what Hyperledger is and how it's perceived. So that might be one of the things they're looking at. Anyone else want to grab the mic right now?

All right. A reminder that the aca-py.org documentation site is up. There'll be more evolution on that — I've been away for a while, but I want to get back to adjusting some of it, so I'll spend some of my time in the near future doing that, and I'd be glad to introduce anyone else to what it takes. Basically, it grabs all of the MD files, organizes them, and publishes them as a website. So if you're looking to see what's in a markdown file in the ACA-Py repo, it's a whole lot friendlier a place to go than GitHub — rather than trying to navigate through the folders to find the READMEs and things like that, it's a much easier way to go, so I encourage people to use it.

We have an AnonCreds workshop tomorrow, May 31 — the link to register is here. Rodolfo Miranda, Patrick St. Louis and myself are going to be presenting it. We're going to be playing with AnonCreds hands-on, but also going through a lot of the background and what's coming with AnonCreds. I invite anyone to that.

With that, I will turn it over to Daniel to talk about how things are proceeding with AnonCreds — which is good timing, because I've got a dog now telling me he wants to go outside, so I need to deal with that. Over to you. Do you want to share your screen, Daniel? — I'll be pretty quick.
I've linked to the document that I'll be talking through, and there are just a few bullet points, so it'll be pretty quick — I'll not screen share this time around. So, as a brief update: I've got a doc linked from the agenda you can go through, with links to a few things that are relevant to our progress here. We've gone through and done a number of things lately. We've updated the anoncreds-rs build we're using, so we're now using builds that are actually being published from the anoncreds-rs library — that was an early challenge we had with the project, where there was a lack of support there, and we've now gotten over that problem.

We've gone through and updated the tails file handling. We are now using the hash of the tails file as the file name that gets uploaded to a tails server, and there were some associated changes to the Indy tails server implementation that we made to support that. We've currently left the original functionality retained within our changes; there's a newly introduced AnonCreds tails server component that uses the new behavior. We haven't added intelligent switching between the two based on configuration or anything like that yet, so that'll need to be further refined, I think, but we've gotten the tails file upload resolving the circular dependency that we were experiencing previously. So that's all in place and working well at this point. We have basic checks on the tails file that gets uploaded, just to make sure it basically matches the hash, but we're also investigating whether we can do any deeper validation on it. (Getting some feedback from you, Steven, I think — on the mic.) Okay.

And then, further on here: our MVP for the revocation work — we're nearly complete with that process. When I say MVP, I mean just getting the absolute basic steps down; we haven't gone through the automated flows just yet. But the basics of being able to do all the setup for the AnonCreds objects — I've left that step off, I realize — and we can do issuance and presentation all the way through, just without revoking the credential, because that's what we're working on currently: publishing revocation updates. So that's our focus for this week, finishing that off, as well as getting into the automated setup of revocation registries, and in general further cleanup of the implementation so far — and in a lot of cases the existing implementation of revocation — just to get things where we need them to be. We're also planning to start digging into updating tests and such as well. Below that I've got some links to our current PR; we haven't updated that branch in a little bit, since we've been working on a different branch, which I also have linked there.

Excellent. Okay, good, thank you. Any questions from anyone? Awesome — I can't wait to have that available. — I can't wait to be done with it. — Yeah, I can imagine. Excellent. Okay, thank you.

All right. I did want to mention this — I left this in, but we're not going to have an update: we are continuing to make progress on the use of Redis with ACA-Py and with the ACA-Py-based mediator, so those things are proceeding, and that's a good thing. But I did get a message from the team at BC Gov, and they wanted to remind people: don't ever use info logging in production. You want to minimize the logging necessary.
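Circling back to the tails file naming described in Daniel's update above, here is a minimal sketch of the idea in Python. The hashing itself is standard SHA-256; whether the upload filename uses a hex digest or a base58 encoding, and exactly what the tails server expects, are assumptions to verify against the actual anoncreds-rs and tails server code:

```python
import hashlib

def tails_upload_filename(tails_path: str) -> str:
    """Name the uploaded tails file by the hash of its own content.

    Sketch of the approach described above: both the issuer and the tails
    server can recompute the hash and confirm the uploaded content matches
    the name.  A hex digest is used here for simplicity; the real convention
    (e.g. base58-encoded SHA-256, as used for the tailsHash value in the
    revocation registry definition) should be checked.
    """
    digest = hashlib.sha256()
    with open(tails_path, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()
```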
Fun story: Wade activated a dev instance of ACA-Py, with logging, and a mediator, and did a bit of load testing — and with info set, it absolutely blew out the logging system on our OpenShift, and we got calls from the platform operators saying "what are you doing, you're sucking up all this space." Go ahead, Wade, if you want to say something. — Yeah, it was something like several million log entries per hour that they were looking at getting, and it was just blowing up the Elasticsearch database. — So make absolutely sure you are not using info logging in production or any sort of system like that, and be aware that — especially, from what I hear, with Askar — the logging is extremely verbose, and it probably should be adjusted, and maybe will be adjusted, to change a lot of the info logging to debug logging to make that less likely to happen. But anyway — worthy of an all-caps reminder.

Okay — "Can we merge this?" I want to jump into ACA-Py PRs and see if we can get some feedback going on these. This one came up today: we want to drop Python 3.6. Anyone shocked or upset by that? Any comment on that idea? We've used 3.6 since the beginning and never really changed it — long overdue for that. Okay. So, Daniel, you created this; you reported just before the meeting that some of the integration tests are not working. That's the second time, I think — it's been cancelled after 360 minutes. — Yeah, something's timing out on the integration tests right now, so I need to take a look at how the image changes I made are impacting that. I suspect it might be just some quiet failures in a container or something like that, but I only noticed it this morning, so I haven't had a chance to look too much at it yet. I did also comment here: I've got a number of things going on at the moment, so if you happen to have some time in this area and are really motivated to look at this, contributions are welcome. — Yeah.

Jason: Does this include changing all the demos? A lot of the demo images are 3.6, right? — So the integration tests are actually using the demo images, and the initial changes that I made failed to include the demo images, which promptly caused the integration tests to fail, so this has adjusted that. But it seems to be imperfectly adjusted at this point. But yeah, those images should be updated. — I'm just curious, because I was just having some issues trying to put newer libraries into the demo images and whatnot, so just curious about that. Great, thanks.

Um, I will see if there's someone on our team that could take a look at that. I'm thinking Syro — maybe you could take a look at that, given you were looking at integration tests relatively recently. Maybe that's one you could have a quick look at and see if you can help out Daniel. — Yeah, I can take a quick look at that. — Sweet. Okay, good. And then, anything we can do to push this forward would be useful; I think getting this done sooner rather than later would be a good thing. — Okay, agreed on that. I'll also call out that after we get this particular PR merged, I think it will make sense to go through and do a number of updates on dependencies as well. We're kind of behind — several of our dependencies have been updated to drop Python 3.6 support, so we're actually on older versions of those. — Okay, thanks. Thanks for that reminder; I'm going to quickly write myself a note.
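On the logging reminder above, a minimal sketch of the idea — keep production deployments at warning (or higher) so that info-level chatter from ACA-Py and its libraries (Askar in particular) never reaches the log aggregator. The CLI flag and environment variable mentioned in the comments are from memory and worth confirming against the ACA-Py docs:

```python
import logging

# In production, keep the root logger at WARNING or above so info/debug
# chatter is suppressed before it ever reaches Elasticsearch or similar.
# For ACA-Py itself this is normally set at startup rather than in code,
# e.g. (from memory -- verify against the docs):
#   aca-py start ... --log-level warning
# or the ACAPY_LOG_LEVEL environment variable in a container deployment.
logging.basicConfig(level=logging.WARNING)

log = logging.getLogger("demo")
log.info("dropped in production")    # suppressed at WARNING
log.warning("still emitted")         # kept
```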
I want to get an issue in on that. Okay, good. This one I think should be ready to go; I did a little looking at it yesterday. Basically, we have a script that is available to update the OpenAPI spec, and this brings in the latest version of the generator from an older one — so, speaking of dependencies that were old. I believe what it's doing is generating both a Swagger/OpenAPI 2.0 and a 3.0 spec. I've looked at this, compared it, and I think we're ready to go; I've approved it. Does anyone have any thoughts — anyone in particular who's used the OpenAPI generator stuff? The only reason I was holding back on this was that I don't actually use it. So is anyone using it, and do you have any comments or concerns about merging this? We also had a bit of a conversation here — and I'll probably put these things into an issue — about getting way too many warnings and error messages from it. It works to use it, but it would be good to get that cleaned up. Merit had a number of comments in there about what to do about that, so I'm hoping to get that into an issue and get it looked at, so we get some movement on that. I'm going to go ahead and merge this. This will probably be the last merge on the call, because now we'll have to update all of the PRs that we look at next — but anyway, we've got that one merged, so we at least merged one.

Sherman, comments on this one? — I did notice that we had an integration test failure, and I didn't take a look at it, so maybe I can take a look at the failure this morning. Yeah, I'll take a look at that. It was basically all on the demo side, nothing in ACA-Py. — That's why I was asking about the demo images: the library that was having the issue — I was trying to put in the newest version of the library where they said they fixed the bug, and I couldn't easily do it, so I kind of fudged it, but it turned out that it didn't actually fix the bug in the spot we had it anyway. That's why my question was about the demos getting updated. But I'll take a look at why that test is failing. — Okay, you might want to wait until after we've sorted out the 3.6 change and maybe look at this after that. — Yeah, good point.

Okay, I think this one's ready to go; it's been approved by several. I don't think Timo's here, but Daniel, any comments on it? I know you looked at it. — No, yeah, this one looked good, so I'm happy with it. — All right, I'll update the branch and get this merged. So this is something about base wallet and... yeah, okay, we had a few issues like that.

This is a bigger one. Comments on this one? This is from Darko, to add Ed25519Signature2020 support. This one is dependent on the Python 3.6 PR, so we'll probably follow up with that. Anyone with knowledge on this one want to comment? It'd be good if you could take a look at it, given the nature of this type of change and your knowledge of signature suites and so on. But as I say, I think we'll wait until the 3.6 work is complete and then get Darko to adjust the items based on post-3.6 Python.

Okay. Daniel, you're ready to have this one done — interesting problem, good catch on it. I think this looks easy and ready to go. Comments? Are people fine with it — anyone else take a look at it? — Yeah, to summarize: there were cache inconsistencies between replicas during DID exchange that caused errors in other protocols.
To summarize some of the comments that I made in the description: the connection target for a connection changes throughout the DID exchange protocol anyway, and there was actually some code that existed prior to my changes that was just immediately clearing the cache on every connection record save — which would have occurred at each step of the DID exchange protocol anyway. So rather than having the cache be stored and then improperly cleared later on a different replica, while failing to be cleared on the replica that was originally answering the request, I just adjusted it so caching of connection targets only occurs on completion of the DID exchange. This sidesteps some of the issues that I think we've experienced in other situations as well — I haven't tested any of those other scenarios, so I don't know for sure, but it makes sense.

Yeah, so this maybe hints a little bit at how we actually want to treat the cache going forward: do we want it to be a shared-state thing, or do we want it to be a local-state thing only, and only for ephemeral values or whatever? — Yeah, this goes further in the direction of supporting local-only, as opposed to a shared-state mechanism. — Right. And what we really want — I think one option we want to have available, and this has been done with Redis — is a Redis shared cache across instances. And Indicio — Daniel, your team has put together an instance of using Redis for this type of thing, I believe, correct? — Right, yeah. These changes were partially motivated by work we were doing with SICPA: they were wanting to avoid having to introduce a shared cache mechanism if they could, and this was the only problem they were experiencing in that setup. — Cool. Okay. Any developers have... anyway, I'm going to approve it; if anyone else has any concerns with it, take a look. I won't merge it yet — we can't merge it yet until it gets updated anyway. So, Daniel, if you could update it from the main branch, and then it gets retested.

All right, maybe I'll do one or two more. Let's see — yeah, again, this one looks pretty straightforward: update the branch, approve, and run. I just wanted a developer to take a look. This is just a tweak to how the ngrok endpoint gets extracted and used — a couple of tweaks to a shell script. I assume this is fine; I didn't have a chance to actually run it, so that's why I put your name on it, to take a look at running it. But I think it should be fine, so likely it will get moved forward. — Yeah, I was going to spot-check that this morning, but I don't see any concerns with the changes made there. — Good.

This is a Dependabot change, I imagine. Oh, this is in an area you've got, Sherman, looks like — in the playground. I'll probably get you to take a look at that, or, you know, just merge it as you see fit, depending on how much you think it needs review. — Yeah, it's just a bump in the requirements file. But I can't do merging. — Oh, you can approve and then I can merge. — I don't even think I can approve. Maybe I'll take a look. — Yeah. I'm going to put an approval on it, and then I can proxy your approval. — Okay, yeah, sounds good. — You can communicate your approval and we'll do it. All right.

This is a big change that you did, Sherman, updating from an EFK to an ELK stack. I think this is pretty clean — I've run through it once and it worked great.
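Circling back to the connection-target caching change described at the top of this discussion, here is a rough sketch of the behavior as described — only populating the cache once the DID exchange has completed, so no replica ever holds a target that is still changing mid-protocol. The class and method names are illustrative, not ACA-Py's actual ones:

```python
from typing import Any, Dict, Optional

class ConnectionTargetCache:
    """Illustrative only: connection targets keyed by connection id."""

    def __init__(self) -> None:
        self._targets: Dict[str, Any] = {}

    def on_record_save(self, conn_id: str, state: str, targets: Any) -> None:
        # Previous behaviour (roughly): cache on save, then clear on every
        # save -- which clears the local replica but can leave a stale entry
        # on whichever replica handled the earlier step of the exchange.
        # New behaviour: only cache once the exchange is complete, when the
        # targets can no longer change underneath us.
        if state == "completed":
            self._targets[conn_id] = targets

    def get(self, conn_id: str) -> Optional[Any]:
        return self._targets.get(conn_id)
```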
I think this is ready to go, I would suspect — do you have any reason to think it's not ready, Jason? — We're ready. No, I mean, I was using it a lot, doing all the testing with Redis and the mediator and so on, and all that debugging with the trace log stuff, so it worked for me. — Yeah, it's basically just an update: the EFK one was so far out of date that this updates it and makes it a little bit more flexible to integrate into run_demo and all those situations. — And someone else was able to follow the instructions and get it to work, so that's good. It certainly doesn't have any negative impact on any other chunks of code — there's no hard dependency on it or anything, right, it's just another tool. — Cool. All right, I'll get that one reviewed; as I say, I've run through it on my own machine and it worked great, so I'm happy to get it pushed forward.

This one has had some back and forth; looks like it's close to resolved. We're looking for an update to the branch — I don't know if Sasha is on the call. Yeah, this is likely to get merged pretty soon. Looks like, Daniel, you've been happy with it; Timo is more or less happy with it, had some conversations. So we'll see about that and then get it merged. — Yeah, I think there might be some minor adjustments here and there, but I can communicate back to Sasha on the SICPA team, where Sasha is from. — Okay, good. Yeah, excellent. Good.

Last one I wanted to hit: this one, Shaanjot, is the one you've been working on — settings on a per-tenant basis, as opposed to on the overall ACA-Py instance. Do you want to talk a bit about that, and do you think this one's ready to go and we should get it finalized? — Yeah, it's ready to go. So basically, right now we are able to set startup flags — the startup parameters — but we can't do it at the sub-wallet or tenant level. This modifies the endpoints for sub-wallet creation and update so that we can specify those startup settings at the tenant level. That's pretty much it. — I don't see a single MD file updated. — Okay, I'll work on that. — Yeah, I would say maybe add it to the multi-tenant one — an MD file exists, so I think just adding a section to that, and then have it highlighted in the changelog; between the two, that'll help. — I'll work on that.

I think that takes care of a lot. Are there any other ones down here? This is an OpenAPI demo one — I think I just have to merge that; I think it's just documentation. Oh, auto-remove flags for presentation requests — I see some comments on that; that's been out for a while and we need to deal with it. — Yes, I've got to refresh my brain here on that, it has been a while. Previously we only had this for the one exchange — the credential exchange — so this basically follows the same pattern. It seems to work out okay without impact. I think the documentation goes through pretty well what to do, and we've added it. A change on the credential exchange side is that now you can pick which side: when we put in the presentation exchange, there's a configuration for each side of the conversation, and we added that for the credential exchange as well. So the existing code works; it's just that now there's an enhancement on the credential exchange side too — if you want to preserve the holder side or whatever, you can do that.
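For the per-tenant settings PR discussed above, something like the following is what the admin API call might look like once it lands — the `extra_settings` field name and the specific setting keys here are assumptions based on the discussion, not the final merged interface:

```python
import json
from urllib.request import Request, urlopen

ADMIN_URL = "http://localhost:8031"  # a multitenancy-enabled ACA-Py admin API

# Hypothetical payload: create a sub-wallet and override a couple of startup
# settings for just this tenant rather than the whole ACA-Py instance.
payload = {
    "label": "tenant-1",
    "wallet_name": "tenant-1",
    "wallet_key": "insecure-demo-key",
    "wallet_type": "askar",
    # The field name and keys below are assumptions to verify against the PR:
    "extra_settings": {
        "ACAPY_LOG_LEVEL": "info",
        "ACAPY_AUTO_RESPOND_CREDENTIAL_OFFER": True,
    },
}

req = Request(
    f"{ADMIN_URL}/multitenancy/wallet",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urlopen(req).read()))
```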
Again, John, is this one of the settings that's inherited downwards and should go to the tenant level versus the ACA-Py level — the auto-remove? — Yeah, I think so; this is one of them, yes. — Okay. And you've got conflicts here, Jason, so you're going to deal with that? — Okay, it's been a long time, yeah. — Okay, that's good.

This is one of the issues we're having: being consistent as maintainers in getting these resolved. That is going to be more of a focus in the next while. We're getting a lot of updates from a lot of different people, and we really need to pay more attention to these and get them closed faster. So we will be doing this more often in ACA-Pug, and I'm going to make an effort to bring the maintainers together to make sure we're being timely in addressing these and working on them. Another note: I'm going to be updating the MAINTAINERS file. I want to make sure it's accurate, but if anyone wants to be a maintainer and has met the qualifications — which is at least five pull requests merged, plus knowledge and interest in being one — you're absolutely welcome to do so. We'd love to have additional maintainers who are willing and able to take the time to stay on top of these pull requests as they come in and make sure they're getting merged in a timely way — and not only that, but that feedback is given to the contributors. That's just as important as getting them assessed: giving feedback to those who are contributing, and making sure they want to continue to make that contribution and update what they've done to meet the needs. So, a pitch for anyone who has the knowledge but hadn't thought about being an actual maintainer for the framework.

Okay. It looks like I'm missing a topic in here, which is — I wanted to talk about did:peer:3 before we go, and we have a few minutes, so let me jump to that. Basically I have an overview of transitioning to did:peer:3, and I wanted to make sure this is right. It's likely that one of the developers on the BC Gov team — Syro, who's on the meeting — is going to take this next, and so I wanted to solicit feedback and get him some help as he starts working on this.

So: getting qualified. Background — we have two types of unqualified DIDs in Indy... that's not right, let me just fix that, I had that on my brain yesterday — in ACA-Py. Two types of unqualified DIDs. First, did:sov DIDs — I don't know why that's underlined — which are public DIDs on an Indy ledger; again, they would not be qualified. And then we have peer DIDs, used in a DIDComm messaging relationship between two parties — so, you know, an issuer and holder might have established a connection, and they exchange peer DIDs. Currently, most of those are unqualified peer DIDs. It's obvious from the context of where they get used which type it is, so there's no need to take an unqualified DID and try to look it up on a ledger when it's a peer DID. So that's not really a big issue, but we want to update all Aries agents — ACA-Py, ACA-Py-based agents, and all the other Aries frameworks — to use qualified DIDs. It's pretty easy for public DIDs: we just prepend "did:sov" — I guess I need another colon in there.
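A tiny sketch of that "just prepend did:sov:" step for public DIDs — an illustrative helper, not ACA-Py's actual code; real handling would also consider did:indy and ledger namespaces:

```python
def qualify_public_did(unqualified: str) -> str:
    """Prefix an unqualified Indy public DID with the did:sov method.

    e.g. "WgWxqztrNooG92RXvxSTWv" -> "did:sov:WgWxqztrNooG92RXvxSTWv"
    """
    if unqualified.startswith("did:"):
        return unqualified  # already qualified, leave it alone
    return f"did:sov:{unqualified}"
```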
Once we do that, everything should continue to work — we already support did:sov, so that should be fine. But unqualified peer DIDs are a little trickier, and we've talked about this on the Aries working group calls, so some of this is repeat, but I thought it worthwhile to lay the foundation. What we did is calculate the identifier in the same way as did:sov: take the first 16 bytes of a 256-bit verification key, or the hash of it, I believe — but anyway, the first 16 bytes of that becomes the unqualified DID, the identifier value. But that is not a standard DID method. There's no prefix to use for it — no "did:whatever". We've talked in the Aries community about creating one, a "did:peer:legacy" or something like that, but we had another idea, so we'll talk about that.

How it works during DID exchange — using either the connections or DID exchange protocol: the other party, during their part of the DID exchange, independently generates a key pair. From that they generate the namespace DID identifier — the stuff after "did:...:", which in this case is the unqualified part; they don't put a prefix on it. They create the DID Doc, and then they send the other party the unqualified DID and the DID Doc — so they're sending two pieces of data in the various calls.

Second background slide — background on did:peer. (Wow, I did these slides quickly and without oversight.) did:peer is intended for exactly this purpose, and it specified that there would be different types of peer DIDs, numbered zero, one and two so far — and as of a merge that was done this morning there is a type three as well, so there are four overall. did:peer:0 is exactly equivalent to the did:key method and therefore should probably never be used — did:key should always be used in its place, since it is far more prevalent. So did:peer:0: probably don't ever want to use it. did:peer:1 is similar to an unqualified peer DID, but the DID identifier — the namespace identifier — is derived from the hash of the DID Doc rather than from the public key; so, a different way to derive the identifier. Worse — or a big problem with did:peer:1 — is that the canonicalization of the DID Doc is not really defined in the spec. Aries Framework JavaScript defined a way to do it, but it's not defined in the spec itself, which is not a good thing. So that's a second issue, and it has led us to say we probably shouldn't use did:peer:1, and we're leaning away from it. With did:peer:2, the elements of the DID Doc are extracted from the DID Doc itself and used to construct the namespace identifier — there's a set of rules for how you construct the identifier, the part of the DID following the "2". The result is that the DIDs are very long, and those elements are all visible in all interactions, which is not ideal — it's a very long value that adds to every message being sent, and in plain text all of the details of the DID Doc are shared. But the difference is that you only need to send the DID; you don't need to send a DID Doc, because the actual DID Doc can be derived by extracting the elements out of the DID itself. So that's good.

So did:peer:3 is something new — this is a link to the pull request for it, but the pull request was approved this morning by Daniel, so it's actually now in the specification itself. Number three: the idea is to get the benefit of did:peer:2 but eliminate the need to send the full did:peer:2 on every message.
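To illustrate why the DID Doc can be derived from a did:peer:2 itself, here is a rough sketch that just splits one into its elements. The example DID and the purpose-code and encoding details reflect one reading of the spec (E = key agreement, V = authentication, S = base64url-encoded service with abbreviated field names); the key values are placeholders, not a real resolvable DID:

```python
import base64
import json

example = (
    "did:peer:2"
    ".Ez6LSbysY2xFMRpGMhb7tFTLMpeuPRaqaWM1yECx2AtzE3KCc"
    ".Vz6MkqRYqQiSgvZQdnBytw86Qbs2ZWUkGv22od935YF4s8M7V"
    ".SeyJ0IjoiZG0iLCJzIjoiaHR0cHM6Ly9leGFtcGxlLmNvbS9lbmRwb2ludCJ9"
)

def split_elements(did: str):
    """Split a did:peer:2 into (purpose code, encoded value) pairs.

    Sketch only -- a real resolver would also decode the multicodec key
    bytes and expand the abbreviated service fields into a full DID Doc.
    """
    body = did[len("did:peer:2"):]
    return [(elem[0], elem[1:]) for elem in body.split(".") if elem]

for purpose, value in split_elements(example):
    if purpose == "S":
        padded = value + "=" * (-len(value) % 4)   # restore base64 padding
        print("service:", json.loads(base64.urlsafe_b64decode(padded)))
    else:
        print(f"key ({purpose}):", value)
```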
Basically — whoops — you derive the namespace identifier from the did:peer:2: calculate the SHA-256 hash of the did:peer:2 DID, and you now have a did:peer:3 — "did:peer:3" and then the value of the identifier, which is just the hash of the actual did:peer:2. So after sending a did:peer:2 once, we can thereafter either keep sending the same did:peer:2 every time, or we can send the did:peer:3. And if we send the did:peer:3, it's much shorter — but of course the recipient needs to understand did:peer:3, so that's a complication. Any questions or comments so far? Sam, am I getting this right? — Nothing yet; you haven't yet explained the "how do you know" thing, but I'm guessing you're getting to that.

So, transitioning from unqualified DIDs, part one, is adding ACA-Py support for did:peer:2 and 3. Right now, Shaanjot has done a pile of work to support did:peer:1, matching AFJ. I would now suggest that we retarget that work to support did:peer:2 and did:peer:3 in ACA-Py. In particular, the first thing it needs to do is support the receipt of both of those DID methods. So I'm suggesting we retarget the PR that's in there for did:peer:1 — it hasn't been merged yet — and replace the did:peer:1 element of it with support for using did:peer:2. That will enable us to transition to using did:peer:2 and 3. We do that automatically: if the other party sends one, we can handle it. If the other party sends a did:peer:2 before we've sent a DID over to them, we should use did:peer:2. Otherwise, we have a flag that we can activate when needed to initiate sending it — that's always the tricky part: when should we send it, and how do we know the other party is using it? That comes as a community coordinated update, although — I'll talk about that in a bit as well — there's another way we can get partially through it without only using the flag... no, I don't think there is any other way. Okay, never mind. And then, part two: we need a community coordinated update to transition unqualified DIDs to did:peer:3 across the community — the community coordinated update is the part where we all start to move to did:peer:2 and 3.

Andrew: Isn't this did:peer:2 and 3 stuff only a concern when we switch to DIDComm v2? Or are we wanting to do something in the old protocols? — It's going to be required then, but we should have done this a year ago. — Yeah, so it's not just a concern; it's a requirement there. Okay. But in the connections and DID exchange protocols we're sending a whole DID Document. I think we can just send a DID string in there, but I don't know — I think we can make it work, but it seems a bit disruptive to update those two... Sorry — you think until we go to DIDComm v2 we don't need to do this? — Um, well, at the moment we're switching to this did:peer:1 support in DID exchange, and I think we're just targeting compatibility with AFJ, so I guess it depends what they're doing. — AFJ actually pioneered this; they've already got the code mostly written. It needs to be made a little bit more canonical, and then they're going to document their approach for the transition. did:peer:2 is not canonicalized in the sense of being reliably created in the same order every time — it allows you to specify things in various orders — so you have to impose that a little bit.
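A small sketch of the did:peer:3 derivation described above — SHA-256 the did:peer:2 and re-prefix. The exact encoding (a multihash prefix plus base58btc multibase, giving the familiar `did:peer:3zQm...` shape) is one reading of the newly merged spec text, so treat the details as assumptions to verify:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Minimal base58btc encoder (stdlib only, for illustration)."""
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = B58_ALPHABET[rem] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def did_peer_3_from_2(did_peer_2: str) -> str:
    """Derive the short did:peer:3 synonym for a did:peer:2 (sketch)."""
    assert did_peer_2.startswith("did:peer:2")
    digest = hashlib.sha256(did_peer_2.encode()).digest()
    multihash = bytes([0x12, 0x20]) + digest   # sha2-256 code + 32-byte length
    return "did:peer:3z" + b58encode(multihash)

# After the full did:peer:2 has been exchanged once, either party can send
# the much shorter did:peer:3 and both resolve to the same relationship.
```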
And so the community coordinated update will specify the exact order that you use to construct the did:peer:2 when we're doing this transition. So this is definitely an approach that's been pioneered by AFJ. Okay.

Transitioning unqualified DIDs to did:peer:3: the goal is to convert existing unqualified DIDs to be did:peer:3 DIDs. And I put in a question, which is kind of what Andrew was asking: is it useful, do we need to do this? Or do we just continue to support unqualified DIDs when we get them, and eventually stop sending any? But if we did want to actually convert them from unqualified to qualified, I believe what we can do is this: we already send a DID Doc from ACA-Py, so both parties have the DID Doc, and both parties would then be able to generate a did:peer:2 from it, because it is very formally defined exactly how you do that. And if that's true, then we can derive a did:peer:3 from the did:peer:2. So that's the idea — with that, we would be able to convert all unqualified DIDComm peer DIDs to did:peer:3.

And — Stephen? — Yeah, so in my mind we have two options: we can either do the community coordinated update just to did:peer:2 — because that gets us out of the unqualified land that we really need to escape from; three is a nice optimization — so I was unsure whether we should do the community coordinated update all the way to did:peer:3, or whether we should stop at two and then allow the existing mechanisms to detect support for three. Since we need new code for two anyway, I think we should just assume everyone's going to implement two and three together; that would be my suggestion. — This is a good conversation for the larger community call, but yeah — this has been an open question in my mind: we don't technically have to go to three, but do we want to leverage the effort anyway to make that happen? That was my thought — that we should. It does mean that we need, within ACA-Py or any framework, some sort of idea of a synonym for a DID, so that if a did:peer:2 comes in we find the connection, and if the did:peer:3 comes in we find the same connection. We have to have this concept that the DIDs used in a connection can have synonyms and all resolve to the same one when searched for — so that's an interesting side issue that would have to be supported. — I think it's necessary as well, because if we want to support, you know, unqualified DIDs and the did:peer:2 version of the same thing, again, we've got to support synonyms. So I think that's a concept we're going to have to have anyway.

So with that: BC Gov — Syro is going to take a look at this, take a look at the existing PR, and look at what it would take to convert it to did:peer:2 and 3. And again, per that same comment, Sam: if we're going to do two, then we might as well do three. Then there's this idea of supporting synonym DIDs, so that when you get a message in and it's got a different DID — we could have different DIDs pointing to the same connection — we're able to support that. And then, if my theory — which is still a theory — is right, we can actually do a canonical conversion of an unqualified DID to a did:peer:3. That's the work we're planning on doing in the short while. And then — is there still not a community coordinated update RFC out there, Sam?
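The "DID synonyms" idea discussed above — several DIDs (the unqualified legacy form, the did:peer:2, and the derived did:peer:3) all resolving to a single connection record — might look roughly like this sketch; the names and storage shape are illustrative, not ACA-Py's actual storage layer:

```python
from typing import Dict, Optional

class ConnectionIndex:
    """Illustrative index mapping any known DID synonym to one connection."""

    def __init__(self) -> None:
        self._by_did: Dict[str, str] = {}

    def register(self, connection_id: str, *dids: str) -> None:
        # Register the unqualified DID, its did:peer:2 form and the derived
        # did:peer:3 together, so an inbound message using any of them finds
        # the same connection record.
        for did in dids:
            self._by_did[did] = connection_id

    def find(self, did: str) -> Optional[str]:
        return self._by_did.get(did)

index = ConnectionIndex()
index.register(
    "conn-1",
    "WgWxqztrNooG92RXvxSTWv",   # unqualified legacy peer DID (made up)
    "did:peer:2.Ez...S...",      # the full did:peer:2 (elided placeholder)
    "did:peer:3zQm...",          # derived short form (elided placeholder)
)
assert index.find("did:peer:3zQm...") == "conn-1"
```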
I don't believe so. Animo agreed to write that, given their experience with the code and the method they pioneered — I haven't checked yet today; I'll check in with them. And it's a possibility, now that I've got the vacation and the workshop behind me, that maybe I could do a write-up on it myself; I don't think it would take long. Comments or questions from anyone?

A quick comment — I don't know how deep we need to go on this right now, but just thinking about it for a second: I think with DIDComm v1 we tend to rely more heavily on the verkeys of the DIDs anyway. So in making that transition from unqualified DIDs to peer DIDs two and three — like, when we look up connections, for instance, we're looking them up by the keys of the encrypted messages that we receive. — Okay. — So there might be — I'm sure there's going to be — fun stuff to figure out in there still, but maybe that realization helps point us in the right direction.

Another quick comment I had — and maybe this is a bit of a controversial take, and it could just be my perception, I don't know — but I get the feeling that a lot of our emphasis on backwards compatibility within ACA-Py has encouraged the rest of the community to remain in a state where they're using unqualified DIDs and things like the connections protocol, and that stuff has been in the process of updating for a year. Now, with peer DIDs two and three, I'm wondering if we should be a little more willing to break things — more aggressive at breaking things. — That's kind of compelling, because of that: today we're still very frequently using connections in the world, and that's using the "did:sov:..." (blah blah blah) message type prefix that Daniel created one day in Provo — right — versus the "https://didcomm.org" message prefix. Agreed. So what I would suggest on that one is, you know, once we get to this PR or this community coordinated update, we decide: okay, what should the breaking change be? We change to 0.9 or 1.0 of ACA-Py — what do we do for backwards compatibility? Are we willing to accept but not generate unqualified DIDs — is that sufficient? Are we requiring that when you upgrade from one to the next, the breaking change is that you must convert your existing ones to qualified to participate? I don't know; we'll have to think about that. — So there are the later phases of the community coordinated update, which are weak, or soft, by design, in the sense that removing support for the old is something that is not necessarily coordinated as a community — but everyone is, you know, required to use the new. Which means that I think Daniel's suggestion can be applied by actually setting, at least within ACA-Py, a deadline for removing the old. And this might be a really good chance to do a little bit of cleanup there and force folks to come up to current. We're on a relatively quick pace such that if they don't, there are going to be significant problems anyway, so doing it a little bit early is, I think, good behavior as a community — it avoids problems that would otherwise pile up against the conversion to DIDComm v2 and make that transition even harder.

Okay, any other comments? All right, well, thanks everyone for attending. I will be trying to encourage the maintainers to be tracking the PRs that we talked about today as they come up for either re-approval or just simple merging.
So please be responsive to that — if you get assigned to look at a PR, it would be appreciated if you could, and we'll try to get those done. The next call, in a couple of weeks, will probably go through another few of these. Thanks, all, for attending. Thanks. Thanks.