All right. It's August. Oh man, that's crazy. This is the Aries Cloud Agent Python Maintainers Meeting, a Linux Foundation / Hyperledger meeting. Antitrust policy and code of conduct apply; let's be good. I might as well set it to edit, but I think we just want to jump into PRs first, and then we've got to get into planning work. Just opened. I saw this one. This is very similar to another one we had. You mentioned that this adds, oh, the side effect. Wait, you took this off? Yeah. So I realized that where the keylist update was being sent from, it didn't need to happen right then. So I just moved it to after the point where the connection record was getting saved originally anyway. So yeah, I was able to keep the same number of webhooks and saves taking place. That's exactly what we did the last time this happened. It was the same point. Okay. Yeah. Anyway, exactly the same. So that's resolved. That's good. Yeah, we were winding up with an extra webhook the last time too, in a change Kim made, and we were able to alter it. Okay. We'll get this reviewed and merged. And we definitely want to get it updated. Sherman, do you mind if I assign this to you? Yeah, it's fine. Good. All right. Okay, this one. Where are we? Okay. So we've got some tests failing. And he's still got this marked as a... I assume we want this. We haven't talked explicitly about this, but I would assume we would want it. That's a pretty significant savings. Have we resolved the regex change? So I haven't looked at this in depth yet; I've just been trying to follow along with the conversation. But the only thing that was kind of strange to me was the post-processing. I think this has already been said in some of these comments; I haven't reviewed them recently.
But I think if we wanted to go with a faster JSON utility like this, we should just accept that there's not going to be whitespace and make sure that's not a problem, as opposed to inserting it back in after the fact. Or, if it's going to be a significant issue, like in the case where we actually want the whitespace when we're displaying it to a user to be human-readable, then just not use it altogether, I guess. Is anything serialized and then encoded or encrypted in a way where the whitespace would affect hashes or checks or something? Well, that's exactly what's happening. That's what's been raised here. I think AFJ has an issue with it, and because we were putting in whitespace, it was causing some issues. What I don't know is whether the defensive way to do this, from a coding perspective, is to just try the deserialization, try the base64 decode or whatever, and only if it fails, do the decode, add whitespace, and so on. In other words, assume that it's going to be right and then only take action if it turns out not to be. That would basically allow us to do the optimal thing. So maybe this is something to be brought up. It would be nice if someone could... well, maybe we'll bring it up tomorrow and see if we can get an AFJ developer to talk about it with us and see where this happens. I assume this is in the envelope handling, but it's unfortunate that it just sort of says, oh, it happens. Yeah. Now I'm thinking of something specific, like a public key, where if you're just deserializing it and it looks weird, that's fine. Yeah. I'm wondering if there's anything where a public key would be a problem. Because, right, if we serialize it without whitespace and then make a checksum and send both, and then they deserialize it with whitespace and the checksum doesn't match, now all of a sudden both sides are freaking out. That's the kind of scenario.
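The "assume it's right, only repair on failure" approach described above could be sketched like this in Python; `b64url_decode_lenient` is a hypothetical helper, not an existing ACA-Py function:

```python
import base64


def b64url_decode_lenient(value: str) -> bytes:
    """Try the strict decode first; only clean up (strip whitespace,
    restore padding) if that fails, so the common case stays fast."""
    try:
        return base64.urlsafe_b64decode(value)
    except ValueError:
        cleaned = "".join(value.split())       # drop any whitespace
        cleaned += "=" * (-len(cleaned) % 4)   # restore '=' padding
        return base64.urlsafe_b64decode(cleaned)
```

The optimistic path handles well-formed input with no extra work, and the repair path only runs on the rare malformed payload, which is the trade-off being discussed.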
But yeah, in all of those cases you could recover; the community could just agree, let's try both, but I'm not sure that's an elegant solution either. Is that worth the speed savings? So I think what should be happening, in the cases where we are actually doing computations over these values where whitespace does impact the result, for example a signature over a JWS in the attachment of the DID exchange request: the signature should always be verified over what was sent, without any decoding or anything taking place beforehand, because that ensures that whether there's whitespace or not, we're verifying the value that was originally signed. The issue that was in AFJ previously, in the version we were testing against when we discovered this, was that it was, I think unintentionally, decoding it and then encoding it again and then checking over that value, which is where the whitespace issue became apparent. We weren't in a position where we could update AFJ and check the differences, but from what I saw when I looked recently, it seemed like at least that particular issue had been corrected. It still looked like it might be dropping padding on the base64 value before trying to verify, which I still think is maybe not the right choice. But I think what's happening on the ACA-Py side, and why we're not seeing this issue anymore when we're interacting with AFJ, is that we've stopped including padding on the JWS encoded payload in the DID exchange request. Okay, so I think really the only thing that needs to occur community-wide, if it's not already clear, is just to clarify that whenever we are verifying signatures over encoded values, we make sure that we're using the payload as it was delivered, to avoid any of the whitespace versus no-whitespace issues.
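The principle being agreed on here, verify over the payload exactly as delivered, matches how JWS signing input is defined in RFC 7515. A minimal sketch (the function name is illustrative):

```python
def jws_signing_input(protected_b64: str, payload_b64: str) -> bytes:
    """Build the JWS signing input from the *received* strings.

    RFC 7515 defines the signature as covering
    ASCII(BASE64URL(protected) || '.' || BASE64URL(payload)).
    Using the strings exactly as received avoids the problem discussed
    above: decoding and re-encoding can change whitespace or padding,
    which means you would verify a different value than was signed.
    """
    return f"{protected_b64}.{payload_b64}".encode("ascii")
```

Any verifier built this way is indifferent to whether the signer's JSON serializer emitted whitespace, because the encoded bytes are never reconstructed.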
Okay, but in theory, if we make this change and then run the Aries Agent Test Harness before we merge it, we should be okay. So yeah, let's let him finish and then make sure we do that before merging. Okay, I can add some notes to this; I've written a bunch of things down. Yeah, the speedup is pretty impressive, holy crap. Well, I mean, he still hasn't verified how much of the speedup is actually on the ACA-Py side, right? He's saying the tests are running 30% quicker, but 25 of those points could just be on the test side, you know what I mean? Is there something we can do that allows a fallback to the existing code? I'm kind of paranoid that some teams aren't going to have the resources to change their side of things, so they should still be able to deploy it with the original behavior they're expecting. I guess I just get a little paranoid that it's like, great, we got this thing, it works on all our projects, and even if it works in the test harness, there are other teams that are going to say, cool, we're going to deploy this thing, and then, why did our thing go crazy? Because they're expecting it to behave a certain way and they've coded to expect that, and whether that's right or wrong doesn't matter. It would just be nice to be able to flag it and say, if you deploy it with another configuration flag, just continue to use the built-in JSON stuff, which shouldn't really be too big of a deal if we have this kind of utility class or interface doing the actual lifting, right? I don't know if it's an issue or not, but maybe if it goes up with the community, someone will say, we've got stuff running that we don't want to, or can't, spend time changing.
I see this commonly in Python libraries: they'll offer you the ability to swap in a different JSON utility, usually by installing a pip package extra alongside it. Then it's intelligent enough to know that if the package is available, it'll use that, and if it's not, it'll fall back to the default. Right, but we have a thousand config flags anyway, so we're just adding another one; that would be easier. Well, I'm not saying it would be by config flag necessarily, but by deployment: by specifying what extras are installed in your deployment, you could configure which JSON module was being used. But okay, point taken, that's still another configuration parameter, whether it's a command line argument or not, I guess. Go ahead. The last sentence from Daniel was basically what I was going to say. I see what Jason is saying, but I would be a bit wary of adding yet another configuration parameter just for backwards compatibility. The main issue being, if we do it, then we have to maintain it, until God knows when. It's tech debt that we are basically building into our own solution for a potentially indefinite amount of time. If there is a drop-in replacement strategy like Daniel was mentioning, where if you want to use a different serializer you just install a package, you would have your ACA-Py image, install this additional pip package, and magically it works; that would probably be an okay scenario. I haven't done that; I'm just paraphrasing what Daniel said. Right, but in this case, aren't the two options Python's built-in json and orjson? Yeah, I think we'd have to have a way for them to deploy and remove it. We'd want to say the base deployment is orjson, but we need to have a path for someone to say, great, I don't want that because it's messing up my whatever.
The way it's implemented, the JSON utils class, basically, we're putting a flag in there that says use orjson, which is imported no matter what, or use the built-in. I think this is going back to what Andrew was suggesting in one of the comments, which is also what Daniel was saying: we have the utility class, which is basically masking which JSON serializer we use. The utility class checks: if orjson can be imported, use it; if it cannot, just use the standard json module. Then all we need to do is have unit tests for that utility class, covering the cases where the external library is and is not importable, and potentially the post-processing. The post-processing is something we should definitely unit test; what Andrew was mentioning is kind of important: regex processing is prone to error. So at the very least, we need very strong unit tests validating that we are not doing crazy stuff. There's still a margin for something to happen in there, but at least we're limiting our problems. Are there bugs or are there not, I guess, would be my point. Is the current stuff we have right or wrong? The question is interoperability. Some workarounds were done at some point in the past, in some places, for some scenarios we don't know about; not bugs, but deliberate workarounds. We don't know where they are, we don't know why they're there, and they may not even be there. We're going to be better off just discovering where the workarounds are. It's going to be painful, but get them fixed. Having to rely on whitespace differences in JSON seems like a very bad thing to do. I realize the hashes of the results change; that's normal, it's a new character. But as Daniel said, whoever is sending information back and forth should just take it as is and deserialize it as is, spaces or not. It shouldn't make a difference.
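The utility-class shape being described, try orjson and fall back to the stdlib, might look like this. This is a sketch; the function names are illustrative and not ACA-Py's actual util module:

```python
import json

try:
    import orjson  # optional fast serializer, e.g. installed as a pip extra
except ImportError:
    orjson = None


def dumps(obj) -> str:
    """Serialize with orjson when available, stdlib json otherwise.

    Both paths emit compact JSON (no whitespace), so callers see
    identical output regardless of which backend is installed.
    """
    if orjson is not None:
        return orjson.dumps(obj).decode("utf-8")  # orjson returns bytes
    return json.dumps(obj, separators=(",", ":"))


def loads(data):
    if orjson is not None:
        return orjson.loads(data)
    return json.loads(data)
```

Making both backends produce byte-identical output is what lets the swap be transparent; the unit tests mentioned above would pin that behavior for both import paths.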
You're just getting a payload. You're hashing it and unhashing it. If it has spaces, good. If it doesn't have spaces, good. I'm oversimplifying; what I'm trying to say is that trying to code around these types of edge cases seems like a recipe for disaster to me. For me, with this utility, the easiest way, if we really want to do what Jason Sherman is saying, is to just add a new config, slow JSON, and it uses the old instead of the new. That should keep everything the same, I would think. Yeah, you could code a very simple case in the utility. I think it's worth considering putting that option in, leaving it there for a couple of months, and then asking: has anybody noticed or cared, or has this mattered? If everybody goes, we didn't even know this change happened, then great. And if they did, then you go, okay, it looks like we've introduced bugs and people had to manually revert, so let's discuss it then. Yeah, and we could say we strongly recommend not using this flag unless you really have to. Yeah, I think that's an interesting way too. Okay. I still don't like the flag, the problem being that we can deal with it automatically with the conditional import. What if tomorrow I come up with a new library that has some benefit for me and I want to use that? All I would have to do is add the conditional import for that library in the utility class, have the library installed, and it works transparently, versus having another flag that controls what to do, right? Right, but I can document that I've got another flag that says just use this flag. Otherwise, I've got to say, oh yeah, make sure you go update the requirements before you use it. You can't use the container images because they've got it automatically imported, so you have to go create your own image.
I mean, it's way more work for them. Sorry, considering they should be able to put in any JSON library they want. No, I don't think... I just want to explore. I guess if there's a way we can do it transparently where nobody should notice, like putting the whitespace back in. I see. I'm going to try running these tests again just to find out what's going on. Yeah, they might have been broken; we had those hiccups with the ledger and stuff, that's probably it. I don't know if they got run again after that. Oh, the first set did get run again and passed. Oh, okay. No, it's this. So how would you exclude a library, simply? Because I'm assuming most people just grab the Aries image and say, great, cool. You'd have to build your own image. So maybe one way we can check is if we have two images, and see how many times each gets used. Because that's the thing: you put in a feature flag or whatever, and we have no way of collecting any information on whether anyone's using anything. It's a little tough. So it could be that we have this toggle that's never really used, without trusting people to report back and say, no, I still need this, or I don't. I guess my thing is, if nothing's actually broken, maybe we shouldn't touch it, until we know the performance gain is actually real. Like I say, if all of the tests look great because 99% of these comparisons are done in the unit tests, then we're potentially putting out something that's going to break things for people. We can't do something in the tests and not use it in what we ship; the tests have to match what we're putting out. I think we should put this out; I don't question that. I'm just okay with leaving somebody a path that's easy for them to use to be backwards compatible. Yeah. I have not looked at this one. This is ready to go. We can update the branch; looks pretty simple. Are we okay?
I think this is consistent with other changes that have been made to make the responses consistent across different admin API endpoints. I think this one is probably an even more benign change because it's on the send rev reg def, which is usually not going to be an endpoint called directly through the API. Yeah. So, yeah, I think this is fine. Might as well do that now. Okay. Well, good. Where does this one sit, Daniel? Oh yeah, this is kind of hung up. You stepped into a hornet's nest. Yeah. Intentionally. I actually haven't reviewed the comments from Tim yet. Yeah. I mean, this is one where you could put it in out-of-band or you could put it in the other, so I don't think it really matters a whole lot. Anyway, take a look when you get a chance, but it's not a high priority. Yeah. Sorry about that. So, basically, someone from Ontario added this OOB problem report, and their change had rotted while it waited to be reviewed, and Ontario couldn't have anyone look at it. So Daniel just modernized it so that it fit in. But then there's the question of what the problem actually is. It's basically reporting a problem when you're in out-of-band. Now, out-of-band is just the invitation; the protocols you're actually executing are either DID exchange or connections, where you're establishing a connection, or something like a presentation request, where all you're doing is a single operation. So the question is: when you're reporting a problem, is it an out-of-band problem, or is it a problem in the protocol you're trying to execute via the invitation? That's the question. And Tim is basically saying, well, we were trying to do it for a timeout: we only wanted the invitation to last a certain amount of time.
Well, hey, there's no way to respond, or there are not many ways to respond. But it still raises the question: is that an out-of-band expiration? I guess so. But is it about the thing you were invited to do? If they respond late, after it's expired, they would be sending a DID exchange message, so their context would already be DID exchange. So chances are what you really want to send back is a DID exchange problem report saying expired. Yeah, I got you. So that's the... yeah. Give me a while to get to it. This is definitely our hottest priority, getting this done. Jason, I think you're pretty happy with it at this point. I mean, it follows the happy path; it interacts with that properly. The questions really come from all these unit tests that appear to use random objects, and I don't know whether we need to support them or whether they're realistic. Andrew has chimed in a couple of times with me on the side about things that we're not going to see; they're completely obsolete, like a bare base58 key. And it's the exact same thing somebody messaged me about on the side; they're trying to get it working. Yeah. Who is that? Mergy. Mergy, that's right. Yeah. So they've been messaging me, finding some issues which are totally valid and things I need to change. Daniel, I really appreciate your review on that. I've also spent time updating some of those sections of the code. Definitely some suspicious things that I will either need to better document or clarify or remove, so I appreciate you pointing those out. Yeah. And then the question is, now that I've done this once, do we need to really look at restructuring things at a more DID-method level?
Like, the one that Daniel pointed out is that Askar has a create method which wants to generate the verkey; the method itself wants to generate the verkey. But for did:peer:2, I need the verkey first, so that I can then make the DID doc, so that I can then save the DID. You can't make the DID without it, right. So that's where I've added another path through that code that looks quite odd, and that was something that was pointed out. So yeah, maybe we can talk about that as a follow-up. But for now, I'd rather get the thing in that works, with some tests that can prove it works, and then explore potentially more general solutions in the future. All right. So keep pushing on that. Let's see if we can get this out and merged. This is probably up there as a high priority, with the push to get unqualified DIDs out of the world in the next few months. This is crucial for the Aries community to have. Yeah, absolutely. Between Mergy's observations and Daniel's comments, and maybe a little bit of help on which of the test cases are valid or not, I'm hoping to be able to move fast. It's just been trying to explore which of this is irrelevant, all the history and that stuff. So that's where the hang-ups have come from. Awesome. Okay. Thank you. Okay. And then these two are both valid. Oh yeah, all three of these are new and valid. I have not looked at this one. I see, Daniel, you've responded, and Marcus, I think Marcus is his name, right? Or Matt? Matt has responded. So hopefully we can get that there. Yeah, I think that failure was from the ledger being uncooperative. This change, I'm actually hesitant to go the direction that's being proposed in this PR, and I think some of the issues actually arise out of a slightly deeper problem.
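The did:peer:2 ordering issue mentioned earlier in this exchange comes from the method encoding its keys in the DID string itself. A rough sketch of why the verkey has to exist first (the key value and helper name are made up, and the real encoding uses multibase/multicodec rather than this simplified form):

```python
def did_peer_2(verkeys: list) -> str:
    """Derive a did:peer:2 identifier from already-generated verkeys.

    did:peer:2 embeds each key in the identifier: ".V" marks a
    verification key, followed by the (multibase-encoded) key.
    So keys are *inputs* to DID creation, which inverts a create()
    flow that wants to generate the verkey itself.
    """
    return "did:peer:2" + "".join(f".Vz{vk}" for vk in verkeys)


# Generate the key first (hypothetical stand-in value), then derive
# the DID, and only then build and store the DID doc:
verkey = "6MkExampleVerkeyValue"
did = did_peer_2([verkey])
```

This is why a method-agnostic "create the DID, then attach keys" interface doesn't fit: for this method the key material determines the identifier.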
And Matt has had a very reasonable response, saying that the intention was to just get it working and reduce the number of changes, because it is a deeper problem that's causing the issue. It's been a minute; let me review this real fast. So, recently we added the verification key strategy pluggable component. If you have multiple verification keys, multiple public keys associated with your DID, you can have a different strategy for picking out which one of those keys should be used for doing stuff like signing JSON-LD credentials. So this PR is adding the inverse operation: how do we get the verification key, the actual key material, based on a passed-in DID and verification method ID? And he's put that into the same verification key strategy component that was added, making it a pluggable thing, so we can help ACA-Py know where the keys are in order to verify a JSON-LD presentation that we've received. But yeah, there's some weird stuff that I think arises just from ACA-Py weirdness. For instance, the only way to get a handle on the verification keys is by having a DIDInfo object, and there's a one-to-one mapping between DIDs and verkeys right now. So the interface is just a little bit confused, I would say. And I have the itch that there's probably a better way for us to address this, but I haven't dedicated enough thought to coming up with a better solution than what's being proposed. Okay. And the one-to-one is the big problem; there should be a one-to-many, DID to verkey. Yeah, exactly. So even if we just made a change in this... okay, I say just, but this would be a pretty significant change... if we made it so that the keys getting inserted into the Askar database were identified by a verification method ID, by a DID URL basically, that would solve the problem.
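The pluggable strategy being described, one direction picking a verification method for a DID and the proposed inverse resolving a verification method ID back to key material, could be sketched like this. The class and method names are illustrative, not ACA-Py's actual interface:

```python
from abc import ABC, abstractmethod


class VerificationKeyStrategy(ABC):
    """Sketch of the pluggable component discussed above."""

    @abstractmethod
    def verification_method_for_did(self, did: str) -> str:
        """Choose which verification method ID to sign with."""

    @abstractmethod
    def key_for_verification_method(self, vm_id: str):
        """Inverse: resolve a DID URL like did:example:123#key-1
        back to the stored key material."""


class InMemoryStrategy(VerificationKeyStrategy):
    """Toy implementation keyed by verification method ID (a DID URL),
    i.e. the one-to-many DID-to-verkey mapping the discussion wants."""

    def __init__(self, keys: dict):
        self.keys = keys  # vm_id -> key material

    def verification_method_for_did(self, did: str) -> str:
        # Naive default: first verification method registered for the DID
        return next(k for k in self.keys if k.startswith(did + "#"))

    def key_for_verification_method(self, vm_id: str):
        return self.keys[vm_id]
```

Identifying stored keys by DID URL, as in the toy class, is exactly the change floated at the end of the discussion: it makes the inverse lookup trivial and removes the one-to-one DID-to-verkey assumption.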
But that's not currently how keys are identified in the wallet right now. So yeah, it would be a consequential change for us to make. So the keys are extracted from the DID. Yeah. Okay. So we need Andrew to weigh in on what Askar should do about this. Yeah, I would appreciate anybody else's brainpower, if you've got some to spare, to think about this issue and give input. I probably ought to spend more brainpower on it myself; I just haven't yet. Yeah, no problem. Okay. Just for fun, I'll rerun the failed job, though I suspect it's the same thing again; we've got to find out if it's the same problem, right? All right. This one, I think this one's ready. The failing tests, I think, are again the ledger issues. I think we just need to trigger a rerun, and then this should be good for review. And then, Jason, I'll put your name on it again. Although this is LD proofs. Hmm. Wow. Yeah. Is there anyone else who can, because I don't think Jason's ever even looked at LD proofs. Oh, you have? No, I don't think I have. I would say let's tag Timo, but he's not been particularly responsive; I don't know if he's on holiday or something right now. Let's tag him and let's put Andrew on it. Oh, LD proofs. Let's try Shantra; he's worked in that code before. Okay. Let's go there. And then this last one, I think it made it through a first round of reviews and got approved, and then a new change got pushed. Just a quick fix. Okay. I'll update this, but have we got any approvals on it? No, they were dismissed by his last push. Okay. So we're going to have to re-approve. Yeah, we review and we approve. All right. Good. Okay. Sounds good. It would be nice if we can get it in soon, because this is a big change and it's going to become stale quick. These are going away, I assume; we'll probably close these, but let's not spend time on that. I'm much more interested in this. This is an awful lot of work, it looks like.
Some of it is probably a little less scary than the number implies, but there are a number of things. Yeah. So first things first is dealing with the stale branch and the test that Jason Sherman was trying to put in place. The idea we had was, let's start by getting that test into the integration tests. But with so much going on in ACA-Py itself, that branch is problematic. What are the suggestions on the strategy for that? So I put in some questions that Daniel responded to last night. I was trying to get the work he had done working standalone, which is fine; it does. And then all the existing tests, there are a bunch of issues with those. So I was trying to retrofit the changes to the base classes and kind of push those into the new stack. But unbeknownst to me, or I missed the memo, Daniel's thought is that they don't live together anyway: the AnonCreds work will supersede the existing schema and cred def work, which is what I was trying to patch these things into. So is that what we're doing? Are we just going to drop the existing schema and cred def stuff and replace it wholly with the AnonCreds work? That's what I understand the plan to be. Yep. So removing, deprecating, the old admin API endpoints for schema and cred def creation, and replacing them with the AnonCreds-specific, or ledger-agnostic, AnonCreds endpoints, I guess. Yeah. So then at some point, how does that affect all the existing things, such as Traction, which Camila and I are on the call for, right? I mean, it's just pass-throughs, right? But the thing is, the third parties, the people that are using Traction, are calling the ACA-Py endpoints, and then all of a sudden those things are going to be gone. Right. This would be a significant breaking change. It's possible to adapt the endpoints to the new AnonCreds interface.
I think that is reasonable, at least for schema and cred def, and then for the automated revocation registry setup. I think we could adapt those endpoints without too much trouble. Okay. Yeah, after reading your comment I was like, oh, I wasn't aware of this: the plan is we're going to significantly shift ACA-Py so these things aren't running in, well, parallel, or whatever you want to call it. Yeah, it was a very loose use of the word deprecate. Okay. So the issue is, how do we do this? What is the best way to do this? I guess what we could do, just to think this out, is to look at all the endpoints and say which ones truly go away, which is basically anything to do with revocation, and then which ones we think we can keep as what amounts to redirects to the other, with an adapter in between, a shim in between, to make them still work. But how much does that slow us down in actually making progress on this? So the revocation stuff is, and always has been, the more complicated aspect of the AnonCreds transition. We have discussed this at length, and we're comfortable with removing a lot of the manual, controller-does-all-the-work endpoints in favor of the automated setup after creating a cred def which supports revocation. There are things to be changed, as Jason was finding while trying to adapt those things, there are definitely things to change. But the effort to create an adapter from the original inputs to the ledger-agnostic AnonCreds stuff, I don't think it would be too bad. Yeah, that's my feeling too, after you pointed that out: there are very similar pieces. So I don't think it's too far off, and maybe that's the better approach: leave the AnonCreds stuff as is and then adapt the existing things in place to use that new model.
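The adapter/shim idea, keeping the legacy schema endpoint alive by translating its body into the new ledger-agnostic AnonCreds shape, could be sketched like this. The field names on both sides are assumptions for illustration, not the exact ACA-Py payloads:

```python
def legacy_schema_body_to_anoncreds(body: dict) -> dict:
    """Translate a legacy schema-creation request into the shape a new
    AnonCreds endpoint might expect, so existing controllers (e.g.
    ones calling through Traction) keep working unchanged."""
    return {
        "schema": {
            "issuerId": body["issuer_did"],
            "name": body["schema_name"],
            "version": body["schema_version"],
            "attrNames": body["attributes"],
        }
    }
```

The old route handler would apply this translation and delegate to the new implementation, which is the "redirect with a shim in between" option raised above: the breaking change is deferred without maintaining two parallel implementations.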
So yeah, like I said, there are a lot of similar pieces; they just didn't quite tie up, and there was a reason why they didn't quite tie up. So now, instead of me trying to force this stuff, like, hey, this should work and this should work, we can concentrate on knowing that it doesn't and figuring out how to make it work. Yeah, we fully want to drop any of the controller-based revocation. If we can keep the schema and cred def endpoints the same, and all the other endpoints the same, then we should. And minor breaking changes are okay if we feel like it. As much as we can, keep them the same, but if it starts to become a real hassle, then we may rethink this. But I'm fully on board with removing all of the revocation registry endpoints; I don't think anyone's using them, so I don't think it's a problem. We just say they're gone, and if you were using them, here's a much easier way to do it; your controller is about to get a hell of a lot simpler. We're all good with that. Now, some of the existing proof requests and stuff have an indy block; is that okay? So, when the new AnonCreds work was put together, the key thing we wanted to keep was that the objects passed between agents should stay the same. So those should pretty much be the same; they should not have changed. Issuing a verifiable credential, sending a presentation request, and sending a presentation, all of those things should be the same, and so should the interactions between them, getting an offer and so on. I definitely think we should focus on the send endpoint, for the same reason, and at least deprecate the use of the other ones. I think that would be a good idea. Daniel, you're nodding. Do you think that's okay? I don't have a problem with that; I don't think that would be problematic either.
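For context on the "indy block" question above: that's the ledger-specific section inside a present-proof request body. Since the wire objects stay the same under the AnonCreds work, a payload along these lines should keep working; the shape follows the Indy proof-request format, but the names and values here are made up:

```python
# Illustrative present-proof request body containing an "indy" block,
# of the sort discussed; attribute names, keys, and values are made up.
proof_request = {
    "presentation_request": {
        "indy": {
            "name": "proof-of-age",
            "version": "1.0",
            "requested_attributes": {
                "0_name_uuid": {"name": "name"},
            },
            "requested_predicates": {
                # Prove age >= 18 without revealing the value
                "0_age_GE_uuid": {"name": "age", "p_type": ">=", "p_value": 18},
            },
        }
    }
}
```

The point made in the discussion is that blocks like this, being agent-to-agent message content rather than admin API surface, are unaffected by the endpoint reshuffle.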
Especially as it pertains to changes required for the AnonCreds interface, whether it's using the fully automated flow from the send endpoint or doing it step by step: the changes to those protocols, or rather to the protocol implementations, since the protocols themselves, as you say, are exactly the same as they were before, amount to calling the AnonCreds holder instead of the Indy holder, or the AnonCreds verifier instead of the Indy verifier. That's the level of change required in the protocols at this point for AnonCreds. And then any other changes that we want to make in terms of deprecating endpoints, I think, are fair game. And yeah, the only thing that I think we... well, okay, maybe not. I was going to say the only thing we lose is the ability to interrupt the protocol and say, I'm not going to do this anymore. But I don't know how common that is anyway. And we still have the ability to send a problem report, but it would have to be triggered by a rules engine being evaluated on the controller side as it received webhooks or something, and it would have to interrupt before ACA-Py was able to go through the last steps of the issuance. But I'm not sure what conditions there would be in place to say, hey, I sent you an offer, but I'm not okay with sending you the actual issue-credential message. So okay, let me put it this way: we leave that as an option, and we consider putting into the notes that we would like to move to send. But having said that, we try to avoid that in this step. So the revocation registry ones, without a doubt, go away, and we're all good with that. If we get into problems with the back and forth of issuing, our next strategy would be to consider dropping everything other than the send, but let's see if we're forced into that.
Just out of interest, Jason, in the work you've done, have you done an issuer, and do you use send? Yeah. So most of the stuff — and Milano can chime in if it's changed — but when we were writing our own controller for Traction, the bulk of the work was listening to the webhook messages and trying to evaluate, like, oh, okay, this isn't right, let's stop the flow there. So it was basically starting everything with the automated flows and then trying to interrupt if there was some kind of condition, or if someone had put in some logic or whatever. And I think it's similar in Traction: use the automated flows and then interrupt — just like Daniel's saying, basically someone's going to put in business flow rules, a rules engine or something, that says, hey, this isn't quite right, let's stop this. But I think everything gets kicked off with automated issue.
Okay. All right. So we've got that from a philosophical standpoint. So we have two problems, and we don't have much time left to discuss, so maybe we have to get back together, I don't know. One is dealing with the stale branch, and the second is how we order and distribute the work. Daniel, I don't know if the DCO has resources for this. I would characterize our availability to help as here and there; sometimes we've got a little more time than others. I would have some time, I think, to help update the branch, but the bigger problem on that front is that the changes Andrew made on the indy side are not yet available on the anoncreds-rs side. That's a pending merge — I mean, if the work's been done, it just needs to be merged. In anoncreds-rs, yeah. Okay, so there's an outstanding PR in anoncreds-rs that's ready to go. Okay.
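[Editor's note] The controller pattern Jason describes — kick off the automated flow, watch the webhooks, and interrupt with a problem report when a business rule fails — can be sketched like this. It's a minimal sketch under assumptions: the topic name, event fields, and rules below are all hypothetical, and a real controller would POST the problem report back to the agent's admin API rather than collect it in a list.

```python
from typing import Callable, List, Optional

# Hypothetical business rules: each returns a reason string when the
# exchange should be stopped, or None to let the automated flow continue.
RULES: List[Callable[[dict], Optional[str]]] = [
    lambda ev: "amount too large"
    if ev.get("attrs", {}).get("amount", 0) > 10_000 else None,
    lambda ev: "issuer not allowed"
    if ev.get("issuer") == "did:example:banned" else None,
]

def evaluate(event: dict) -> Optional[str]:
    """Return the first failing rule's reason, or None if all rules pass."""
    for rule in RULES:
        reason = rule(event)
        if reason:
            return reason
    return None

def on_webhook(topic: str, event: dict, problem_reports: list) -> bool:
    """Handle one webhook event. When a rule fails, record a problem
    report (a real controller would send it to the agent before the
    flow's next automated step) and return True to signal the interrupt."""
    if topic != "issue_credential":  # topic name is illustrative
        return False
    reason = evaluate(event)
    if reason is None:
        return False
    problem_reports.append({"exchange_id": event.get("exchange_id"),
                            "description": reason})
    return True
```

The timing caveat raised above still applies: this only works if the controller's rule evaluation runs before the agent advances to the next automated step of the issuance.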
The biggest thing here is I have no clue what the ordering is on these. Like, if we have one person, where do they start? If we have two people, which two places? If we have three people, which three places do we start? That's the question. So I would say, Daniel, the biggest thing you can help us with is just to look at these and say, okay, if it was me, here's the order I would do it in. Yeah. Andrew's changes — so that's at the Rust library level. I guess, Daniel, having looked at them, it probably doesn't substantively change the work that you did? No. So Andrew did a change to ACA-Py when he put credx in. So when I tried to do a quick merge between main and the anoncreds-rs branch — and because it was so isolated, there really was only the base ledger class and the revocation registry class — Andrew had put stuff in there, which I was like, oh, this might be a problem for you guys. But overall, yeah, those were the two areas. I was just going to say that if the work Daniel did in ACA-Py won't really change too much, there's still the bridging exercise between what's there and the current schema and cred def stuff that could get worked on — like in preparation for, hey, this is how we're going to transition these APIs. That could get done and probably wouldn't be impacted by any of the other changes, and it's work that needs to be done if we're going to go that way and keep these things alive at the same time. I think, just to understand what you've done as we wrap up, Daniel — your team implemented the AnonCreds API endpoints? Yes. So we literally have the indy endpoints — the schema and the cred def endpoints that exist — and we have the AnonCreds ones.
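[Editor's note] Comparing those two endpoint inventories can be started mechanically, e.g. by diffing the route lists pulled from each build's OpenAPI spec. A minimal sketch, with placeholder route names (these are not the real ACA-Py admin paths):

```python
def diff_routes(old: set, new: set) -> dict:
    """Partition admin routes into kept / removed / added."""
    return {
        "kept": sorted(old & new),
        "removed": sorted(old - new),  # candidates for deprecation notes
        "added": sorted(new - old),
    }

# Placeholder route names for illustration only.
indy_routes = {"/schemas", "/credential-definitions",
               "/revocation/create-registry"}
anoncreds_routes = {"/anoncreds/schemas", "/schemas",
                    "/credential-definitions"}

report = diff_routes(indy_routes, anoncreds_routes)
```

The "removed" bucket is the interesting one for this discussion: each entry there needs a decision — keep it as a shim, deprecate it with a note, or drop it outright as planned for the revocation registry endpoints.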
And so an exercise needs to be done to cross-check those and see which ones are going to be problems and how we might deal with them. And then, just for fun, we've got the changes that Andrew's done, which are going to remove parameters and add parameters. I mean, they're relatively simple; I wrote them down yesterday. So on issuing, the tails file is no longer needed. On revocation, the tails file is not needed, and the cred def and rev reg private key are needed. Those are the only differences. Right. And they're only to do with issuing and revoking credentials, so those should be internal anyway; they won't be a problem from the interface. Okay. So all we've got to do is compare those APIs and see what we can do about keeping the old ones, and which ones would be a problem.
I think the next big thing is the migration from existing objects to the agnostic objects. I'm actually really excited about the changes for the anoncreds-rs library and the changes that Andrew made in indy-credx. I think they actually simplify things in a really good way. It won't appear at the admin API level, certainly, but underneath, in the AnonCreds interface, there's a whole load of things we can simplify significantly, and no longer needing to manage tails files locally also solves a number of previously existing problems. So I think those are all going to be really positive changes, but they can be done piecemeal as we merge in the library and pull in the updated version as a dependency. I get it — that is a significant change. You no longer have to keep the tails files around. That is significant. Okay. Well, we filled up an hour. All right. So I had a couple of things that I wanted to bring up. We don't have a ton of time.
So the only one that I really want to bring to your attention live is: we've made some changes to the indy-tails-server in order to support the AnonCreds changes, and we're actually unable to create a PR against the BCGov indy-tails-server project. We have to be collaborators in order to open a PR to the project, apparently. Surely we can fix that; I can take a look quickly. That must have been some oversight, whatever you want to call it. I meant to do that before the start of this. Okay, that one can be done. Anything else? I mean, I'm interested in talking more, but the rest can wait. Do you want to have another meeting this week? I'm open to that, and happy to talk a little more in depth about any of the issues on the AnonCreds stack. I can also spend some time prioritizing, I suppose — giving a suggested "here's probably what should be done first, here's what can wait" ordering to those tasks — and can answer questions there. And then the other things I had are mostly high level, like, what are our views on OpenID for VCI from the ACA-Py context and stuff. Okay, I might set up another meeting for Thursday, if you don't mind, and let's see if we can do that. Thursday will work for me; I've got time. We often get together on Thursdays anyway, so we can actually be in person for our team and then include you and anyone else who needs to join. Okay. All right. Sorry to run. Good meeting. Thank you. Thanks, everyone. Have a good day. Bye.