Hey. Hello, hello. Let's give it a minute or two; I just need to complete the agenda, last minute today. Unfortunately I should have done this earlier, but we can give it a minute or two in case somebody else is going to arrive, and I'm also going to turn off the meeting analytics. All right, let's get started. Let me share my screen. There we go. Welcome to the May 4th, 2023 call. This is our high privilege and trust policy notice, and with that we can get started with our agenda.

So, starting with the good first issues update. The good first issues have gotten quite a bit of traction within the repo. We have a number of them created by George and Bogdan. This one is something we already discussed last week, and techbash has picked up the one about the base wallet. Then we have a number of others created recently by Bogdan, so anybody who's willing to learn something and contribute, there's work available. The second one is already picked up, I think; there's a PR for that by swaptr, possibly. I don't really understand how this works: I was able to assign techbash last time, but I can't find swaptr here, so I'm not sure what the difference is. Okay, so this one is picked up; two or three more are left.

Next up, the mentorship program status. Similar to last week: we are still in the application stage, all the way until the 15th of May, so you can still submit an application if you would like to become a mentee in our mentorship program under the Linux Foundation.

And we can move to our review. First, there was a small fix for an issue with writing to the ledger. This concerns pretty much only issuers, not verifiers, and definitely not the mobile use cases. It was a bug introduced in the 0.55.0 release, and it's fixed now. Next up, we had two good first issues from first-time contributors merged.
One by swaptr and the other by techbash, so props to you guys, thank you for the PRs. Then there's something still somewhat in progress, but ready to be merged from my side: some cleanup of leftovers from the FFI layer. George, I'm sure you're familiar with this stuff and happy to see it gone. There were some tricks in there to keep tests passing when this was introduced, and now that's gone. So have a look at this PR; it's fairly simple, it just removes some ugly pieces. Was there a trick to get tests to still pass, or was it just not needed anymore? Like, could you just remove it with no other changes? Actually, to get the tests passing I had to make some minor modification, I think in one place. Where was it? I'm not sure exactly; it's in here somewhere. After I removed it, I think only one test started failing, the Node.js wrapper or something like that, so it was really just some edge case this was holding up.

And lastly, probably the most significant item from last week: the DID resolver and parser PR has been thoroughly reviewed and the comments addressed by Miro. Perhaps, George, if you'd like to give it your time, feel free to have a look and leave a review yourself, but Miro, Bogdan and I have pretty much gone through it. Yeah, don't let me hold it up; I'll try to look. Okay, would you like to take a look, or do you think we can just go ahead and merge it? Just go ahead, I think; I've been somewhat following the threads and it seemed very active, 157 messages in the conversation. Yeah, I think it was one of the most discussed PRs ever. So it seems like you guys have been pretty thorough with it. Okay, so after Miro finishes addressing all the points we left, there's not much remaining, and then we'll go ahead and merge it, maybe even today. All right.
And so we have lots of good stuff in progress. Where do we even begin? I'll start with the easiest: this PR is ready from my side, it's just missing an approve from someone. It just prevents attempts to publish a libvcx image when the PR is coming from a fork, which was always making fork PRs fail. I intentionally pushed this PR from a fork itself, and with this modification it passed, so you guys can go ahead and merge it.

Next up, we have the VCX iOS CI refactoring. Do you want to say a word or two here? Yeah, so basically the aim is to speed the builds up. A lot of architectures get compiled that are not needed, especially for the underlying libraries being used, like libsodium and ZeroMQ and even OpenSSL. That's pretty much it, and additionally it simplifies the whole thing by rewriting the build scripts for those two libraries, libsodium and ZeroMQ, rather than depending on the Ruby scripts that Evernym created. Not because there was anything wrong with them; I guess their use cases were just different, with the way those static libs got packaged after the script was done and then we would unpack them and all that. I think this simplifies it a lot. Right now it's kind of stale; I'll have to continue with it next week for technical reasons, but it's pretty much done, it just needs a couple of rounds of testing. And I can only do that properly on my local machine, on a local VM running macOS, because in the CI it just takes forever to get one run going, and then some error comes up and you've got to restart it. It's like one hour per run: you make one small change and wait an hour to see the result.
Also notably, we are removing the entire legacy version of the iOS wrapper. These two things are essentially the same, but the legacy one was missing some refactoring which was done by a contributor, I don't remember exactly who, back in December, and we kind of wanted to keep the old version running for a while. But now we are removing the old pre-refactoring version, so that's going to be gone. Right, lots of code removal.

Hey, out of curiosity, what is the final build artifact for the iOS wrapper? Is it a framework or is it like a .a? It's a framework. Okay. A normal framework or an XCFramework? Let's see. Oh, we are publishing here; it's all published as zips. We publish two versions: one is a device-only build, and the other one should also be possible to run in a simulator. Cool. We can take a look at what's in there. Do you know if we're compiling for arm, i.e. M1, simulators now? There are no macOS builds now. Oh, I mean, I assume this framework has both an x86 simulator slice and an iPhone slice. Yeah, but there are also arm simulators on the newer machines. I don't know if the libraries get compiled for that, because essentially even the Evernym Ruby scripts would just use some other build scripts, built by other people, that wrap the build so the libraries get compiled for iOS, watchOS and tvOS, all of that Apple ecosystem, but I don't know if there's anything regarding M1s. Right. No, all good, I think; leave a note on the PR. I can have a look at it next week, maybe not add it initially, but at least scout around and see whether it's possible.
Yeah, I'm bringing it up because I've gone pretty far down this rabbit hole before with building Rust for iOS. The problem with .frameworks is that a framework is meant to represent a build for a single architecture. People have found ways to basically create a fat .a file that contains architectures for both the x86 simulator and the iPhone, but you're not meant to do it like that; you're meant to have a framework and a .a for each architecture. So with the framework approach, I don't think there is a way to have a universal binary that has the x86 simulator, the M1 simulator and the iPhone all in one. I think for that you need an XCFramework. Anyway, it's a bit of a headache; I think what we have is fine.

Yeah, we just have the .framework as far as I see, and we do split the static libs into individual architectures. Essentially the Ruby scripts by Evernym would make a fat, multi-arch library, and then we would split it. That's another reason why we're rewriting these scripts, so we can remove that part: when we build the libraries, we build them one architecture at a time, use them, build libvcx, build the framework, and then keep going like that. But again, I'm by no means an expert in this field, so feel free to have a look and let me know if something seems like it could use an improvement. Yeah, I'm not an expert either; it confuses the hell out of me. The only reason I started doing this was because of the ZeroMQ compilation issue that started occurring when the version of the Rust compiler got bumped as a result of the messages crate getting integrated. The CI would run for a long time and the whole troubleshooting was a pain, and I figured: what if we sped this up a bit? It would benefit all of us and all future PRs. So that's it.
Oh yeah, that's great. Awesome. Also, just for information's sake, building only for the x86 simulator and iPhone isn't really a problem, because on an M1 you can still open up Xcode in Rosetta and it runs everything fine in x86 simulator mode. Sure. Okay, so it's not some sort of blocker; not that important then. Yeah, it's just a little detail: you've got to remember to open Xcode in x86 mode if you're using the simulator. Right.

Let's go on to the next item. So we covered the iOS refactor and the fork fix, and then we have the bigger stuff. This is something freshly cooking right now. Bogdan, can you leave a few comments here, or maybe share your screen and go through your POC? Yeah, I'd actually love to talk about this. Let me close some stuff here and share my screen. Can you see my screen? Yep. Let me zoom in a little. I think you have a monitor plugged in; maybe it'll be better if you unplug it. Yeah, just a sec, I'm going to switch screens. It's a nice ultrawide monitor, looks like. Yeah. Cool, that's great.

So let's actually start with the ledger. Basically, I was looking at all these traits, ledger and wallet, and thinking about how we can make them better, with the initial effort coming from my idea of removing that profile wrapper, which is honestly a bit annoying to work with, because you carry all this stuff around with you and it's not really needed all the time. We had a lot of talks about these things and realized that we could split them into multiple parts, which would make more sense. So for instance, take the ledger trait, or the base ledger as it was called before.
When you want to do an implementation, a lot of custom code or agent implementations would only ever be interested in reading from the ledger, not writing. Basically, a type implementing just the read operations would be much simpler and slimmer. Whereas if you do need to write to the ledger, and you need additional details to come with that, then there's no problem: you can add that as an extension rather than having it implicit and requiring all that information just to do a read on the ledger. So essentially the idea would be splitting the traits based on the type of operation.

And then, given the issues that were created about having stronger, better defined types for the arguments and return values used in the traits, my idea is basically to use associated types. When you write an implementation of the trait, you can define exactly what types you want to use. Let's say you have an Indy ledger and you implement ledger read on it: you set the schema type to the Indy schema, the cred def type to the Indy cred def, and so on, and you can then use these associated types as return types and arguments. Having dependencies between traits like that would mean that, for instance, it seems sensible to say that if you want to implement ledger write, then you should also be able to read from the ledger, so your type should implement ledger read; as a result, you can use the associated types from ledger read here as well, and even in the anoncreds one.

Now, another idea that Patrik had, which I only drafted like ten minutes before the meeting: we could essentially split the anoncreds trait into role-based operations. So you could have just a verifier with its methods, and the prover, the issuer; I don't know if there's anything for holder.
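To make the read/write split and the associated types concrete, here is a minimal sketch of the idea being described. All names (`LedgerRead`, `LedgerWrite`, `IndySchema`, and so on) are illustrative, not the actual aries-vcx API:

```rust
/// Read-only ledger operations; each implementor declares its own
/// concrete types for the values the ledger returns.
trait LedgerRead {
    type Schema;
    type CredDef;

    fn get_schema(&self, id: &str) -> Self::Schema;
}

/// Writing implies reading, so `LedgerRead` is a supertrait and the
/// write side reuses the associated types declared on the read side.
trait LedgerWrite: LedgerRead {
    fn publish_schema(&self, schema: &Self::Schema);
}

struct IndySchema {
    id: String,
}
struct IndyCredDef;
struct IndyLedger;

impl LedgerRead for IndyLedger {
    type Schema = IndySchema;
    type CredDef = IndyCredDef;

    fn get_schema(&self, id: &str) -> IndySchema {
        // Stand-in for an actual ledger query.
        IndySchema { id: id.to_string() }
    }
}
```

A consumer that only ever reads can depend on `LedgerRead` alone and never has to supply the extra details the write side needs.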
Again, this is really basic, quickly drafted code, so take it with a grain of salt; it's just about relaying the idea. For instance, you have an implementation of the anoncreds verifier, and you want it for a specific type of ledger. That would mean your ledger would essentially have to at least implement ledger read; depending on the role, maybe it needs ledger write instead, I don't know. And then you can use those same associated types and propagate them further on. So if you have an Indy implementation of anoncreds, you will obviously be using the Indy ledger, which passes in its associated types. And that's pretty much the idea.

We have the same for the wallet. I don't know if it makes that much sense there right now, but essentially it's just having some associated types so we can make these traits ledger-agnostic and implementation-agnostic. So, similar to what you'd use for anoncreds: you wouldn't even need to deal with the other roles here, or with the Indy stuff, or try to emulate that, mock it, or throw errors, because splitting things like this means: okay, you implemented the anoncreds stuff, but only for the prover, and that's it. The ledger type would also go here, of course. Again, this was quickly drafted, and I don't know if there's anything else that should be added, but that's pretty much the idea: a very basic concept of roles, using well defined, or at least better defined, types in all these implementations.

That's really cool. Nice. And I think splitting the traits by roles is definitely the right thing to do, because as it is, our anoncreds implementation has unimplemented methods for like half the stuff; same with the ledger using indy-vdr. But then, this was Patrik's idea, and it's right.
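The role split could look roughly like this. `AnoncredsVerifier` and the other names are hypothetical; the key point is that the ledger dependency is a type bound, not an instance, and prover/issuer/holder would be separate traits so an agent implements only the role it plays:

```rust
trait LedgerRead {
    type Schema;
    fn get_schema(&self, id: &str) -> Self::Schema;
}

/// A verifier only needs read access to some ledger type; the ledger's
/// associated types propagate into the verifier's signatures. Prover,
/// issuer and holder would be separate traits, so an agent that is only
/// ever a prover implements just that one trait, with no stubbed methods.
trait AnoncredsVerifier {
    type Ledger: LedgerRead;
    fn verify_proof(&self, ledger: &Self::Ledger, schema_id: &str) -> bool;
}

struct IndySchema {
    id: String,
}
struct IndyLedger;

impl LedgerRead for IndyLedger {
    type Schema = IndySchema;
    fn get_schema(&self, id: &str) -> IndySchema {
        IndySchema { id: id.to_string() }
    }
}

struct IndyVerifier;

impl AnoncredsVerifier for IndyVerifier {
    type Ledger = IndyLedger;
    fn verify_proof(&self, ledger: &IndyLedger, schema_id: &str) -> bool {
        // Toy check standing in for real proof verification.
        ledger.get_schema(schema_id).id == schema_id
    }
}
```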
And I guess it makes sense from a consumer perspective too: if you want to do your own implementation, and you only want to implement part of the protocol, like your agent is only ever going to be a prover, then that's fine. You don't need to mock the rest of the trait, or throw errors, or stub things out, or even do proper implementations; you can just choose the minimum of what you want to do and stick with that. So yeah. Also... go on, go on.

I was going to say, one part is confusing me a bit. Let's say you were to implement an anoncreds prover using credx as the implementation base. You'd have to marry that implementation to a specific ledger read instance, right? Would that be a problem? Well, it's not an instance, it's a type. If you have two ledgers of the same type, you can use either of them. I mean, I understand the associated types you're pitching for the ledger, but basically you will end up with two instances of this, each implementing read for a different ledger and returning a different type. I feel like it's a matter of maturity of the anoncreds specification, but regardless of the ledger, in the end you should end up with the same shared structure, like one anoncreds schema; it should be pretty well defined. Yes, I agree, but it's not right now. Well, yeah, and if it were, then for example the anoncreds verifier wouldn't know about any kind of ledger differences; it would just take that schema.
Right, so if you had a well defined schema that absolutely any implementation would return and use, you wouldn't even need the associated type; you would just use that one structure, because it's well defined. But given that everybody is pretty much just catering to their own use cases and needs, you cannot really rely on a common implementation, so the easiest way to accommodate that kind of stuff is pretty much having some type bounds like this.

I don't know; right now the associated type doesn't present any advantage, because we don't have any other implementations. But the moment we actually start adding some additional ledger, let's say Cardano, for example — I saw there's an anoncreds method for Cardano — so let's say you want to add Cardano as an anoncreds registry for schemas, credential definitions, stuff like that. At that point, I believe it should be possible to find that shared baseline: the anoncreds specification would mature in such a way that we can define a schema structure which fits both the one resolved from Cardano and the one resolved from Indy. Right, but I don't think we know that right now. As I said, I completely agree the ideal solution would be that everybody agrees on a structure and specification of these data models and objects, so that they have a matching set of fields and data types; that just doesn't seem to be the case. But actually, shouldn't it be already? Or is it not? I'm not sure; let's just look at the spec right now. Looking at the schema... okay, there is some TODO in the spec. Yeah, I don't know. I mean, the one thing that bothers me about using the associated types right now is that there will basically be only one...
Like this stuff here, I guess, and maybe one version of it. So I'm just questioning: if there's going to be only one right now, isn't it better to just hard-code, for example, these anoncreds types, the way it was with the Indy structures, and then once we actually have something to add, introduce the associated types to enable that, if it turns out it's not possible to keep the shared type?

Well, if you make it this way from the get-go, then the types are already there. And let's say these all converge to the same type definitions at some point, for Indy and whatever else is out there: they'll have the same schema with the same fields, the same credential definition, the same specification for all these types. Then you can just use that one type in all of them, as a quick and dirty way of getting them all on the same page, and afterwards refactor to the actual single type in the return values and arguments and all that. But until then, you can have different types without anything conflicting. That would be easy to do: removing this later would be easy, but adding it later would be much harder. All right, I'm not necessarily strongly against it. I personally don't see that much value in it right now, but I don't mind doing it either; it won't hurt, it's not like it's a lot of cost or anything. I think the whole idea behind it was to be agnostic over this stuff in some way. It's not the ideal way, but it is some way, and when the ideal way appears, we can easily get there. Right.

So, sorry, just to confirm: an implementation of, say, an anoncreds prover would be coupled with a specific implementation of ledger read? That'd be right, right?
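For contrast, the counter-proposal being argued here can be sketched the same way, under the same hypothetical names: hard-code one shared set of types now and introduce associated types only when a second, incompatible implementation actually appears.

```rust
/// One concrete schema type shared by every ledger implementation,
/// modeled loosely on the Indy structures (fields are illustrative).
pub struct Schema {
    pub id: String,
    pub attr_names: Vec<String>,
}

/// No associated type: every implementor returns the same `Schema`.
trait LedgerRead {
    fn get_schema(&self, id: &str) -> Schema;
}

struct IndyLedger;

impl LedgerRead for IndyLedger {
    fn get_schema(&self, id: &str) -> Schema {
        // Stand-in for an actual ledger query.
        Schema {
            id: id.to_string(),
            attr_names: vec![],
        }
    }
}
```

The trade-off under debate: this version is simpler and every consumer sees one concrete type, but if a new ledger later needs genuinely different types, retrofitting associated types touches every implementor, whereas dropping unneeded associated types is mostly mechanical.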
Yeah, and it's basically a type bound, not an instance. Right, but you'd be coupling an anoncreds prover to, say, the indy-vdr ledger, for example. Well, careful: not even the indy-vdr ledger specifically, just an Indy ledger in itself. There'd be no difference whether you use vdr-tools or indy-vdr for the ledger reads; you would still use the same Indy types. But if you were to add Hedera, for example, then yeah, I guess that would shuffle the cards a little bit.

The idea of binding these is simply because we want to use the associated types; that's all there is to it. I don't know if it sounds wrong to have this verifier type tied a bit to the ledger type, but there's really no other way, other than just hard-coding that if this works with the Indy ecosystem, then it will accept the Indy schema, the Indy cred def, Indy whatever; and then you just have separate implementations regardless.

Correct me if I'm wrong, but I believe this would actually require minimal changes in the consuming code, wouldn't it? What do you mean? Like the code which is, for example, calling the verifier's verify-proof: no changes would be needed there, because in the consumer code the type will be the same. This is just an abstract type definition, but it will match up. Yeah, ultimately the types remain the same as in your code right now; it's more about preparing for the future and for other implementations, because right now we use Indy and we use Indy types, and these associated types are basically going to be bound to them. Do you have any concerns about this, George? I'm just trying to think about what an implementation would look like.
Yeah, so it would pretty much look like this. Let's say we need a ledger; there are all the methods in here. The schema type should be an Indy schema; I don't know where it is exactly, but let's assume. We do that for the types, then implement the methods so everything returns the proper thing, and in here you basically do that; this will never even take an instance of a ledger itself, it just uses the associated types that I define here.

Yes, but more so that... in the Indy example it works fine, because Indy is already all coupled together, but something like credx, I don't know if that should necessarily be tied to an indy-vdr ledger implementation. What would it be tied to? What ledger does it look things up on? Does it still use Indy, or what kind of ledger? Well, hmm. Or, I don't know, what's the difference conceptually, not at the type level? So when you're saying Indy ledger and Indy anoncreds verifier, I assume you mean the indy-sdk or vdr-tools implementations of the ledger and anoncreds respectively? It can mean anything. Okay, I see what you mean. Let's say this one is basically indy-vdr. Let's say it's the same type for the creds, just renamed here in the module, just to make it clearer. I think a few small changes will make this reflect the final state better. Let's do one more impl: we have ledger read for the indy-vdr ledger; let's also do ledger read for the vdr-tools ledger. And the point here is that in both cases you can essentially share the same types if you want, no problem, and this one can work somewhat differently internally. And then the actual types here would likely be taken from the anoncreds crate.
So let's not call it the indy-vdr schema; let's call it, for example, anoncreds schema in the ledger read types. Okay, so the type schema equals the anoncreds schema; let's say they come from that module. And this one... no, it will actually probably be the same thing; I think these are the same. Yeah, because it's both the same ledger, right? It will have the same types in the implementation. That's what I was confused about; I didn't really get what George was referring to.

So then, let's say you want to do the vdr-tools anoncreds: you bind it to the vdr-tools ledger type, and it still uses the Indy types, because they're common, but the method implementations will be different if you need that. Let's say, I don't know, maybe a couple of things are different; maybe vdr-tools has an optimized version of the credential definition, so maybe that one type is different. You can mix and match them however you want; it wouldn't be a problem. And this would apply for each one. Right, okay, I think I understand. But it's still tied: an anoncreds implementation is now tied to a ledger type. I think that regardless of what you do, it will be tied, because it uses the ledger; it needs some types that are defined on the ledger. Even if you go the other way around and say the ledger is tied to an anoncreds type, it's still the same thing: you have to have these common types, because they both work with the same definitions. There's no way to untie them. Right. Well, other than defining our own structures for those types.
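The live demo above can be condensed into a sketch: two distinct ledger types both bind their associated types to the same shared struct, so downstream code sees identical data shapes even though the method bodies differ per backend. Names are hypothetical, not the real aries-vcx API.

```rust
#[derive(Debug, PartialEq)]
struct IndySchema {
    id: String,
}

trait LedgerRead {
    type Schema;
    fn get_schema(&self, id: &str) -> Self::Schema;
}

struct IndyVdrLedger;
struct VdrToolsLedger;

// Both ledgers bind the same shared type...
impl LedgerRead for IndyVdrLedger {
    type Schema = IndySchema;
    fn get_schema(&self, id: &str) -> IndySchema {
        // ...but each impl can fetch/parse differently under the hood.
        IndySchema { id: id.to_string() }
    }
}

impl LedgerRead for VdrToolsLedger {
    type Schema = IndySchema;
    fn get_schema(&self, id: &str) -> IndySchema {
        IndySchema { id: id.to_string() }
    }
}
```

Because `Schema` resolves to the same type for both, an anoncreds role trait bound to either ledger type handles the same data, which is the "mix and match" point made above.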
Yeah, but then how does that help anyone? If someone wants to use, say, the Cardano implementations, or Hedera implementations, and they cannot use our types, because there's no common specification or well defined spec, which we currently don't have for these things, then they would have to build up their own things anyway, right? I mean, after all, what's the argument against defining custom types right now, and what's the cost? You don't need to follow any spec; you just create a schema type, and it will contain what information we have, which is just the Indy stuff, right? Yeah, it will contain the important pieces.

I mean, the baseline wouldn't change: in the indy types crate there's a schema definition, and it contains all the anoncreds stuff plus some extras, like the sequence number or something like that. Obviously the sequence number wouldn't be part of a common type. But it is part of the thing; it is part of Indy. So if you make a common implementation of only the common ground between the schemas that are out there, you are only going to have a half-baked solution, even for the Indy ecosystem, which is the one we need. If the sequence number is used in the Indy ecosystem, which is probably why it's there, then we're not going to have it here, so we cannot use it. Well, I don't believe it would be like this structure... That's just an example, though.
Because anoncreds itself is fairly stable; I mean, there are TODOs, but it's been around for a couple of years and there wouldn't be significant changes. They are already preparing the next anoncreds version, but that's something we don't have to be concerned about right now. I believe it would be possible to come up with these structures where you just remove some stuff from what the Indy types declare, and then have those types as inputs to the anoncreds traits, for example.

But I still don't get why you would do that when you can avoid it. You're basically tying everything to a single set of types, which basically just lets you have one ecosystem: you have the Indy stuff, and that's it. It forces you to funnel whatever you resolve on other ledgers into one type, to just take out the important parts. Yeah, but why is that better? It's consistent; you only have one. It's more limited. Yeah, but the consistency, as I understand it, is within an ecosystem, not necessarily across them. Right now, ideally, yes, it would be amazing to have a common schema across multiple ledgers. But right now, each ecosystem really only cares about, and is more concerned with, having their stuff working right in their own courtyard. But they still all do anoncreds: the important pieces are for sure the same; otherwise what they do is not anoncreds, and if they don't do anoncreds, we don't care. Okay. We're just coming from Indy, so the other implementations kind of follow how Indy and anoncreds set the road. If they have a significantly different schema, then it's probably not anoncreds anymore. But then why not just use the types from Indy everywhere, instead of defining our own?
Because the types declared in the indy types crate have some extra fields, like the sequence number, which is not really related to anoncreds. Don't you see the contradiction? Indy is derived from anoncreds, but then the Indy anoncreds implementation has some extra stuff on top of what it's derived from. Sorry. I'm saying this avoids all of these issues: there's absolutely no problem, and it's easy to take them out later. But if we start off the wrong way and then realize we need these, it's going to be much harder to add later on. And I don't really see the problem. I mean, don't get it wrong, I'm just trying to make the case for my opinion; you guys might be right and I might be wrong. It's possible; maybe I would like to take a deeper look myself. I have an impression that George is also leaning that way, maybe also based on the comments you left in some of the related issue threads we had.

Yeah, I don't know; I'm trying to form an opinion, but I'm struggling to. It feels like it should be possible to define a common structure type, but there are obviously a lot of ledgers that I'm unaware of, like the Cardano ledger and what it requires for schemas. I believe that AFJ has pretty much solved this problem by having common types that their indy-sdk implementation and also their indy-vdr implementation both use, taking in the same types through their interface. TypeScript doesn't have these cool associated types, so I guess they couldn't leverage this even if they wanted to. So yeah, I don't know. I've seen the AFJ implementation, and I thought they did pretty well with defining common types, and it seemed to work. But there are a lot of unknowns, I guess. Let's maybe cut this short. Yeah, let's sleep on it a bit.
We don't have much time, and I feel like this could still be a richer discussion, but at the same time maybe we're not getting deep enough into it to form our opinions — because I'm not 100% confident in what I'm saying, and you might be right or maybe I'm right, I don't know. So yeah, let's sleep on it a bit. I'm not 100% confident in what I'm saying either, but thanks for showing us — that was really cool, and a cool idea, I think. But definitely the splitting out — I think that's a good idea, and typing it with... well, whatever type is better than String, and then maybe these associated types. Let's discuss it on Discord. Also, just throwing it out there: I still think the base wallet should define its own types, because there are so few of them and they're very simple. But that's my two cents. Yeah, I've heard enough — I don't really know; that does seem to be the simpler part. The more complicated ones are the ledger and AnonCreds ones, though the wallet seems to just have something like a record type. Maybe, but I don't know if that's actually worth having separate types for, or if it can be shared — so I agree, I don't really know. Okay, and we have one last item here under progress, and that's the did:web implementation. That's built on top of the DID resolver PR — kind of further battle-testing the trait design, and kind of an example, so that we have at least two different implementations of the resolver. And I believe that this year we will definitely be adding the did:indy method, which is not yet supported by the ledger, but as far as I know it should be with the upcoming update of indy-node. So yeah, there's this PR. I haven't had a look at it yet, but it seems like the did:web method is fairly simple. So it's for you guys to have a look at and leave a review. And yeah, that's it.
I don't have much time, so I'll just go through this quickly, kind of looking into the future. The priorities: this one is pretty much done — it's going to be merged. And these two — the new approach to testing, and the typestate pattern — are highly related, so George, I know you've been working on the holder, right? Yeah. It'd be good if I could show something, if we have a little bit of time. I think we can stretch it a bit if you're fine with that. All right, go ahead, George. Cool. Can you see my screen? Yeah. Cool. So, there was that discussion I started around soft failures versus hard failures, and cases where one operation can have both a soft fail and a hard fail. For example — oh yeah, receiving an issue-credential message. It can fail in two ways: it can fail if the thread ID of the message doesn't match, or it can fail on some sort of wallet or network error when storing the credential. And I guess I was considering thread mismatches to be a hard fail. Would you guys agree? It's not a hard fail — it just means the message isn't suitable. What do you mean? I'm saying it just means that the message you received isn't suitable for this state machine instance, because the thread ID doesn't match the expected one. I don't think this would be something unrecoverable. Let's assume the thread ID doesn't match, you send a problem report to the other party, they magically fix it and send you the right message — then you can advance the state machine without a problem. Right. On the other hand, I feel like if you try to update the state machine with a message whose thread ID doesn't match, it's likely actually an issue in the receiver's code, not really the sender's — because if the sender sends the wrong thread ID, then you wouldn't even be able to...
I mean, there's not even a way for you as a receiver — assuming you have multiple conversations going on — to know which state machine you should match it up to. You shouldn't even... Yeah, but that's basically the job of having some sort of thread controller that stores the information, and maybe the IDs, of the state machines that are ongoing, right? So when you receive, say, some request message on a certain endpoint, and there's some ID in it, you know that ID matches a thread ID, so it belongs to that conversation. So you pass it along, the thread ID gets checked, and apparently it's a mismatch. Ultimately, I don't see that as a hard failure at all. I see it as just: the message is wrong — or better said, the message is wrong for this state machine instance. Whether the state machine instance is the wrong one, or whether the thread ID was incorrectly placed there, I don't think really matters. Ultimately, if there were a match between the two — if a new message came in matching the same state machine instance — then, magically, the problem would be resolved; there's no issue, and the state machine can continue. A hard failure would be something completely unrecoverable, which, from the protocol schema as I see it in those diagrams, would be some final state where some actual condition required for the protocol to move forward is not met. I don't know — say a verifier asks you for some proofs that you simply don't have; there's nothing you can do about that, right?
So you're not going to magically get those proofs or credentials between this message and the next one — and if you do, they can ask you again, but given the current state of things... I don't necessarily have an opinion right now on whether it should be hard or soft, but I'm thinking it should pretty much never happen. Think about how these Aries agents usually operate: you receive some message, and the first thing you do is look at its thread ID and try to match it to a conversation. Either you match something up — in which case, by definition, you found something that matches, you update it, and this error doesn't occur — or the thread ID is wrong and you just end up matching no conversation at all, so you never even attempt the update. It should basically never happen that you try to update with the wrong message. And if it does happen, it must mean that, as an agent, you received a message with some thread ID and matched the wrong conversation — because of an agent bug or something. So I don't know whether it should be a hard failure or not; it feels like it won't happen unless the agent itself is buggy, so maybe it doesn't matter that much. Yeah, I agree with both of you — I think it probably should be a soft error, but it shouldn't happen in the first place if the agent controller is doing things right. Yeah, I guess in general I'm struggling to think of hard failures, but I'll keep thinking. An obvious hard failure, I guess, is when you decline — but right, as you said earlier, it's weird to call that a failure. I guess it really depends on what...
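The "soft failure" position argued above — a thread ID mismatch rejects the message but leaves the state machine intact — can be sketched in a few lines of Rust. The names (`Holder`, `SoftError`, `receive_credential`) are illustrative assumptions, not the actual aries-vcx API; the point is only that the update returns an error without consuming or corrupting the instance, so a later, correct message can still advance it.

```rust
// Illustrative sketch: a thread ID mismatch as a *soft*, recoverable failure.
// The state machine instance stays untouched and can accept a later message.

#[derive(Debug, PartialEq)]
enum HolderState {
    OfferReceived,
    CredentialStored,
}

#[derive(Debug, PartialEq)]
enum SoftError {
    ThreadIdMismatch { expected: String, got: String },
}

struct Holder {
    thread_id: String,
    state: HolderState,
}

struct IssueCredentialMsg {
    thread_id: String,
}

impl Holder {
    // Rejecting a mismatched message does not corrupt the holder;
    // the caller can retry with the right message.
    fn receive_credential(&mut self, msg: &IssueCredentialMsg) -> Result<(), SoftError> {
        if msg.thread_id != self.thread_id {
            return Err(SoftError::ThreadIdMismatch {
                expected: self.thread_id.clone(),
                got: msg.thread_id.clone(),
            });
        }
        self.state = HolderState::CredentialStored;
        Ok(())
    }
}

fn main() {
    let mut holder = Holder {
        thread_id: "thread-1".into(),
        state: HolderState::OfferReceived,
    };

    // Wrong thread ID: soft failure, state unchanged.
    let wrong = IssueCredentialMsg { thread_id: "thread-2".into() };
    assert!(holder.receive_credential(&wrong).is_err());
    assert_eq!(holder.state, HolderState::OfferReceived);

    // Correct message arrives later: the machine advances normally.
    let right = IssueCredentialMsg { thread_id: "thread-1".into() };
    assert!(holder.receive_credential(&right).is_ok());
    assert_eq!(holder.state, HolderState::CredentialStored);
}
```

A hard failure, by contrast, would transition the machine into a terminal state with no further transitions — which is the distinction the discussion keeps circling around.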
Yeah, the word "failure" might not be quite right for what we do here — it's more about some unfavorable terminal state; some bad, unfulfilled terminal state. So if you decline, okay, that's where the whole party ends. Same with the discussion we had earlier: a verifier asks you for credentials you don't have — there's no way to pull those out of thin air, so again, that's where the party ends. But simply receiving a malformed message doesn't necessarily mean your state machine got into some unrecoverable state. Yep. So yeah, go on. Oh, I was just going to say I agree. I noticed this "holder failed" here — I think it should perhaps be a different state, something like "rejected". Right. Yeah, but isn't that what's in the diagram? Don't they call it "failed" or something? I guess. But we were kind of saying, in our implementation guidelines, that we shouldn't bundle all negative cases into one single state, because then it's hard to distinguish what happened. If you have one failed state for everything — whether it was rejected, or you received an unexpected message and had a hard failure — then when you look at the state machine later, you don't know: did we reject it, or did it fail? Okay, so I think it would make sense to have, instead of one failed state with different sub-variants, separate states for the different negative cases. I don't know — I don't have a strong opinion on this, but I think having one type with multiple variants would be better. Because what we have now — where was it... it's definitely in there somewhere.
Yeah, because right now, as I remember, the way it is, it's bundled really hard: we have a single Finished state that encapsulates both success and failure. Yeah, that's bad. But maybe having just Failed and Success — maybe that's reasonable. I still feel like calling a declined offer "failed" is too much. Declining an offer is not really a failure — you rejected it because you don't like the values; nothing failed, you just didn't like the offer. Yeah, but the protocol failed. It's the user's decision — it was the user's decision to end it. Yeah, but that's a valid way to see it: ultimately the protocol is basically the set of steps the state machine goes through, right? That's the whole point. Following those paths — maybe there are more — if you reach some success end state, the protocol succeeded. If you don't, and you end in some failed or bad state... it's confusing sometimes. Like, wouldn't you want to distinguish? Say you're writing a mobile app and you're implementing the holder, and you want to... Yeah, but that's what I'm saying: Failed could be an enum, with variants for the reason it failed — it failed because you declined, or because whatever else happened — but not as individual types, because then you go to the other extreme, where we have twenty different types for similar things. So I can see — I agree with what you're saying: you might want to know why, so if Failed is an enum, you can see the reason from the variant. That should be information enough, but ultimately they all mean the protocol ended in a bad way. Right. Fair enough.
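The compromise the discussion lands on — one terminal Failed state carrying a reason enum, rather than one opaque Finished state or twenty separate failure states — can be sketched like this. The state and variant names here are illustrative assumptions, not the actual aries-vcx types.

```rust
// Minimal sketch of "one Failed state with reason variants": a single
// terminal failure state, with an enum capturing *why* the protocol
// ended badly. Names are illustrative, not the real aries-vcx types.

#[derive(Debug, PartialEq)]
enum FailureReason {
    // The user declined the offer: nothing "broke", but the protocol is over.
    OfferDeclined,
    // The other party reported a problem and the protocol cannot continue.
    ProblemReportReceived(String),
    // A verifier asked for credentials the holder simply does not have.
    CredentialsUnavailable,
}

#[derive(Debug, PartialEq)]
enum HolderState {
    OfferReceived,
    Finished,              // success terminal state
    Failed(FailureReason), // single failure terminal state, reason inside
}

fn describe(state: &HolderState) -> &'static str {
    match state {
        HolderState::Finished => "protocol succeeded",
        HolderState::Failed(FailureReason::OfferDeclined) => "user declined the offer",
        HolderState::Failed(_) => "protocol failed",
        _ => "in progress",
    }
}

fn main() {
    let declined = HolderState::Failed(FailureReason::OfferDeclined);
    // A controller (e.g. a mobile app) can still distinguish *why* it ended...
    assert_eq!(describe(&declined), "user declined the offer");
    // ...while all Failed variants share the same terminal meaning.
    assert!(matches!(declined, HolderState::Failed(_)));
}
```

Since there are no further transitions out of `Failed`, the variants cost nothing at the state-machine level — they exist purely so that a later inspection can tell a rejection apart from a genuine error.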
I guess, since you wouldn't have any further transitions from there, it's all... Exactly — you're not really going to do anything else with it. Yeah. I've seen the RFCs use other words, like "abandoned". Maybe that sounds better. Yeah, I'll read through the RFCs again and pick better wording and state transitions. Cool. I was also going to show — I mentioned it sort of ad hoc — a crate called mockall, which is a good mocking library. Yeah, I think I used it as well, in the DID resolver. Yeah. So I tried to incorporate mockall here, just to see what it would look like. You can sort of see: you define what your mock is — it looks pretty ugly, but you wrap it in this macro and it creates mock types. I'm explaining something you guys already know. But yeah, once you have your mock type, you can say what method call you're expecting — like getting a cred def — that I'm expecting it to have two parameters, these are the values of the two parameters I'm expecting, and then I want it to return this. Right, so that's the ledger, and then a similar thing for AnonCreds: I'm expecting this to be called once, with all of these values, and then return these values. And then this lets you unit-test some holder call, like preparing a request, and it will use this mock data, and you can make assertions that the mock data was passed around and handled correctly. It's also great for returning errors, so you can test all the edge cases and get into every corner of the code. Yeah, mockall also forces you to think in terms of traits, right, because this thing works on traits — you're mocking the trait implementation. It works with structs too. It's just that annotating a trait basically makes it create a type that implements your trait. But it works with a lot of stuff.
Yeah, I find mocking structs more difficult to work with, because of what you have to import — traits are a lot easier. And luckily, the profile, ledger, and AnonCreds traits are our interfaces to the outside world, so mocking those really lets you unit-test things without leaving the box. Yeah. So this is the test we're looking at right now — it's called "happy path", right? So how does this work? You're the holder, but you're missing the other party, so how do you test it — what do you actually do in the test, and how far can you get? So here I'm preparing a request message. This enters the "request prepared" state. And then, since it's in that state, I'm able to — because we're talking about not sending the message but just having it ready; it would be like a get-message method, but for now I'm just extracting the message from the state — check that the values of that credential request message are as expected. For example, since the AnonCreds mock returns a dummy cred request when you call the create-credential-request method, you'd expect the message to have an attachment whose value is that dummy cred request. Things like that. Yeah, I see. This must run very fast, right — there's essentially no I/O. Well, there's some, maybe. Let's see. Zero seconds. Well, in practice it will be a bit longer once we actually create credential definitions and such, making it more of an integration test — but still, it's not even going to compare. Yeah, definitely. Not to mention that right now, because of the ledger and such, a lot of things literally have to run sequentially. Once we have better mocks in place, we can actually parallelize things. But yeah, you still have that problem with the integration tests, where we use the real thing, right?
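mockall generates this kind of mock for you from a `#[automock]`-annotated trait, with `expect_*().times(1).returning(...)` setup calls. To keep the sketch dependency-free, here is a hand-rolled mock showing the same shape of test described above. All trait and method names (`AnoncredsLedgerRead`, `get_cred_def`, `prepare_request`) are illustrative assumptions, not the real aries-vcx-core API.

```rust
// Hand-rolled equivalent of a mockall-style trait mock: it asserts on the
// arguments it receives, counts its calls, and returns canned data, so a
// holder step can be unit-tested without any ledger. Names are illustrative.

use std::cell::RefCell;

trait AnoncredsLedgerRead {
    fn get_cred_def(&self, cred_def_id: &str) -> String;
}

struct MockLedger {
    expected_id: String,
    canned_cred_def: String,
    calls: RefCell<u32>,
}

impl AnoncredsLedgerRead for MockLedger {
    fn get_cred_def(&self, cred_def_id: &str) -> String {
        // Argument expectation, as in mockall's `.with(eq(...))`.
        assert_eq!(cred_def_id, self.expected_id, "unexpected cred def id");
        *self.calls.borrow_mut() += 1;
        self.canned_cred_def.clone()
    }
}

// Code under test: a holder step that talks to the ledger only via the trait.
fn prepare_request(ledger: &dyn AnoncredsLedgerRead, cred_def_id: &str) -> String {
    let cred_def = ledger.get_cred_def(cred_def_id);
    format!("request built from: {cred_def}")
}

fn main() {
    let mock = MockLedger {
        expected_id: "cred-def-1".into(),
        canned_cred_def: "dummy_cred_def".into(),
        calls: RefCell::new(0),
    };

    let request = prepare_request(&mock, "cred-def-1");

    // Assert the canned data flowed through, and the mock was called exactly once.
    assert_eq!(request, "request built from: dummy_cred_def");
    assert_eq!(*mock.calls.borrow(), 1);
}
```

Because nothing here touches a wallet or a ledger, tests like this run in effectively zero time and can be parallelized freely — which is the contrast with the sequential integration tests mentioned in the call.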
But we can limit those — have some end-to-end tests, proper integration tests where we use the real stuff, but still also have sort-of integration tests covering all of this across multiple protocols, just using mocks there too. Right. But I'm also thinking of one way we could improve: we could actually have integration tests which run in parallel. If we make sure every test uses its own DID from the issuer standpoint, there will be no conflicts — whatever is written to the ledger, or if the verkey of the issuer rotates or whatnot, it will always be isolated to that one particular test; each test would have its own issuer. We just have to improve the test infra a little to enable that. Yeah. Okay. Yeah, I think it would also be cool if aries-vcx-core had some good tests exercising the real implementations — CredX and the indy-sdk AnonCreds. Right, because you don't need the entire protocol state machines to test those cryptographic primitives. Yeah, so they'd get tested in isolation, and then it makes doing these mocked unit tests more acceptable. Yeah, plus integration tests. Right. And then stepping through state machines. Yeah, I just thought I'd show this. I don't know if I'll keep it — it might be better to have a separate PR that just introduces these mocks so they can start being used across the code as we want. Or I could just keep it in this PR. Either way is fine. I think it's fine. Yeah — the whole idea with testing was that it would be easier to refactor and improve the tests as we modify the state machines, because you can have contained tests revolving mostly around the changes that you made, or that were recently made, to these state machines and protocols. So I'm all for making better tests whenever possible.
I don't really care if it's a different PR or anything — just go for it. Cool. Okay. Well, that's what I wanted to show. So now we can go for 100% code coverage, right? Yeah. Okay, guys, I think we're reaching the end. We stretched the time a bit, but it's fine — it was pretty useful, a good discussion. Any other discussion, or anything else? One thing that comes to my mind — something we mentioned internally last time — is that aries-vcx-core is not really a great name, because it doesn't have anything to do with Aries. So we want to rename that component. Yeah, fair. Any ideas for names? Ideas are welcome, of course. Well, the easy part is bringing it up — the difficult part is coming up with one. Yeah, we were actually even suggesting that it might make sense to split it into separate crates. Because the wallet has... Yeah, from the resolver point of view, right — you only need a ledger read, and you don't need all the other dependencies for the wallet implementation and whatnot. So maybe it could actually be three different crates, one per interface: the ledger interface, the wallet interface, the AnonCreds interface. And that also solves the naming problem — naming is easier when it's not three different things in one component. Sounds good to me. One more thing — I don't know if it matters too much, but it was mentioned to me today that the Ursa crate has reached end of life. I don't know all the crates that use it, but I believe CredX does. Yeah — as far as I know, they want to pull the Ursa components that are being used into the consuming crates, so I assume that anoncreds-rs, or CredX, would probably end up containing some submodule with what was previously in Ursa, essentially. Right.
And vdr-tools, I guess, probably doesn't have much hope, since it's a fork of a fork of a fork. Yeah, I guess — but that stuff has to keep working; they can't just delete the entire Ursa repo or something. Yeah, they're probably just going to archive the repo or something. And even integrating that is going to take a lot of time — maybe not that much, but still. Yeah, I don't know. It's an issue for the entire community — all the Aries implementations are using it, right? So there are lots of motivated people to keep things working. Okay, guys, anything else on your mind? All right, my battery is dying, so it's about time to wrap up. Okay, guys, thank you for connecting. Pleasure to talk to you. Have a good rest of the week. Have a nice weekend. You too. Thank you.