Hello guys. In another minute, let's see if George comes in, or maybe some other people. Okay, maybe we can kick off the meeting. Can you see my screen?

Yeah, sure.

Okay, cool. So this is the Aries VCX community call, 15th of June 2023. Patrick is away, so I'll try my best to fill his shoes. We have our antitrust policy notice here; let's leave that on screen for a minute. Now, I have been away myself for a while last week, so I might not be completely up to date with everything, but let's try and kick off the meeting.

Probably the hottest news, so to say, would be the kickoff of the mentorship program. We have our two mentees, for the mobile wrapper and the Aries VCX mediator: Swapnil and Nyan respectively. The program just started, so they're still getting up to speed and being introduced to the project, as far as I know. And I guess that's exciting news. After the long selection we've chosen people, they accepted the mentorships, and we're starting to actually move this in some direction. That's pretty much it for the mentorship; really glad to have these people on board.

Now we can have a look at the recent work. There are a couple of PRs that have been recently merged. The first one, by Patrick and Mira, extends the profile trait a bit. I was away when this was done and I'm not up to date with all the details, but maybe you know more about why this method was required, if you'd like to provide some context? I'm not too sure myself.
I mean, I originally tried to get rid of the reliance on the global state, which was used to store and set the transaction author agreement. I removed it in such a way that it was basically set once on initialization of the ledger struct, but Patrick didn't like this pattern, and requested a setter to make it possible to set the transaction author agreement at runtime, without creating the ledger trait implementation again from scratch. Which is reasonable, I mean.

Okay, I see. Okay, cool, thank you. Yes, that's been merged.

Down the line, I guess our plan is still the same: to get rid of this profile thing. And there is also some other work that is going to facilitate this, again done by Patrick: basically not passing the profile around through all these places and tests and wherever it was being used, but rather using the more specific components of it that are actually required. This is definitely going to facilitate getting rid of the profile as a construct and just passing around wallets, anoncreds implementations, or ledger trait implementations wherever we need them. It's quite a big PR, and it was merged this week, but it's fairly simple overall. I guess it was just a lot of tedious work, but Patrick pulled through and essentially replaced the profile being passed around with just the specific parts of it that are needed. Patrick said he found it relaxing. Yeah, fair enough. Cool.
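The direction of that refactor might be sketched roughly like this. All names here (`BaseWallet`, `Profile`, `sign_message`, `DummyWallet`) are hypothetical stand-ins for illustration, not the actual Aries VCX API:

```rust
use std::sync::Arc;

// Hypothetical stand-in for one of the profile's component traits.
trait BaseWallet {
    fn name(&self) -> &str;
}

struct DummyWallet;
impl BaseWallet for DummyWallet {
    fn name(&self) -> &str { "dummy" }
}

// Before: helpers took the whole profile, even when they only needed one part.
struct Profile {
    wallet: Arc<dyn BaseWallet>,
    // ledger, anoncreds, ... omitted
}

// After: the helper declares only the component it actually needs, which is
// what makes it possible to drop the Profile construct entirely later on.
fn sign_message(wallet: &Arc<dyn BaseWallet>, msg: &[u8]) -> String {
    format!("signed {} bytes by {}", msg.len(), wallet.name())
}

fn main() {
    let profile = Profile { wallet: Arc::new(DummyWallet) };
    // Call sites pass the specific part, not the profile itself.
    let sig = sign_message(&profile.wallet, b"hello");
    assert_eq!(sig, "signed 5 bytes by dummy");
    println!("{sig}");
}
```

The tedious part is exactly what the sketch hides: every call site and test that previously took a `Profile` has to be rewritten to take the narrower component.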
Even better. Okay.

Now, this is also something I'm not entirely familiar with. I think this was something by George, some sort of fix to basically be in sync with the ACA-Py capabilities. I see there's an issue here, so let's see: the request you send appears to have an empty string as the public key's controller in the DID Doc. Okay, so I guess this has to do with the DID Doc construction; this is still using the legacy DID Doc, and we'll keep using the legacy DID Doc. So I guess this was basically a bug that slipped in, in particular because I refactored this, and George was nice enough to fix it. Essentially it looks like it's just about setting the ID first; apparently that's being used in some other places.

Fair enough. Some side effect, probably; some matter of bad implementation of the old legacy DID Doc. The order shouldn't matter, probably.

Yeah, it definitely shouldn't matter, but it does. Anyway, good catch from George, so he fixed that. It's really cool.

And something from you, Mira: the DID Doc facade for Sovrin-specific stuff. Now, this is not yet merged, I believe, but I personally approved it. We had a short discussion about the dummy or empty fields for the service for the Aries Interoperability Profile, but apart from that I approved it; it's all good. I think it still needs a rebase, but from my side, I guess we can merge it. Do you want to talk a bit more about this, what it does, and what you were targeting with it?
Yeah. When I was integrating the new DID Doc into the rest of the code base, I noticed that some usage patterns are quite frequent. So with this wrapper, I just wanted to make the crate easier to use, specifically for the Sovrin-style DID Docs, which have various service types with various extra fields. We frequently repeat certain usage patterns, like retrieving the first authentication key from a service and so on. So with this wrapper I wanted to make that easier.

Yeah, okay. A question just popped into my mind right now. Maybe it would be worth discussing, for the entire community, the fact that we will have to, maybe not necessarily maintain, but keep using the legacy DID Doc, because switching to the new one will be very tricky in the places where the old one is used, and that's particularly because of the unqualified DIDs. Now, I believe the legacy DID Doc is basically for the Aries Interoperability Profile, right? Like, that's what it technically represented?

Well, the legacy DID Doc technically represented an unfinished version of the DID Doc, or DID, spec, which has changed since then, but many frameworks still use it. For example, the public key field in the legacy DID Doc does not exist in the new DID Doc, some fields are not required in the legacy DID Doc whereas they are required in the new one, and there are a few more minor incompatibilities between the W3C-compliant DID Doc and the legacy DID Doc that we are using in connections, invitations, and so on right now.
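The facade idea described above might look roughly like this. The types here are hypothetical, heavily simplified stand-ins (the real code builds on a generic DID Document with Sovrin-specific service fields); only the shape of the helper is the point:

```rust
// Hypothetical, heavily simplified DID Document types for illustration.
struct Service {
    service_type: String,
    recipient_keys: Vec<String>,
}

struct SovrinDidDocFacade {
    services: Vec<Service>,
}

impl SovrinDidDocFacade {
    // A frequent usage pattern mentioned above: pull the first key
    // out of the first service that carries any keys.
    fn first_recipient_key(&self) -> Option<&str> {
        self.services
            .iter()
            .flat_map(|s| s.recipient_keys.iter())
            .map(String::as_str)
            .next()
    }
}

fn main() {
    let ddo = SovrinDidDocFacade {
        services: vec![Service {
            service_type: "did-communication".into(),
            recipient_keys: vec!["verkey-1".into(), "verkey-2".into()],
        }],
    };
    assert_eq!(ddo.first_recipient_key(), Some("verkey-1"));
    println!("first key: {:?}", ddo.first_recipient_key());
}
```

The value of the facade is that callers stop re-implementing this kind of traversal at every call site.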
Most of the frameworks still support this old DID Doc format in connections, so that's why we are still using it. And as it relates to qualified versus unqualified DIDs: that's another thing, we are using unqualified DIDs in the DID Docs everywhere right now. Qualified DIDs are another big can of worms, but as it relates to the DID Docs, no, this is the reason. And I guess there might be a way to somehow make the new DID Doc backwards compatible: add this public key field, for example, or handle it, and make the fields which should be required not required for this backwards-compatibility use case. But the question is whether it is worth it.

Yeah, I don't think so. And I mean, you were the one that worked on this for quite some time, and from what I gathered from your research, you essentially said that this is what pretty much all the other implementations have done. They're still carrying around the legacy implementation, because maintaining backwards compatibility in the new one would be too much of a hassle. So I guess the dream is to drop the old one at some point and then just keep using the new one with the new format. But with the differences between them, and especially the differences in terms of required fields and such, that would just make it very weird to use a singular uniform implementation.

One thing I have not tested is whether, if we send for example connection requests in the new format, AFJ or ACA-Py accepts it. It's quite possible that it does, and I have not tested that. If it did, then perhaps by just switching to the new format we might remain fully interoperable. So we might try that, except with older versions of Aries VCX.
Yes, that's maybe too much to ask. Yeah, fair enough. What about this Aries Interoperability Profile kind of DID Doc, is it being used somewhere?

So this depends on how the service is written on the ledger: if the service written on the ledger contains a certain type specified in it, it should resolve to this AIP 1.0 DID Doc. I think that one should just contain no keys, just the service endpoint.

Okay, I got it. For a split second there I was wondering whether this and the legacy DID Doc might be on some sort of similar page, but they're not. Okay, cool.

All right. So that's mainly the recently done work. Like we said, the DID Doc facade is going to get merged fairly soon.

As for the work in progress: something from my side would be the credential migrator, or rather the AnonCreds-specific-stuff migrator. This is still in progress, or rather, the implementation seems to be done, but I'm not really willing to say it's entirely done until all the testing is in place. Now, testing this will be interesting. Right now my plan, for at least the first round of tests, is to create some dummy items, try to migrate those to credx, and then ensure that the data gets deserialized correctly into the newer credx types. To be fairly honest, the entire migration, again, before too much testing, was pretty straightforward, much more straightforward than I anticipated. And I'm quite skeptical, because it feels like it was a bit too easy, especially because this was supposed to be the trickier part.
I wish George was here, because he had some more insight. He worked a bit more closely with credx, and he did say that he noticed some quite fundamental differences between the vdr-tools implementation of AnonCreds and the credx one. However, the types seem pretty much exactly the same, and the migration mostly just revolves around changing the record categories and types of the credentials. And then the plan is to go from credx to anoncreds-rs, and that's going to be even more straightforward. Again, I didn't really look at that yet, but from our analysis from some time ago, we noticed that these are really, really similar. So again, that should be pretty straightforward.

What do you think about the other rounds of testing? I really want to do the dummy-items kind of thing. I'm not even necessarily interested in whether they are functionally correct. Take a credential definition: I just want to fill it with dummy data and see that, structure-wise, it works. If you create a live vdr-tools credential definition, store it in the wallet, migrate it, and then try to pull it out as a credx credential definition, does that work from a deserialization standpoint? I guess that should be a good start. But I'm wondering about testing this in a more real-life kind of scenario, because generally all the tests that we have are kind of self-contained, right? So if you want to issue some credential, you might be generating a schema, a revocation registry, a credential definition, you issue a credential, and then I believe stuff gets cleaned up, doesn't it? Or even if it doesn't get cleaned up, when you want to run some other test, it's self-contained.
So it's going to use its own types and definitions that it creates within the test. So you cannot really reuse the migrated items, even if you migrate the old ones; it seems kind of tricky. I'm really wondering how to go about that. What do you think?

Sorry, I didn't catch why it would be cumbersome to reuse the items created before the migration.

I mean, you can migrate existing items, but as far as I know the tests are pretty self-contained, right? So you can't do the migration in the middle of the test.

No, I mean, you can.

But I guess that would be the only option: have a test run midway, migrate, maybe change the profile to use credx or anoncreds, and then let it continue from there.

Yeah, I guess that would be one way to go about it. Make sure that the created credential definitions are still usable, that you can still create revocation registries and so on.

Fair enough. Yeah, okay: issue some revocable credentials, try to revoke them, verify them, and so on. Okay. So I guess we need to identify these, let's call them breakpoints, in the tests of interest, do the migration there, then change the profile and let it run as usual. Yeah, okay, I guess that's really the only way to go about it, at least from a self-contained-testing kind of way.

All right, cool. So that's about the migration; we'll see how that goes. Right now it doesn't seem to be too much of a hassle; hopefully it's going to stay that way. Now, I know that did:peer is something you wanted to work on, and you elaborated a bit yesterday.
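The breakpoint idea above could be sketched roughly like this. Everything here is a hypothetical stand-in (`issue_credential`, `migrate_wallet`, the string-tagged records); a real test would drive actual profiles and wallet records, but the control flow is the point:

```rust
// All names are hypothetical; real tests would drive actual profiles.
fn issue_credential(backend: &str) -> String {
    format!("cred-from-{backend}")
}

// Stand-in for the wallet migration: rewrite record tags from the old
// backend's naming to the new one's.
fn migrate_wallet(records: &mut Vec<String>) {
    for record in records.iter_mut() {
        *record = record.replace("vdrtools", "credx");
    }
}

fn main() {
    // First half of the test: the old backend creates the primitives.
    let mut wallet = vec!["vdrtools:creddef:1".to_string()];
    let _old_cred = issue_credential("vdrtools");

    // Breakpoint: migrate the wallet, swap the profile, and continue.
    migrate_wallet(&mut wallet);
    assert_eq!(wallet[0], "credx:creddef:1");

    // Second half: the migrated items must still be usable by the new backend.
    let new_cred = issue_credential("credx");
    assert_eq!(new_cred, "cred-from-credx");
    println!("breakpoint migration flow ok");
}
```

The design choice is that one test exercises both halves of the lifecycle across the migration, rather than each half in isolation.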
I believe it's quite an extensive effort. Anything to add?

Yeah. So I was thinking a little bit, after the attempts at integration, about the fact that we are not using qualified DIDs, either as an issuer or in the connections. We are basically using unqualified DIDs across the code base, and we are not typing our DIDs basically at all. So I created an issue for this, and Stephen Curran picked it up and pointed me to discussions which are going on right now in the ACA-Py community; they were also using unqualified DIDs.

Let's back up a little bit; there are multiple parts to this. You can use qualified DIDs when you are talking about issuer DIDs. When in Aries VCX you create the issuer configuration, you can create either a qualified or an unqualified DID there; right now we are always creating unqualified ones. Trying to create a qualified DID there changes the behavior of vdr-tools and credx a little bit, so that the credential primitive IDs which they generate have a little bit different format. This of course is not that big of a deal if you are creating a new issuer, starting it from scratch and issuing credentials for the first time. But if you have an application which is built on top of Aries VCX and has been using a certain issuer wallet for some time, this might create some headache for you, let's say. It might be difficult, but possible, to somehow migrate your wallet locally, but still, there are these IDs which are already written on the ledger. So it would require reissuing credential definitions, and that's a no-go.
So for this reason we would probably keep using unqualified DIDs for the issuer, and that's not that big of a deal; that's what we will do for the sake of saving ourselves something.

Then there is the matter of DIDs which are sent in, for example, connection invitations or connection requests. There we are also using unqualified DIDs, and that's what Stephen was talking about when he pointed me to the did:peer specification and the discussions in ACA-Py. ACA-Py basically decided to transition directly to did:peer numalgo 2, which is a specific kind of did:peer, and also did:peer numalgo 3, which is a little bit newer in terms of when it was added to the specification. That's what they are planning to prefer and default to going forward in the DID Exchange protocol. They were also talking about remaining backwards compatible, although, having seen the community call recording, there are voices in the community which are kind of speaking against this backwards compatibility. ACA-Py has always tried to remain conscious of the rest of the community and to stay backwards compatible as much as possible, but it seems that they lately decided to change this stance a little bit and just give a notice period for which they will remain backwards compatible; from then on they will only accept did:peer DIDs. That's what I gathered from the community call. This means for us that if we are going to implement any protocol, it should definitely use did:peer 2 and 3. AFJ supports it already.
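For context on the numalgos mentioned above: a did:peer DID encodes its generation method ("numalgo") as the first character after the method prefix. A small classifier sketch, with shortened, purely illustrative example strings (not real DIDs):

```rust
// did:peer DIDs encode their generation method ("numalgo") as the first
// character after the "did:peer:" prefix.
fn peer_did_numalgo(did: &str) -> Option<u8> {
    let rest = did.strip_prefix("did:peer:")?;
    match rest.chars().next()? {
        '0' => Some(0), // inception key only, analogous to did:key
        '1' => Some(1), // hash of an initial DID Doc (ordering underspecified)
        '2' => Some(2), // keys and service encoded directly in the DID
        '3' => Some(3), // hash over a numalgo-2 DID
        _ => None,
    }
}

fn main() {
    // Shortened, hypothetical illustrations.
    assert_eq!(peer_did_numalgo("did:peer:2.Ez6LSabc.SeyJ0Ijoi"), Some(2));
    assert_eq!(peer_did_numalgo("did:peer:0z6Mkxyz"), Some(0));
    assert_eq!(peer_did_numalgo("did:key:z6Mkxyz"), None);
    println!("numalgo detection ok");
}
```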
They are a little bit of a pioneer in this regard. They support all the did:peer numalgos except for peer 3, which is very new. Numalgo 0 is basically the old did:key. Numalgo 1 is basically not fully specified in the spec: it involves hashing the DID Doc, which needs to be specified further, like the order of the fields and so on, because the hash has to be deterministic, otherwise it's not going to match. So they basically said: this is not specified in the spec, let's not focus on it, and they focused already on did:peer 2, which is much better specified. So that's what I decided I will work on next, because there seems to be this community-wide push for using qualified DIDs, and it's quite important to keep up with the rest of the community. And that's probably regardless of whether we go the DID Exchange route; whether we use it in the DID Exchange protocol or not, we should probably use it eventually in the connection protocol as well.

Okay. All right, cool. Thanks for the info on that, very strong insight.

And maybe as our last point for the in-progress work, there is again something from Patrick, essentially related to refactoring some tests. This seems again like quite a big PR, but I guess there's really no two ways around it, since the testing in place pretty much affects everything. I see that this kind of continues with not passing the profile around and restricting to smaller pieces. And I believe the description mentioned something about getting rid of some of the setup objects; apparently something about creating a Faber agent for testing, similar to what we have for creating Alice. Well, Patrick has been out the entire week, so again, maybe you know more? I don't know.
Unfortunately not.

Well, I guess he'll be able to fill us in on this next week at the latest. But yeah, there's definitely a lot of work that we have to do on testing, especially with the upcoming efforts on AnonCreds, the state machines, and so on. All these things start to intertwine. Though ultimately, I guess it's not necessarily about them being intertwined, but rather that we started working on some lower-level blocks, and because they are lower level, they affect everything on top of them, right? That's why things get a bit bigger now, but in the end I believe it's going to be worth it.

You already mentioned the DID Exchange protocol for the upcoming plans. I don't know if we necessarily want to talk about that now; maybe we can chat a bit about our ongoing discussion about state machines. I wish George was here too; I was really curious about his comments. So, I'm really wondering. Right now, especially after talking with you, I'm more settled, I guess more content with the decision we made. Seeing that you were kind of content with the typestate pattern also made me more at ease; I guess it was some sort of validation. But I have been wondering whether the typestate pattern for all these state machines is really the way to go. And I guess the main driving point for the second-guessing has been, on one hand, the lack of work that we've done on state machines (though it's not like any of us has been idle; we've all been busy with all kinds of things), and on the other hand, the work that has been put into the state machines. It's really only been George, with the holder, and he seemed to hit a series of hiccups with that, and I couldn't help but wonder whether maybe it's simply not the best way to go about it.
Maybe some more conventional approach would have made it easier. Right now I'm really just reiterating what we already discussed, but: the typestate pattern kind of makes things safer and more straightforward in terms of how the transitions go. And I guess ultimately what you want from a state machine in our context here, the Aries context, is this: you get some input, some message, and you want to pass that to a state machine. Generally, since you provide endpoints that you communicate with other parties through, you have identifiers in place, so you know exactly what state machines you have allocated for a specific other party, right? So when you get a message, say a presentation request or something like that, you know the exact instance of the state machine that you want to pass it to. Now, the general outcomes are: if the state machine is in the right state, then you're going to continue and try to process the message, and if there's no error, you're going to transition to a new state. But if it's not in the right state, then you basically consider the message, or the input, invalid or unexpected, however you want to call it. So ultimately, with the transitions being so few and far between, at least conceptually, and with this kind of separation (you get some input; if it's right, you try to transition; if it's not, then tough luck), I guess the typestate pattern really fits because of that.
I know fits because of that But Then it boils down to having different types for these states where these yeah, the states of the state machine Kind of makes the need for for these wrappers like we have for the connection and the generic connection wrapper Which is an enum that encapsulates all the possible states and It that that's what kind of makes it feel a bit redundant like if you ever have to work with a state machine without knowing its state You cannot really just do that with With the the different types that a state machine can have because you don't really know the type or you don't necessarily Care about the type so that's why this wrapper is there to basically try it It can deserialize for many of these states and it has some common stuff that you can call on it I'm I'm just wondering now whether These common operations are are a common practice like I know we have some stuff in the VCX You also pointed out we have some stuff in the rust agent implementation which pretty much just runs in memory and The genetic like the generic connection enum was pretty much Where essentially I made it to kind of fit these purposes That we had then And it's probably gonna still stay around to again fill in these gaps But I'm wondering whether do we want to like from from a user if you're if you're a consumer of various VCX Do you want to use? Some sort of one-size-fits-all state machine like the the generic wrapper and just cold stuff on it and handle it or Do you want to have these individual types? You know exactly what type you should be in based on the input you receive if you're not in you don't get the right type because deserialization fails then Then again tough luck invalid input or unexpected input or something like that because essentially like the benefit of the state of the The the type state pattern is you're not going to get into an invalid state You if the state machine is not in the expected state. That's because you'd never got to be there. 
And maybe another point to consider: in libvcx we have some object caching, unlike in the Rust agent, where it all runs in memory, because generally you want to have some sort of persistent storage. Message sending in Aries is not necessarily time constrained, at least not generally. You can receive a message today, transition, and receive another message tomorrow, say in a proof presentation, so exchanges might span longer periods of time. So you generally want persistent storage: store data there, retrieve the state machine, try to transition, put it back, stuff like that. But there's also the question of caching, especially in memory. Generally that helps with not having to look up your persistent storage every single time, especially if operations happen quickly, which by all means they generally do and probably will. But having a cache like that generally means that you might have to have a one-size-fits-all kind of wrapper for the state machines, or have some sort of smarter cache that stores all these individual types separately and is then able to look them up somehow, which can also be done. I'm just brainstorming right now. I'm wondering what you think about all this, and I'm really wondering what the others think about all this, but it's just us now.

Yeah, so, as I said, I don't have much experience with interacting with or using the new typestate connection. But for what it's worth, the usage pattern you talked about, where you get some input and, based on the input, you usually know what state you should expect to be in, makes quite a good argument for the typestate pattern, I think. And I think you actually said it yourself.
So that's an argument for it. And then you were talking about the generic connection enum, basically saying that it is redundant, but I don't think so. We do use it: we use it in libvcx, you use it in the Rust agent, and like you said yourself, there are some situations where you don't know what type you should expect. For these situations it may be useful to give the users the option to use something like the generic connection. So I don't necessarily see an issue with having it in Aries VCX.

It kind of combines the two approaches, you know? Because if you have a state machine built around enums, then you basically have all the possible methods on that particular enum, and based on the enum variant you do stuff, you check things at runtime, you transition or you don't, and so on. So that's kind of a one-size-fits-all. But like we've been saying, with the way these state machines work and the way the protocols are actually designed, you kind of know, when you get a message, where you should be; you can determine the state you should be in and basically validate whether you're in that state or not. So, again, I'm really just wondering whether there are a lot of cases where we need, or want, to work with a state machine without knowing its state. I believe there is some stuff in libvcx where you might want to try to get a DID Doc from the other party, but that obviously depends on where exactly you are in the connection. So you might get a DID Doc, or you might not.
Maybe you have it in four of the five states as an invitee, but you don't have it in one state. And even if you had it in all five states, you still don't know what state you're in, so you cannot really just get it out. And maybe this is a broader question. I'm most familiar with the connection protocol, because that's what I worked on, but thinking of the other protocols, like credential issuance or even proof presentation: do you generally want to do state-agnostic operations, and if you do, are there a lot of them?

Because there are no state-agnostic operations, right? Or, what would be an example?

I don't know. For the connection, like I said, maybe a state-agnostic operation would be to pull out the counterparty's DID Doc.

If there is one. There might not be one. Just the fact that it depends on what state you are in kind of means that it isn't state-agnostic.

Yeah, but for instance, if you're in the completed state or in the responded state, you will have a DID Doc. But if you want to get it out of the state machine, and you pull the state machine from the database, you don't know what type to deserialize to. Do you deserialize to responded, or to completed, just to provide that DID Doc, right?
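That retrieval problem is what the enum wrapper solves; a minimal sketch, with hypothetical simplified state types rather than the real connection states:

```rust
// Sketch of the generic wrapper idea: an enum over the typestate variants,
// so a connection of unknown state can be pulled from storage and queried
// for the few state-agnostic things, like the counterparty DID Doc (if any).
struct Invited;
struct Responded { their_did_doc: String }
struct Completed { their_did_doc: String }

enum GenericConnection {
    Invited(Invited),
    Responded(Responded),
    Completed(Completed),
}

impl GenericConnection {
    // Only some states actually carry the counterparty DID Doc.
    fn their_did_doc(&self) -> Option<&str> {
        match self {
            GenericConnection::Responded(s) => Some(&s.their_did_doc),
            GenericConnection::Completed(s) => Some(&s.their_did_doc),
            GenericConnection::Invited(_) => None,
        }
    }
}

fn main() {
    let done = GenericConnection::Completed(Completed { their_did_doc: "doc".into() });
    assert_eq!(done.their_did_doc(), Some("doc"));
    let fresh = GenericConnection::Invited(Invited);
    assert!(fresh.their_did_doc().is_none());
    println!("generic wrapper ok");
}
```

Deserializing from storage would target `GenericConnection` rather than guessing a concrete state type, at the cost of the `Option` at every state-dependent accessor.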
So that's kind of the purpose that the generic connection enum serves. But it seems like a limited set of things that we use it for, and I guess that's technically fine if it stays like that. But yeah, I don't know. I mean, the two of us have kind of exhausted this a bit, and there probably aren't new ideas. I basically just wanted to mention this on the call, on the recording; maybe Patrick or George will listen in, or maybe some other people. Just mentioning it for posterity, but maybe it would be worth discussing this some more with the other guys. Okay.

All right. So we talked a bit about this, and you did talk about the qualified DIDs as well. Now, I believe the decision about qualified DIDs, maybe not a decision but the idea, is, in the connection protocol, to not use qualified DIDs: keep using unqualified DIDs, keep using the legacy DID Doc, right?

Yeah, and use qualified DIDs from now on elsewhere.

There is the CLI demo from George that's in progress. It's basically, I believe, similar to existing examples. I'm a bit familiar with the ACA-Py one; I didn't know there's one in AFJ. But I guess it's pretty much for showcasing the roles of agents in Aries through a CLI app in Rust. It's pretty cool.
I mean, this is actually just an issue so far.

Yeah, but I guess it's a good idea, if George wants to pick it up. He seems to have put a lot of detail into the issue, so it doesn't sound far-fetched that he might start working on this. And there's also the test harness update, which I guess is kind of overdue.

Yeah, I think George mentioned on the last community call that he would like to start working on the update a little bit. I don't know if he did start it, or whether he did some investigation, and we don't have him here for an update.

Okay, cool. And as for the upcoming priorities, especially what we've been discussing so far: splitting the Aries primitives (not the ledger primitives), having separate parts for ledger, anoncreds, and wallet. Now, the anoncreds part depends on the migration and the anoncreds-rs implementation. I know you've done a lot of work on the ledger part, also splitting the trait into multiple ones. I don't know exactly if we have any plans for some sort of wallet migration to Askar, but as it's pointed out here, it does kind of rely on the anoncreds-rs thing being done, because I guess it kind of defeats the purpose otherwise. Well, maybe they're not that related; you could technically use any wallet. But ultimately, if we want to migrate to the newer stuff, we should do it iteratively, I guess. We again talked about splitting anoncreds-rs; I know this has been brought up on a number of calls before, splitting the trait into role-specific stuff, to again provide more bits and pieces wherever they are needed. And the typestate pattern is, I guess, an ongoing discussion.
It's definitely not a bad thing, and I'm also surprised that I'm kind of second-guessing it. But like I told you in chat, if by any means we realize at some point that we took the wrong approach, let's switch then. I'd rather have it now; the sooner the better, kind of thing. I don't know. Yeah.

Yeah, maybe let's talk about this a bit. I've seen that this is basically just some vulnerabilities being reported on Discord, I believe by some bots, and in the test harness for one.

Yeah, I guess this should get partially or entirely fixed. From what I've seen, the failures brought up were about the ouroboros crate, which was not actually used, and you mentioned something about the Cargo.lock file that was pushed not necessarily representing reality, so maybe that's what it looked at. The critical vulnerability is in the `failure` crate, which we actually do use; it's used in vdr-tools. So yeah, it shouldn't be there any more, or at least not for long.

Yeah. Essentially, as part of the credential migration work, I had to tweak vdr-tools to accommodate everything, and I also modified some of the error handling in there to not depend on `failure`, and to also reuse the errors from indy, from vdr-tools, because we kind of depend on them for the conversion, right?
At least the wallet errors, because that's what we're interested in. And when the migration is done, the plan is to drop a lot of the vdr-tools content: pretty much keep the wallet part and drop everything else. So if we don't get rid of this implicitly, then I can put in some work on getting rid of it, no problem, but I think we should be able to get rid of it.

So, this is something I'm not entirely sure what it refers to myself, but Stephen Curran invited us to a conversation around mediators next week, on the Aries working group meeting. Yeah, they invited us to the meeting where they should discuss the creation of a mobile-wallet-friendly, scalable mediator, which kind of relates to our internship projects.

Perhaps, yeah.

Yeah, the one that Nyan is on. Cool. Okay. Well, we pretty much exhausted the entire hour, so that's kind of cool, even if it was just the two of us. Any other thing you want to add?

Nothing from my side.

Okay, cool. Well, then thank you for tuning in and entertaining me here in all the chats; a very cool discussion. Thank you to anyone that might be tuning in later. That was it for the call. See you next week.

Thank you very much.