Hey, Patrick. Hey, good morning. So George is not joining today, but let's give it a minute or two to see if anybody else is going to join, and then we can start. Okay. All right, well, let's get into it. So, I'm sharing my screen. Can you see it? Yep. Right, so welcome to the March 16th, 2023 AriesVCX community call. It's just the two of us from Absa today, so I'm not sure the antitrust policy notice is needed; there's no competition involved between the two of us. But nevertheless, just in case, I'll still follow the standard procedure. So this is the antitrust policy notice for Hyperledger, and I'll keep it displayed for a short moment. All right, so let's get into the agenda. I flipped it around today: instead of leaving the discussion points for the end, I put them at the beginning. That way the interesting announcements and discussions can happen early on, when people join, while the overview of the working process and the specific tasks and implementation details, which might be more mundane to some attendees, are left for the second half. So to kick this off, I would first like to announce the mentorship program we applied for. It's a program by Hyperledger that has been going on for a couple of years, and we have submitted an application for AriesVCX. There are two projects for AriesVCX, one being the UniFFI mobile wrapper, which will be the libvcx substitute. We are looking for a mentee who would be willing to work on this task. All the details are included here. I know there's at least one person already interested in working with us on these tasks. The other task we have submitted for review is an implementation of an Aries mediator built on top of AriesVCX.
It'll be useful for the entire community, and at the same time it'll be an interesting application of AriesVCX; it may drive us to improve some APIs by seeing how they're consumed from an actual application. So those are our two tasks. They're now pending review by the Hyperledger technical committee, I believe, which will decide which of these projects get accepted into the program. So yeah, we'll see; if I remember right, that should take until the end of this month or so. Next up, yesterday I created a bunch of good-first-issue type tasks on GitHub. They're tagged with the good first issue tag, and they're very simple. The idea is that for people who would like to contribute, it can be very helpful to have something really laid out and well described for them. Even though the issues are simple, we should make it really easy for newcomers to follow the instructions. I think what I've done here could still be expanded further, but these are very simple issues, so this could help. And I would like to encourage anyone to create more of these issues. I think it's important to make them really specific and concrete, so that new people don't have to understand the entire context of the project and all of the refactoring going on, and don't necessarily have to be experts in Rust. So I think this can be a useful initiative for getting more people, even complete outsiders, into the project. We did something similar in October for Hacktoberfest, and we had one or two people who actually sent a PR, but at that time we didn't have these good first issues written out that specifically, so maybe it wasn't easy to begin with.
So yeah, I thought we should come up with some sort of guidelines for these good first issues. The guideline I was trying to follow here is: first, give a brief description of what should be done, then describe the task in a few points, and then also include what the person implementing it will learn, so they know what to expect and what areas of Rust they'll explore. In this particular task, they'll have to deal with traits a bit, plus mocking and testing. And there are other tasks which exercise different areas or require a different type of Rust skill. Well, if there are no further comments, I'll move on to the next point. So that's the more mundane and standard part of the meeting: the overview of the work recently done. We'll start from the bottom of the list this time around. I saw in the morning that George has pushed new updates to his PR about making VDR tools optional, using a feature flag to essentially opt out of VDR tools and instead use the smaller library components as a dependency. This is still in draft. I'm not sure what's missing here, but it's passing, so we'll need an update from George on whether this is ready or if there's something yet to be adjusted. The next item, which has been mostly done but is still pending review, is work done by me. This relates to changes to the proof verifier API, in particular the methods for figuring out the status of the presentation, revocation, and the Aries state. We previously had three methods, and one of them was a bit convoluted and didn't really add any informational value. The function I'm referring to was called presentation status, or maybe get presentation status in some setting. This was removed, essentially. And, I guess it depends on how you look at it; the implementation was deleted.
The way I proceeded is: I deleted this function, then I took the function get revocation status and essentially renamed it to get presentation status. So you could argue that the presentation status function was not deleted but largely changed, and it now reflects what was originally called get revocation status. So it's mixed up, but the end result I basically described at the bottom. The end result of this PR is that there are two functions. There's get state, which simply returns the Aries state, the state of the protocol's state machine itself, and the second function, get verification status, which can have three values: the presentation was either valid, or it was invalid, or it's unavailable because we perhaps didn't receive it or the presentation was not verified. So these three results are possible. Originally there was one more function which essentially combined the two together, and it didn't really make sense, so this was simplified. But it is technically a breaking change. The implementation of what was called the presentation status function has changed, as have some of the internals of the verifier state machine. For example, we previously stored a field called revocation status in the final state, the finished state, which has now been renamed to verification status. The values this field held were either revoked or non-revoked, and it was actually an option, so it could also be none. Now, instead, this renamed field has a different set of values: either valid, invalid, or unavailable. However, it's done in a fashion that keeps this PR backwards compatible, so you are still able to deserialize the old state machines with these changes. When you perform serialization, though, it will now be in the new format. Support for reading the old serialization format, the behavior from before this PR, will eventually be dropped in release 0.54.
So what you're expected to do, if you are using verifier proof state machines, is that with release 0.53 you should essentially re-serialize all of your state machines, so they'll be saved in the new format. And therefore in 0.54, when this backwards-compatible deserialization support is dropped, you will not run into issues. So you need to re-serialize your verifier proof state machines. It's quite a bit, but I tried to describe all of the details that have changed really well in this description. Does it make sense? It does make sense, but I'm wondering a bit. Sorry, I was speaking too loudly. Okay. Yeah, so it does make sense, but I was wondering: this implies some refactoring and adjusting stuff and all that. What about going directly, because the plan is to ultimately get this implemented in the type state pattern. So why not just go directly towards that? Yeah, well, basically, this is kind of a preparation for it. I ran into this because of some internal implementation outside of AriesVCX. While working on that, I realized that the API didn't quite make sense with the three functions we had before. And I saw that the way it was convoluted would actually make it difficult for whoever is going to be migrating this to understand how to migrate it; why were there three functions that didn't really make sense? And obviously this is a smaller effort than if I did the entire rework into the state pattern. So I iteratively wanted to update this to make sure that the stuff we have now makes sense, and that it would be straightforward to then apply the state pattern refactoring. Then there would be fewer functional changes and it would be more of a structural pattern remake, I guess. All right. Hmm. Okay.
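As a rough illustration of the simplified API described above, the two remaining functions could be sketched like this. The type names and state variants here are approximations of what was said on the call, not the exact aries-vcx identifiers:

```rust
// Hypothetical sketch of the simplified verifier API described above.
// Names and variants are illustrative, not the exact aries-vcx types.

/// The Aries protocol state of the verifier state machine itself.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum VerifierState {
    Initial,
    PresentationRequestSent,
    Finished,
}

/// The outcome of verifying a received presentation.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum VerificationStatus {
    Valid,
    Invalid,
    /// No presentation was received, or it was never verified.
    Unavailable,
}

pub struct Verifier {
    pub state: VerifierState,
    pub verification_status: VerificationStatus,
}

impl Verifier {
    /// Returns the protocol-level state of the state machine.
    pub fn get_state(&self) -> VerifierState {
        self.state
    }

    /// Returns whether the presentation verified as valid, invalid,
    /// or is unavailable (e.g. never received or never verified).
    pub fn get_verification_status(&self) -> VerificationStatus {
        self.verification_status
    }
}
```

The point of the split is that protocol progress and verification outcome are orthogonal questions, so combining them in a third function added no information.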
And actually, yeah, I put a discussion point on this, following up on these changes, as I have... I actually saw this in the beginning, and it's one of the reasons why I asked: stuff like getting the state would no longer necessarily be an operation that's done. It might exist on something that aggregates all the states of the state machine, but with an actual type state pattern you no longer have a runtime check of the state, because you know the state you're in based on the type that you have. So that's pretty much why I asked; it looks like this poses some questions that would need to be resolved, and even if we resolve them, later on they're not really going to matter anymore. Right. Well, the changes that have been done, they don't really... this is something a bit else, just additional questions that were raised while doing this, and also based on the discussion we had before in George's PR. And yeah, you're right, the get state function I mentioned here will not exist anymore, but I guess it still kind of applies, maybe a bit less. Well, I guess it's more of an AriesVCX question in the end, so maybe it's not worth it to discuss this part. However, this part is still relevant. So, right now we have states; for example, in the credential issuer we have a credential-sent state and then we have a finished state, and we don't have a failed state, right? So first of all, the only thing that differs between the credential-sent state and the finished state, or rather the way you transition from credential-sent to finished, is that you send the credential, then you are in the sent state, and when you receive an acknowledgement message from the other party, you go to the finished state.
And the question this begs is: is it even worth having these as two separate states? Would it make sense to just have one, I don't know, credential-sent state or finished state, and have an acknowledged flag in it, or does it make sense to actually separate this into two things? I'm not sure; it's kind of an open question. If you look at the RFC and how they designed it there, well, it doesn't really talk about states; it talks more about the actions. But I guess it really just boils down to that. So, once you send the credential, can you take any sort of action? Once you send the credential, you can just wait for your acknowledgement. I mean, yeah, but I kind of see them as two separate states, because that's what the state machine is ultimately about: tracking the state of different things. Even if nothing internal has to happen and it depends on an external event, it keeps track of where you're at in the flow. Because otherwise you could practically just emulate the state machine with a big structure with a lot of fields and some flags on them, and do a lot of stuff like that. But the overall idea is basically separation of concerns. So I kind of see them as different states, even if they're not necessarily different structurally. Yeah, yeah, right. I guess that makes sense. But then the second question I'm curious about your opinion on is the finished state versus the failed state. And I'm not sure if our approach is actually consistent across the board. Let me check that. The prover states... finished. I guess we are consistent. While you're looking for that, I just have one more thing to say, sorry.
Maybe from a conceptual point of view, the difference between a credential-sent state and a finished state is that, for instance, when you send the credential and you're waiting for it to be acknowledged, you still have that pending action bound to happen. So maybe, I don't know, let's say you hold this state machine in a database. Knowing that something is expected to happen on this particular piece of data, you might store it differently compared to a state machine that's in a finished state, where nothing else is going to happen later on. A finished credential exchange or proof request can basically just be moved somewhere else for auditing or logging purposes or stuff like that; it can technically be archived, since you're not going to be using it anymore. So I guess that difference in behavior might make it clearer why a separate state is more representative. I think that makes sense, to distinguish them. Since it's still at the protocol level and the acknowledgement is expected, I guess it deserves at least a separate state. Right. But then the second question is: how do we treat the success state versus the failed state? Basically both the success and the failed state are final states, right? Because once you've finished successfully, or you've failed because you received a problem report or failed for some other reason, you're obviously not going to move anywhere else. And in the RFCs they typically have one final state, I guess to keep the diagram simple. I mean, it makes sense; it's an end state. But then, if we separate the sent state and the finished, successful state this way, wouldn't it also make sense to split the finished state into, you know, finished-success and maybe finished-failure?
Because the way we do it right now, on all of the state machines, I guess, except for the connection one, is that we have a status field, and the status has a number of different variants. Yeah, my opinion on this is that we can basically design the state machine with a generic type, and when it has the failed state passed in as the generic, we can implement Error on it and stuff like that. Maybe not, I don't even know if that would be necessary really, but I guess ultimately we could model the state machine in a way such that when you perform some operation, you're either going to get the next state, or the success state or whatever, or the failed state, in a Result. And the failed state would basically be treated as the error generic parameter of the Result. Right, so then you would suggest, you agree, to separate them out, to have a finished-success. Yeah, definitely. And a finished-failure. Again, I think for the same reason it makes sense, to be able to treat them separately from a type system point of view. Yeah, obviously just because they share the property, let's call it that, that they cannot move further, doesn't mean they have to be one state like they'd read on a diagram. Yep. Yeah, I agree. So I guess going forward, once we start doing the state pattern refactoring, that's one of the changes we should do, or can do. All right, I think I had a few more notes here. Oh, okay. So the next item is intermediate states. I believe this is something we already agreed is okay to do, fine to do, right to do. Again, I was reviewing the state machines we have a bit, the issuance and the presentation, and one of the things I ran into is the credential build step. We have this transition, send credential, but it's not just sending the credential: it's first creating the credential, then sending it.
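A minimal sketch of the idea just described, with hypothetical type names: the failed terminal state becomes the `Err` variant of a transition's `Result`, so the type system forces callers to handle the two outcomes separately:

```rust
// Hypothetical sketch only -- not actual aries-vcx types.

pub struct CredentialSent;

#[derive(Debug)]
pub struct FinishedSuccess;

/// A failed terminal state, e.g. after receiving a problem report.
#[derive(Debug)]
pub struct FinishedFailure {
    pub problem_report: String,
}

/// What the other party sent back after the credential was sent.
pub enum Response {
    Ack,
    ProblemReport(String),
}

impl CredentialSent {
    /// Consuming transition: returns exactly one of the two terminal
    /// states, with the failure treated as the Result's error type.
    pub fn handle_response(self, response: Response) -> Result<FinishedSuccess, FinishedFailure> {
        match response {
            Response::Ack => Ok(FinishedSuccess),
            Response::ProblemReport(report) => Err(FinishedFailure { problem_report: report }),
        }
    }
}
```

Because the transition consumes `self` and the two terminal states are distinct types, a caller cannot accidentally treat a failed exchange as a successful one.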
So yeah, I believe this should be split out into separate transitions, and therefore we should have some kind of intermediate state here for credential-built, and then the next state, credential-sent, and so forth. Yeah, I agree. Like we discussed before, I think the states in the RFCs are basically just conceptual, business-driven or flow-driven states, whereas we can take advantage of some intermediate states to represent stuff that happens in the middle, like this example: when you want to send a credential, you're first going to create it and then you're going to send it, and these two operations might fail for different reasons. So again, I guess it also makes it easier to manage what's happening and where errors occur, compared to having everything in one place. It comes down to a broader discussion, but overall I think this is fine to do. The only thing I'm going to call a downside, although I don't think it really is one, is thinking about what we discussed with George a while ago about migrating state machines from another Aries implementation, maybe in a different language, to AriesVCX, and maybe the other way around too. I mean, I see that as a nice thing to be able to do, but I think that its involving some leg work isn't necessarily that bad. And especially since this kind of thing can be resolved statically, outside of AriesVCX. Maybe we could have a different crate, or people could even contribute, saying, hey, these are some scripts to migrate an AriesVCX state machine to ACA-Py or the other way around, and so on, for people that need it. Because that's what this might mess up a bit, right? But in the end, I think having this kind of thing is also maybe not a priority we should focus on. We aim for interoperability in terms of runtime, but data structures are, I guess, up for grabs.
You can declare them as you want and as seems fit for the language you're working in. We shouldn't just translate the state machines from Python to Rust, or the other way around, simply because we might want to convert between one and the other, so as to have them as similar as possible. It's a nice thing to have, but I would not sacrifice our own environment, having things work as smoothly and as well as possible in AriesVCX, just to be able to transfer the state machines to some other side. And again, it would still be something that's possible; it's just going to involve a little bit more leg work. And... Yeah, 100% agree. I don't even have any further comment than that. Right. And yeah, actually, coming back to the notes here, the next issue with this implementation is what I refer to as transition linearization. There are more cases like this, I think, but currently when you call send credential, the first issue, which we already discussed just before, is that the credential is being built here. But the second issue is that you don't really know which state you're going to end up in: whether it's going to be credential-sent, the successful one, or whether you're going to go to the finished state with an error. And I drew a picture of this before, I'll just pull it up real quick. It's on George's type state pattern PR. Yeah, I drew this picture. So the idea is, and again, this could be done, I'm not sure if it's worth it, but it could theoretically be done even before the state pattern; the rework might actually not be difficult to do. I don't think we should, though, because that's one of the things that the type state pattern brings along implicitly: the transitions become linear. Yeah, right, right, right.
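Sketched with hypothetical names, the linearized flow with an intermediate state could look like this: building the credential and sending it are separate transitions, and each transition returns exactly one successor state, so the caller always knows where they land:

```rust
// Hypothetical type-state sketch -- illustrative only, not aries-vcx code.

pub struct OfferAccepted {
    pub attributes: Vec<(String, String)>,
}

pub struct CredentialBuilt {
    pub credential: String,
}

pub struct CredentialSent;

impl OfferAccepted {
    /// Intermediate transition: building the credential is its own step.
    /// (Here the "credential" is just a toy string built from attributes.)
    pub fn build_credential(self) -> CredentialBuilt {
        let credential = self
            .attributes
            .iter()
            .map(|(name, value)| format!("{}={}", name, value))
            .collect::<Vec<_>>()
            .join(";");
        CredentialBuilt { credential }
    }
}

impl CredentialBuilt {
    /// Second, separate transition: sending the built credential.
    /// A real transport failure would surface as a Result; on Err, the
    /// caller decides whether to send a problem report.
    pub fn send_credential(self) -> CredentialSent {
        CredentialSent
    }
}
```

Since each method consumes one state type and returns exactly one next state type, there is no branching hidden inside a transition: errors come back to the caller, and the caller (the state machine operator) owns the decision to send a problem report.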
But whether or not it's going to be done through the state pattern, we kind of agree, I guess, that it should look like this, right? There will always be only one result for each transition, and sending the problem report will be the responsibility of the state machine operator, the user. Yes. Right, cool. So that's good, because we are very much aligned on all three of these points, and this can be a kind of guideline for all of us, especially for George now, once he starts applying the state pattern to the additional protocols. Okay, so that brings us to the work in progress and the type state pattern. George couldn't join us today, but he left a comment on Discord saying: I've mostly paused work on the holder state pattern and will continue after the messages crate is merged. I will also review the messages crate if/when it's ready. And that consequently brings us to the messages crate, and that's an update from you. Yeah, so it's feature complete, finally. I'm basically working now on implementing tests and more documentation; I moved some things around, made the macros use fully qualified paths, and stuff like that. I am running into a small issue which I don't really like. It's related to serde's, or serde_json's, RawValue, which is basically just an untouched string slice from the input. It's there to avoid deserialization, and we could use that in some places, like in the attachments, where you don't know exactly what to expect, so you leave it to the user to deserialize based on whatever they expect to receive. Or conversely, I don't know, when you forward a message, say you implement a mediator, the inner JSON is of no importance to you really; it's basically there just so you can forward it, serialize it back after you wrap or unwrap it, and send it out.
Now, the problem with this is a bit related to how we have this new message implementation and the separation between the content and the decorators. Basically, there's a bug in serde, or serde_json, where RawValue cannot be flattened for some reason. From what I've seen, I looked a bit closer into the implementation: it's kind of hacked into serde_json and made to work only with the deserializer that serde_json provides. That's fine in most cases, but when you do some flattening or things like that, serde basically delegates the input to a different deserializer, similar to what we do with the message type and so on. And that's where the issue is: because RawValue is made to work explicitly with, to have special treatment in, the deserializer implemented for JSON alone, other deserializers don't really know how to read it and just fail. Yeah, so I looked into it. I don't think it's going to be a hard thing to fix. The issue has been open for like two years. So I might toy around with that a bit to see if I can figure it out in an hour or so, or get something working; I'll probably work a bit more on it over the weekend. Anyway, in the meantime I guess we can just use the standard Value, the one that allocates, although I would like to avoid that. In the future, hopefully, if I manage to put up a PR and fix this, we could go back to RawValue. And in fact, having that RawValue, having the ability to hold it in a structure that can be flattened, would actually mean, I think, that we can even remove that, not unsafe, but private stuff that we do for every message deserialization. Because we could just have the type in a structure, flatten the rest of the contents into a RawValue, and then deserialize that into the actual message after we handle the type.
So there wouldn't be any more weird stuff going on where we're abusing the private parts of serde. It would have quite some benefits, so yeah, that would be pretty cool. But apart from that, in the meantime I'm basically going to resort to using just the Value, the allocating one, and continue with implementing tests. Pretty much only tests and documentation are remaining now. There's still some renaming to be done, and there is a short list on the PR. I'm trying to keep track of things that I do, things that are more meaningful to note, or things that I run into and maybe don't want to deal with at that very moment. So it's basically a list of things that need to be done or have been done. That's pretty much it. And as you guys review it, I actually encourage you to add things there if you see something worth adding. Some of them, like, for instance, moving the error module inside message type and so on, seemed like a good idea initially, but then didn't. Even if that's the only error that really occurs, it makes sense to have it at the top of the crate, because if for some reason we had additional errors, they would all sit there. It's also used in the message macros, so it's nice to have it somewhere at the top where it's easily accessible and wouldn't be moved around throughout the place. Because if another error type comes along, we might just move it back to the top level anyway. So I think it's sitting well there. Yeah, so that's pretty much it. Not a big update, but things are moving, and anybody that wants to can start reviewing this. In terms of functionality, things are not really going to change much anymore; pretty much everything that was planned to be implemented is implemented. It's just a matter of minor adjustments and mostly testing and documentation.
Do you have any idea or a rough estimate of when you think we can merge it, once it's fully reviewed? It's going to be a huge pull request to get through review. Yeah, I mean, once these things are done and once it's reviewed. For instance, this is a big thing and it's kind of repetitive in some ways, but then not really. I noticed, for instance, that I missed aliasing the forward message from the routing protocol, so just the mediator stuff. There are these small things that can still be there; that's why I would like it to be reviewed. I don't know exactly how long; I suspect about a couple of days, maybe two or three days or so. I'm sure that next week we'll be able to merge it. Do you think it's possible to start integrating it into some protocol in parallel? At least on one protocol, let's say. I wouldn't want to do that until testing is done, really. Just like how I discovered this issue: I was aware of the limitations of RawValue, they just didn't occur to me before, because the messages were all in just one structure and there was no flattening. Things like this can come up, and I only discovered that because I was testing the message deserialization. So I would rather at least have the testing part done, which I've kind of picked up now and am focusing on right now, compared to, say, documentation, which can be added later, or even a rename, although I would like to have the renames done too so that we don't change things around twenty times. But documentation especially can be done a bit at a later stage, I guess. Although it would be nice to have everything in one go. Or, maybe, I'm just thinking whether I could somehow help out to parallelize this. I was wondering, maybe it could be fairly simple to write a kind of test which would compare the serialized, I'm not sure, maybe.
Yeah, I don't know if it would be time consuming or not, but the idea was to compare serialized messages between the two crates, the original messages crate and this new one, to check that they produce the same thing. It could be. It would eventually be deleted, so it wouldn't have any long-term value, but I guess it would give us... Yes, for the immediate future it would make sense. And I would suggest doing that in the old messages crate, so that when we delete it, we would also remove those tests. But it's a good idea. Yeah, okay. Maybe, if I have some extra... Yeah, you can write it out. Sure. It would also be nice. I'm actually going to add it to the list; it's a really good idea, I'm going to add it here. Okay. All right, we're coming to almost the last point, the upcoming work. I don't think there are any surprising or new items, but I'll put it here to conclude what's going on and what the main, well, let's call them short-term goals are. So: finish the messages crate, and then subsequently integrate the messages crate into AriesVCX. Then we would go on with the type state pattern, and we would apply that to the main pieces: the holder, started by George, the credential issuer, the proof verifier, and the presentation prover. And, yeah, potentially, well, I guess I wouldn't really go deep into this one, but before we do the type state pattern, I would actually like to apply the kinds of changes we discussed here, or at least some of them, to the verifier and the issuer, to minimize the amount of changes during the type state pattern transition.
And I don't know if it could save us much, but speaking from Absa's perspective, it would be nice for us to have a smooth transition, where we can migrate stuff piece by piece without having two totally different sets of data structures and implementations for the old approach and the new approach. So I would try to take the code without the type state pattern as it is today, apply some of these improvements, and then hopefully that will minimize the breaking changes during the type state pattern work itself and make it easier to think about. I don't know, I personally still think that some of that effort will go to waste, but maybe I'm just not familiar with it; I didn't look at all the state machines, so I'm not particularly familiar with what's happening under the hood. But I am quite sure that migrating to the type state pattern will involve some amount of reworking the methods and all that. Oh, I'm fine with the methods; I'm mainly concerned about the state, the serialization format itself. Basically, you can do the type state pattern as long as you don't significantly change that. Yeah, but we can have that conversion in place, exactly like you did just now, to be able to deserialize the old format, convert it to the new one, and work with that from that point on. Yeah, that's something we definitely won't avoid; I wasn't even able to avoid it right now. Yeah, that's probably a necessary evil. That's right. Okay. And I think this is going to keep us busy for quite a while, I would say a month at least. But after that, what I'd like to put more focus on is VDR tools, basically replacing VDR tools with the smaller libraries. That work has been done for the credential holder, I suppose, and the prover, or no, maybe the verifier.
I don't remember off the top of my head, but when using AriesVCX from certain roles, using only certain state machines, we can already avoid using VDR tools today, since George submitted that PR with a feature flag. But we should push this further, to be able to get rid of VDR tools across the board, use all of the protocols without VDR tools, and then hopefully eventually delete it completely. That will liberate us from, well, from VDR tools, which does carry accrued technical debt, and it also has the potential to simplify the code base a bit. Well, it's hidden behind the traits, and I think those traits will stay, so maybe it won't have a significant impact on the code base itself, as it's kind of shielded off even today. But it's still an important step in the evolution, to get rid of this. Do you have, George, do you have any... there's no George here. Anything for the roadmap here? I don't know, the type state pattern, the messages crate, and then... There's also the DID resolver and the DID parser. Mm-hmm. Okay, I'll put those at the same level: the DID resolver. And that could actually be pretty much parallelized then, I guess, by multiple people. Yeah. The DID resolver, and the DID parser, yeah. And that basically brings us pretty much... well, that already covers many of the points here in the backlog. And I guess after we have all of these, then I would suggest we can actually start doing the 2.0 protocols, and maybe then start looking into DIDComm 2.0 or so. Mm-hmm. And it'll be beautiful. In the meantime, we may even have a mentee who could help us with the UniFFI wrapper and the mediator client. That would be really, really nice. Well, we'll see how the program goes. Well, we covered everything. So, six minutes before 11. Do you have anything else to cover? Nope, that's it for me. All right.
All right, in that case I'll stop sharing my screen. Thanks for joining in, thanks everyone for listening, and we'll be here again next week. Cheers. Have a good one. All right.