Hey, good afternoon. Hey, how's it going? All good. Thursday once again — it always passes so fast. We have the Thursday call, then Friday, maybe the lighter day, then we're usually busy Monday and Tuesday, and suddenly one more week has passed and we're back here. Exactly, I feel the same. Let's wait a minute or two for others; I believe more folks are coming, usually they're here already. Alright, let me drop in the meeting analytics AI. Good morning. Let's get started. Welcome once again to the Aries VCX community call; it's the 18th of May 2023. Here is the Hyperledger antitrust policy notice. Alright, so let's get into it. To kick off the meeting — I'm sure there are people tuning into this who are eager to hear the mentorship program update. As you might know, we have entered the next stage of the program: applications have closed, and now it's time for us, the mentors, to go through the applications and select two applicants, one for each of the two projects we have out there. There have been over 50 full applications for the two projects overall, so very high interest. There are lots of smart, apparently hard-working, and eager people who applied, so it will be difficult to choose only two. Please don't be disappointed if you are not chosen. We'll be going over the applications by the end of the month. Now, moving on to the good first issues. I'll start with the easier piece: there's a new issue I have created. It's just some refactoring — moving some functions from one crate to another. I think it should be fairly clear, but if you'd like to pick it up and have any questions, don't hesitate to ask there.
Another thing, kind of up for discussion: I had the idea that we would try to ask people — or try to live by the rule — that one person shouldn't book multiple good first issues at once. One person should only have one good first issue in progress at a time. The idea is to motivate people to finish things to the end, rather than booking multiple issues at the same time, maybe not getting them done, and perhaps discouraging others from picking those issues up. Yeah, good idea. Listeners, don't take this personally in any way — I just saw that one contributor wrote on multiple of the issues that he would like to pick them up, but there's already an issue from him in progress that needs to be driven to the end. So if you're tuning into this: let's try to finish that one up before picking up the other pieces. But I'm very happy that there is interest and that people want to work on these things. It's awesome, and I love it even more when we're merging the PRs. Linking to that topic, I'll move on to the work which has been done since last week. There's been a successful PR — I think not a first-time contribution, maybe a second PR from the same contributor — which has been merged, so props and thanks for the contribution. Then, moving to other stuff: a small refactoring, or rather just a renaming, where we renamed the DID document builder crate to did_doc, and our original diddoc crate (without the underscore) to diddoc_legacy. The plan is now to go ahead and basically get rid of the legacy crate in favor of did_doc, which is a much more robust crate to work with. That's still pending work to be done.
Next up, we extended an aries-vcx API to accept a wider range of inputs — essentially an additional input format which is more powerful. There's some rationale in the PR explaining what exactly is going on. And lastly, probably the biggest piece merged since last week was the implementation of the BaseLedger trait on top of the indy-vdr-proxy client. Essentially, this enables you to use aries-vcx but call the ledger through indy-vdr-proxy. This is pretty cool, and part of this PR was also quite a nice refactoring of the coupling of different components; in particular, the submitter trait has been taken out. So now we have the indy-vdr implementation, but it can be injected with different transaction submitters: one submitter, the original one, essentially submits transactions directly (over ZMQ), whereas the new submitter Miro has implemented here submits transactions through indy-vdr-proxy, aka over HTTP. So that was pretty nice. That's what we have done, and we have lots of stuff in progress right now as well. Linking to indy-vdr-proxy and all the ledger work, we have the indy ledger response parser. And this is amazing, again. Originally there were rather ad-hoc approaches to parsing transactions, and they worked, but nevertheless they were hard to deal with, and hard to verify whether the implementation was actually correct or not. With this parser, all these methods are much shorter now — instead, we just call the parser. If we look at the implementation of this parsing method — I'll review this at a high level; where is it, probably the parser's lib.rs — what the parsing method does is parse the string response from the ledger into a typed data model of ledger responses.
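As an aside, the submitter decoupling described above can be sketched roughly like this. This is an illustrative sketch, not the actual aries-vcx API — all the trait and type names here are made up for illustration:

```rust
// Sketch of a transaction-submitter trait that an indy-vdr based ledger
// can be injected with. Names are hypothetical, not the real aries-vcx code.
trait RequestSubmitter {
    fn submit(&self, request_json: &str) -> Result<String, String>;
}

// Submits directly to the ledger pool (e.g. over ZMQ via indy-vdr).
struct DirectSubmitter;
impl RequestSubmitter for DirectSubmitter {
    fn submit(&self, request_json: &str) -> Result<String, String> {
        // ... a real implementation would send via an indy-vdr pool connection ...
        Ok(format!("direct response to {request_json}"))
    }
}

// Submits over HTTP to an indy-vdr-proxy instance instead.
struct ProxySubmitter {
    proxy_url: String,
}
impl RequestSubmitter for ProxySubmitter {
    fn submit(&self, request_json: &str) -> Result<String, String> {
        // ... a real implementation would POST request_json to self.proxy_url ...
        Ok(format!("proxy({}) response to {request_json}", self.proxy_url))
    }
}

// The ledger is generic over the submitter, so either one can be injected.
struct IndyVdrLedger<S: RequestSubmitter> {
    submitter: S,
}
impl<S: RequestSubmitter> IndyVdrLedger<S> {
    fn get_schema(&self, id: &str) -> Result<String, String> {
        self.submitter
            .submit(&format!("{{\"op\":\"GET_SCHEMA\",\"id\":\"{id}\"}}"))
    }
}
```

The point of the design is that the ledger logic stays identical; only the transport (direct vs. proxy) is swapped at construction time.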
All these ledger response types are in the domain directory right here. And this was not written from scratch; it was taken out — literally copy-pasted — of vdrtools. So this seems to be a pretty useful piece of vdrtools to extract. It's duplicated right now, as this code is also still used inside vdrtools itself, but it serves the purpose well here. So, what do we do in these parsing methods again? We get the string response from the indy-vdr transaction submitter, pass the string to this parser, and parse it into the ledger response data model. Then we remap the ledger response model onto — well, not exactly the credx types — the types which the indy-vdr BaseLedger implementation returns. So it returns these indy data types. Where is it? Let's see an example — the schema one. The schema is taken from here in the indy-vdr ledger. Wait, these are requests. And what are we returning here? I can open it up, give me a second. Here: the get credential definition response. This is the ledger response, and what we return from here — yeah, this is in the indy data types, which come from the indy-shared-rs repository, aka credx and family. So that's how it works. It feels much safer now. Does that new crate, the indy ledger response parser, have any dependencies on vdrtools, or is it completely separate? Yeah, I believe it does; we can trace it out. Right, okay. Well, this is literally vdrtools and the indy data types. And you're asking about the indy data types, right — whether they depend on vdrtools at all? Yeah, seems like they do so far. I'm not exactly sure why, since a whole bunch of code was copy-pasted, extracted out of vdrtools. Let's take a quick look at the indy data types, because I'm not sure myself.
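Stepping back, the two-stage flow just walked through — raw ledger JSON into a typed ledger response, then remapped onto the data types the BaseLedger implementation returns — could be sketched like this. All names and shapes here are invented for illustration; the real parser and data types live in the response-parser crate and indy-shared-rs:

```rust
// Typed ledger response model (illustrative stand-in for the parser's domain types).
#[derive(Debug)]
struct GetSchemaReply {
    seq_no: u64,
    name: String,
}

// The data type the BaseLedger implementation ultimately returns (illustrative).
#[derive(Debug, PartialEq)]
struct Schema {
    id: String,
    name: String,
}

// Stage 1: parse the raw string response into the typed ledger model.
fn parse_get_schema_response(raw: &str) -> Result<GetSchemaReply, String> {
    // A real parser deserializes the full JSON envelope; this just fakes it.
    if raw.contains("\"op\":\"REPLY\"") {
        Ok(GetSchemaReply { seq_no: 42, name: "degree".to_string() })
    } else {
        Err("not a REPLY".to_string())
    }
}

// Stage 2: remap the ledger model onto the returned data type.
fn get_schema(raw_response: &str) -> Result<Schema, String> {
    let reply = parse_get_schema_response(raw_response)?;
    Ok(Schema {
        id: format!("schema:{}", reply.seq_no),
        name: reply.name,
    })
}
```

The benefit described on the call is exactly this layering: the string-handling lives in one place, and everything above it works with typed values.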
Oh, okay, seems like it's only some error types. Not sure why; I feel like it would be possible to get rid of it — seems like just types, maybe kind of a shortcut. Yeah, part of the copy-pasting was just to do that for now. But yeah, guys, I reviewed this already and it looks good to me — go ahead and take a look as well. I don't want to merge it just on my own; at least two approvals would be nice. Alright, let's go further. This is in progress, but it's essentially waiting for review — the response parser, this one. Next up we have the typing for those structures. It's yours, George. I know you described it very nicely in the issue, so I don't think we have to go through it or explain it too much, but do you want to say a few words? Yeah, I think all I have left is to fix up some tests. The types for both APIs are all there, and I've just been reworking the tests to use the types instead. I also found some older tests that were forcing themselves to use indy and not the modular libs, so now I'm trying to rework some of them to test both. But yeah, hopefully I can finish this soon. Yeah, it's good to get away from a lot of the JSON manipulation that was going on previously. Actually, I dropped off for a couple of seconds and didn't hear everything — I'm sure it's recorded — but I guess what you're saying is that you basically got rid of those strings? That's the main idea? Yeah, exactly, so internally we don't have to manipulate JSON as much. That's really awesome. This kind of change pays off technical debt we've been accruing for a long time; I'm really happy to see it coming. Also around the traits — that's actually the next item here: the typing around the traits.
On the typing around the base wallet trait interface, I see it's just missing some tests, so hopefully we can drive this home. I see the contributor is actively working on it, which is great. I've been helping him a bit with this, and it seems pretty close — just some of the data mappings are not working, but I think it should be an easy fix. Not to speak on his behalf, but it seems like he's mostly been working on fixing the tags at the moment. Part of the original issue — the wallet records struct — still needs to be implemented, so maybe that could be another PR. I'll let him know after this call and see what he wants to do. Awesome, very nice. Okay, and lastly, we have this item for the credx base issuer, and I guess that one is being worked on as well. There's no PR yet, but I know it's in progress. How is it going? Slow but steady, pretty much. The PR will come soon. The thing is that I didn't really have anything worth looking at, so I didn't push anything yet. But I'm actually just one method away from having the trait all implemented — at least as an initial draft — so then it can be reviewed and we can start implementing tests and whatnot, to see that it's actually working. So things are moving. Definitely the typing would have helped with all of this, but we'll get there. As in the anoncreds-rs typing? No, no — just credx. I'm referring to having strong types for the traits and everything, because for me, not being that familiar with how everything works — although this is an opportunity to go in depth and learn more, so that's good.
Not being extremely familiar with how everything works makes it difficult to track, because you're just getting a string here and a string there; you have to convert it to something, pass it somewhere, it gets converted internally again — it's a whole mess, and you kind of lose track of what you're expecting where and how it's supposed to look. But we're going to fix that one day, so I'm looking forward to it. Have you had to store stuff in the wallet? Yes. Okay. And we're also going to need — that's basically the remaining thing — a reference, basically just a handle to a ledger (sorry, not a wallet), for when revocations are getting published by the issuer. There's no way around that, so I'm going to have to extend that type just a bit. I don't expect it to be that big of a deal — like I said, it's just the implementation itself. I'm just not sure — what you said caught my attention, that part with the ledger and the publishing — because I think Miro broke down some trait method into two smaller methods, and it was something about revocation and the ledger. I'm just not sure if you have the latest version; if things got merged, maybe that would solve your problem. Let me have a look at this. Here it is: aries-vcx-core, then anoncreds — the base anoncreds trait — and toward the end, publish revocations. Right, yeah — I don't mean just publish_local_revocations. Yeah, that's what he split into two methods, essentially, so you won't need the ledger, because he changed the interface such that it's not here anymore; it's somewhere else. Okay. Show history for selection — yeah, it used to be like this: publish_local_revocations, which internally was basically calling these two methods.
And these two methods deal with the wallet, because that's where we store the intermediate delta after the revocation has been applied. Yeah, but it's not about the wallet — the wallet is already there anyway, so that's fine. I believe it's the type change. Can you have a look? Sorry, not here — the actual type that the trait is implemented for; I don't know what it's called. It's still base — this is the base trait. Okay, can you have a look at the credx implementation? Right, that's fine, but can you scroll up a bit more, to where the type is defined? It's the same type. Okay, so the wallet is still there. Yeah, because I think there are a lot of places where you need the wallet, so if you only change that... Okay, but how does it work now? So it was actually this PR — I think it was merged around the time you were starting on this, and I think you missed it. I think we went through it last week, or at least mentioned it: it was this split of publish_local_revocations. Okay, yeah, I didn't see this. Yeah, he made the necessary modifications to the code to avoid having the anoncreds trait be aware of the ledger. Okay, cool. Alright, then I'll rebase, and that means it's going to be easier to implement that last part; then we should be able to have a look at this too, and I can start looking at tests. Okay, that's even better — that works. So you won't need the ledger in the anoncreds implementation? I assume not — that's what it looks like; that was the purpose of the PR Patrik just showed. In the beginning, he also modified the indy implementation — the libvdrtools implementation. That's basically the whole idea: not to depend on the pool handle. So I assume the ledger transaction sending was moved somewhere up in the stack.
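The refactor being discussed — splitting a "revoke and publish" operation so the anoncreds trait no longer needs a ledger handle — can be sketched as follows. This is a hedged sketch under assumed names; the real trait, method names, and wallet interaction in aries-vcx differ:

```rust
// Illustrative stand-ins; a real wallet would persist the intermediate delta.
struct Wallet;
struct Ledger;
struct RevocationDelta(Vec<u32>);

// The anoncreds trait knows only about the wallet, never the ledger.
trait BaseAnonCreds {
    // 1) Mark the credential revoked and record the delta in the wallet.
    fn revoke_credential_local(&self, wallet: &Wallet, cred_rev_id: u32) -> RevocationDelta;
    // 2) Hand back the accumulated local delta; the *caller* publishes it.
    fn get_local_revocations(&self, wallet: &Wallet) -> RevocationDelta;
}

struct CredxAnonCreds;
impl BaseAnonCreds for CredxAnonCreds {
    fn revoke_credential_local(&self, _wallet: &Wallet, cred_rev_id: u32) -> RevocationDelta {
        // ... store the intermediate delta in the wallet here ...
        RevocationDelta(vec![cred_rev_id])
    }
    fn get_local_revocations(&self, _wallet: &Wallet) -> RevocationDelta {
        // ... read accumulated deltas back from the wallet ...
        RevocationDelta(vec![1, 2])
    }
}

// Ledger submission now happens a layer above, where a ledger handle exists,
// keeping the anoncreds trait itself ledger-agnostic.
fn publish_revocations(anoncreds: &impl BaseAnonCreds, wallet: &Wallet, _ledger: &Ledger) -> usize {
    let delta = anoncreds.get_local_revocations(wallet);
    // ... build and submit the ledger transaction from `delta` here ...
    delta.0.len()
}
```

This is why, after the split, the credx issuer implementation no longer needs a ledger reference baked into its type.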
Technically down in the stack, anyway. Yeah, cool. Alright — almost there in terms of the credx issuer. We'll see how the testing goes, what compatibility issues we might run into, and migration. That can be tackled later. Yeah, so the implementation — this first draft — is one side of things; then there's going to be testing. But then I guess the most important part will be to have that migration in place, like we discussed with George last week: some sort of routine that runs when the agent starts, or something like that, so that it migrates the credentials and all the wallet objects from the libvdrtools implementation to the credx implementation. And then we're able to drop vdrtools — at least most of it. That would be a separate PR, right? Yeah, probably — I guess it would be a separate PR, so the PR for this will basically just revolve around the credx issuer implementation and having the tests pass. Once that's done, we can have the migration implemented, and it can be completely separate; it doesn't even have to live here — it can be some sort of binary that gets called, or even just a library with one method. Yeah, I guess that would make things easy, at least from an implementation perspective, and then we would be able to drop the legacy libvdrtools, apart from the wallet. That's right — we're going to get rid of lots of code. I'm very excited for that; as I was saying before, I'll definitely be opening champagne when we delete 30,000 lines of code. I think there might still be some remaining parts of the indy-vdr ledger implementation that would need to be implemented to fully use the modular libs as an issuer. Yeah, I know — Miro is aware of it.
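The startup migration routine floated above could look something like this minimal sketch. Everything here — the record layout, the format tag, the function names — is assumed for illustration, not the actual wallet schema:

```rust
// Hypothetical wallet record with a tag saying which serialization it uses.
#[derive(Clone, Debug, PartialEq)]
enum Format {
    LegacyVdrTools,
    Credx,
}

#[derive(Clone, Debug)]
struct WalletRecord {
    id: String,
    format: Format,
    payload: String,
}

// Rewrite one legacy record into the credx format; already-migrated
// records pass through untouched, so the routine is safe to re-run.
fn migrate_record(mut rec: WalletRecord) -> WalletRecord {
    if rec.format == Format::LegacyVdrTools {
        // ... remap `rec.payload` into the credx serialization here ...
        rec.format = Format::Credx;
    }
    rec
}

// Routine intended to run once on agent startup, as discussed on the call.
fn migrate_wallet(records: Vec<WalletRecord>) -> Vec<WalletRecord> {
    records.into_iter().map(migrate_record).collect()
}
```

Making the migration idempotent (re-runnable without harm) is the property that lets it live in a standalone binary or a single library method, as suggested.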
And I think Miro was saying that he's kind of waiting for the credx base issuer to be implemented before he can reasonably test the indy-vdr issuer side. I don't really remember the particular reason why, but that's what I remember. But wait, so what's the problem regarding the ledger? You can explain if you want, Patrik. No, go ahead. Oh yeah — so when you're using a modular profile, as in the indy-vdr ledger plus credx and anoncreds, there are still some issuer-related ledger transactions that haven't been implemented for the indy-vdr implementation — I think specifically around some revocation-publishing transactions. Okay. And so I'm sort of worrying now that in order for you to fully test the credx and anoncreds issuer functionality, this ledger publishing implementation might need to be fleshed out as well. Or it can be left until later — technically, in order to test credx, I guess you could use a combination of credx and anoncreds plus the vdrtools ledger. Right, that should work. Yeah, I guess it's the same ledger, same ecosystem, so it should be fine. But I see there's the BaseLedger trait with two implementations right now, the indy ledger and the indy-vdr ledger. So I assume the indy-vdr ledger is the... which one is the vdrtools implementation? The indy ledger is the vdrtools one. I can see how this is confusing. Yeah, and to make it more confusing, we have the indy-vdr ledger, which is the indy-vdr one. You dropped out for a second, Patrik — I couldn't hear what you were saying, but I got it. Yeah, it's just easy to get the names confused with vdrtools.
I just want to say that it makes me feel better about myself when I see that other people are as bad at naming as I am. I mean, it's hard to reach my level, but you guys are getting close. Okay, alright. So then the indy ledger is the — okay, the IndySdk and IndyVdr ledgers. That makes sense, because I see stuff is not implemented here. But I still don't think there would necessarily be issues in terms of using the, let's say, legacy ledger. I don't think so either; I think that's fine. It would make testing pretty complicated, though, because at the moment it's set up to use the modular libs profile, which is how the indy-vdr ledger and the credx implementations get their coverage — get that testing done. So if you switch that to the vdrtools ledger, then you're missing a lot of coverage and testing for the indy-vdr ledger. Geez, it's hard to keep up with these names. Okay. Yeah, but that should be fine. Just for the purpose of testing the credx implementation, we could tweak that profile to essentially use the vdrtools ledger — but obviously we don't want to affect coverage and CI, so we could keep the existing setup and add a new CI job using a profile with a different combination. That way we won't decrease the coverage, but we can find a way to test credx if, in the meantime, the issuer portion of the indy-vdr ledger is not yet implemented. Yeah, fair enough. Okay, I'll work around it and maybe implement another profile combining the two things. Cool. Just on that — if we go that route, maybe add a comment that we should get rid of it as soon as possible; it'll be a nightmare to maintain three different testing profiles. Yeah, for sure. But Miro is aware of it, and he's now deep down in all the ledger stuff, so he'll finish this piece as well. Okay, great.
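The "profile" composition being juggled here can be sketched like this. The trait and type names are illustrative stand-ins, not the actual aries-vcx profile API — the point is only how a temporary test profile can pair the credx anoncreds implementation with the legacy ledger:

```rust
// Illustrative component traits; the real ones carry actual ledger/anoncreds methods.
trait BaseLedger {
    fn name(&self) -> &'static str;
}
trait BaseAnonCreds {
    fn name(&self) -> &'static str;
}

struct IndyVdrLedger;
impl BaseLedger for IndyVdrLedger {
    fn name(&self) -> &'static str { "indy-vdr" }
}
struct IndySdkLedger; // backed by vdrtools
impl BaseLedger for IndySdkLedger {
    fn name(&self) -> &'static str { "vdrtools" }
}
struct CredxAnonCreds;
impl BaseAnonCreds for CredxAnonCreds {
    fn name(&self) -> &'static str { "credx" }
}

// A profile bundles one implementation of each component.
struct Profile {
    ledger: Box<dyn BaseLedger>,
    anoncreds: Box<dyn BaseAnonCreds>,
}

fn modular_libs_profile() -> Profile {
    Profile { ledger: Box::new(IndyVdrLedger), anoncreds: Box::new(CredxAnonCreds) }
}

// Temporary combination for testing credx issuance while the issuer side of
// the indy-vdr ledger is unimplemented. TODO: remove once it lands.
fn credx_over_legacy_ledger_profile() -> Profile {
    Profile { ledger: Box::new(IndySdkLedger), anoncreds: Box::new(CredxAnonCreds) }
}
```

Because the components are trait objects, adding the extra CI job mentioned on the call is just a matter of constructing a different profile, which is also why it is cheap to delete later.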
And then we can just unify it, use indy-vdr, and get rid of the vdrtools ledger completely — which would itself delete lots of code. Alright, this brings us almost to the end. This is just kind of left over from last week, but nevertheless, I guess the priorities are pretty clear: sort out these interfaces — the ledger traits and the other core traits — and basically everything we discussed. The typestate pattern was kind of shifted onto a second rail for now, in favor of this breakdown work. If I have time — well, what I personally wanted, in my role as an implementer, is to be working on the did_doc crate integration, so we can get rid of diddoc_legacy. But after that I would like to move on to the typestate pattern and perhaps continue with the holder, and if time allows, try to implement the counterpart and finally do some testing. We'll see. I just wanted to add the reason why this moved to a secondary position in terms of priorities: the ledger, the wallet, and the anoncreds stuff are lower level, closer to the foundation of the entire library, so it's better to start from the bottom up rather than from the top. It's not that the other order would necessarily be bad; it would just make everything much simpler to first accommodate the lower levels — the core of the library — and then do the protocols and the state machines once that's done, rather than the other way around: working a lot on the protocols and state machines and then realizing we still need to do a lot of work on the aries-vcx-core stuff.
It's not necessarily that they're conflicting that much, but in terms of functionality, a state machine is used in one protocol, whereas this core stuff is used across the entire library, so it makes sense that it's more important to finish that first and then move to the other parts. There's also the aspect — maybe George knows what I'm talking about — of the problems, let's call them breaking changes, that would come with the implementation of the typestate pattern for the state machines. There's a lot to discuss in terms of how to implement them, and what changes we should or shouldn't make. I guess it also depends on how we go about it: whether we do just the typestate pattern first, and then maybe work toward modifying the states to be about message generation rather than message sending. But that means the states change, and the serialization format changes, and as a result things might be incompatible. So we also have to consider that. Just brainstorming — ignore me if I'm babbling. No, you're totally right; it's a very difficult problem. And the thing is, I wanted to work on implementing these new states for the connection — to have one state machine finished in the condition we want all of them to reach, as some sort of model. But I looked at it and thought about it a lot, and then this core traits work came around, so that's more important right now. Still, a lot of questions popped into my head about how to go about this, and about the consequences it will have. The important part is that with the states changing, and maybe holding some sort of message — because that was the point, right:
the state is no longer going to mean "the message was sent"; the state is about "the message was generated" — you can pick the message up from there, send it however many times you want, and handle the errors that come from that. But I'm fairly confident that, given how the states are right now and how they would look with that change done, there's not always going to be a possible conversion between an old state and a new state, because the message might contain additional data that's simply not available in the state machines as they are today. So we'll have to see how to handle that. Yeah, I know what you mean — we might make changes such that not every old state can be mapped onto a new state. Yeah, exactly, that's what I mean. So there's a lot to discuss in terms of how to go about it, but this is just a heads-up; you'll see what I mean. But again, maybe we can focus first on just getting the typestate pattern in, because that alone would make everything — not a bit, but a lot — nicer to look at and work with: the states would be more well defined and the transitions better defined. And that doesn't necessarily change the states themselves, at least not that much and not in an incompatible manner, I believe; we can definitely arrange something. So maybe we focus on that first, and then think about the state changes themselves. Yeah, I think we should just re-implement those state machines the proper way — kind of forget how we do it now and build the best version of the state machines we can — and then retroactively see: first of all, do we want conversions, do we need them, are there people who actually need them?
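For readers following along, the typestate pattern being referred to can be sketched in a few lines. This is a minimal illustration with made-up state names, not the actual aries-vcx connection machine: each state is its own type, transitions consume the old state, and the new states hold the generated message rather than assuming it was sent:

```rust
// A protocol message; in reality this would be a full Aries message type.
struct RequestMsg(String);

// States as distinct types: invalid transitions simply don't compile.
struct Invited;
struct RequestPrepared {
    request: RequestMsg,
}

// The state machine is generic over its current state.
struct Connection<S> {
    state: S,
}

impl Connection<Invited> {
    // Transition: *generate* the request. Sending it is the caller's job,
    // so transport errors and retries live outside the state machine.
    fn prepare_request(self, label: &str) -> Connection<RequestPrepared> {
        let request = RequestMsg(format!("connection-request from {label}"));
        Connection { state: RequestPrepared { request } }
    }
}

impl Connection<RequestPrepared> {
    // The caller fetches the stored message and may (re)send it at will.
    fn get_request(&self) -> &str {
        &self.state.request.0
    }
}
```

Note that `Connection<RequestPrepared>` has no `prepare_request` method: the compiler enforces the transition graph, which is the "better-defined states and transitions" benefit mentioned above.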
And then, if yes, how can we reasonably convert the old state machines to the new ones, if we decide to do it. Sorry — one thing first: I guess the tricky states are really the intermediate states. The beginning states — the initial states — and the complete, or final, states should be fairly easy to map, because there's not a lot going on there. Ultimately, if the protocol has just started or is complete, you end up with the same stuff either way, especially because there aren't really messages to send; in the initial state, if there are messages to send, they're constructed from pretty much nothing. So that should be doable. And I guess that's the important thing, because if you're in the middle of a protocol and you lose that state machine — because it cannot be converted, since the states changed — you can restart the protocol. Probably the more important part is keeping the completed protocols intact. Maybe, I don't know. Yeah, I definitely agree. Typically these protocols conclude quickly: they either finish or never finish, and it's not like you keep the state machines in an intermediate state for days, I think. There could be cases, maybe with issuance of credentials, but I think that's an edge case. What's your take, George, on this possible issue with conversions of old intermediate states to the new states with the typestate pattern?
Yeah, I think it's going to be extremely hard, if not impossible, to have some of the old serialized formats deserialize into the new representations — which is partly why I was pushing for the from_parts and into_parts concepts a while ago. Those let consumers such as myself get at the raw parts of what's being stored in a state machine, such as the underlying messages being used. So then, if there's this huge migration to the new typestate pattern, you don't have to worry about serialization formats, because you have the raw data in your own format. Yeah — go ahead. I was going to say, I think in general that's the approach we should encourage people to take, rather than relying on a serialized format, storing it in their database or wherever for a long period of time, and expecting it to always be compatible. Right. So, the thing is, I'm not sure that helps with the problem we're going to have, because the from_parts thing takes the state itself. Even if you break that down, the problem comes from the fact that the current states do contain some information — but that information is, I'm sure, in some cases simply not enough to construct the full message you would be sending, which is what would actually be stored in the new states. So regardless of how you split it, consumers will in some cases simply not have the data they need to reconstruct that message. Yeah, I don't think that's necessarily a problem — or maybe I don't know what you mean exactly; to clarify, let's find a state machine where this would be impossible. Take the issuance holder state machine — there must be some "sent" state there.
RequestSent, yeah, that could be a good candidate, I think — the RequestSent state. Okay, so right now, when you are in the OfferReceived state and you decide to proceed, you call a function like send_request. What it does is generate a credential request out of the offer you stored, send that credential request, and bring you to the RequestSent state. But in that state you no longer store the message — the credential request message you generated. With the new design — if I look at the PR from George, if you look at the states — there is going to be a RequestPrepared state, which contains the credential request message. So if you previously stored a state machine in the RequestSent state, it will be impossible to map it onto this kind-of-equivalent new state, because in the new design you prepare the request and then, as the owner of the state machine, you take care of sending it — and that piece of information, the message itself, is simply missing from the old state. Right. So, just thinking on the spot: I don't think it would matter if you'd already sent that request message in the old state machine pattern and then converted it into the new typestate pattern. You could probably just put a dummy message in there, because you don't care about that message — you've already sent it. That's true — theoretically, I think that would work. But I don't know; it sounds like it could work, but it also sounds like it could cause more trouble and confusion than anything else. Maybe another example of this is in the connection. Look at the inviter states — the connection inviter states — and at the Responded state:
Responded means the signed response was sent. In the future, with the approach we want to take, the signed response will actually be stored in that state, and consumers will take it from there and send it themselves. Right now we do have the response, but it lives in the Requested state: the request comes in, the message is processed, the response is generated, and it's basically sent from there. The transition from Requested to Responded is nominally about sending that signed response, but if you look at the Requested state, it's really just a follow-up. So if you were to map these states to the new format, you couldn't map the old Requested state to the new Requested state. Although, technically, you could map it to Responded; that's what it really means: you have the signed response and then you have to send it. Yeah, I don't know. Maybe let's just implement the new state machines the proper way and then we'll see. Maybe there will be strategies to compromise, for example filling in a dummy message; that could be a potential way to migrate all the old state machines, converting them to the new ones even from intermediate states. We'll see when we start working on it. I think George might be on to something: we might be able to map states, maybe not one-to-one. It might not be the case everywhere, but the old Requested state in the connection protocol would technically map to the new Responded state, because that's when the response is generated. Yeah, that state was a bit of an edge case, since we're generating the message there.
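A minimal type-state sketch of the inviter flow just described, under assumed names: in the new design, the signed response is produced by the Requested-to-Responded transition and stored in Responded, where the consumer picks it up and sends it. This also shows why an old-format Requested state, which has already generated the response, maps most naturally onto the new Responded state.

```rust
// Hypothetical type-state sketch of the connection inviter flow;
// names and the signing logic are stand-ins, not aries-vcx code.

#[derive(Debug, Clone, PartialEq)]
pub struct SignedResponse(pub String);

/// A connection request has been received.
pub struct Requested {
    pub request: String,
}

/// The signed response exists and is retained; sending it is the
/// responsibility of whoever owns this state.
pub struct Responded {
    pub signed_response: SignedResponse,
}

impl Requested {
    /// Process the request and produce the signed response, consuming
    /// the old state so stale transitions are impossible at compile time.
    pub fn prepare_response(self) -> Responded {
        Responded {
            signed_response: SignedResponse(format!("signed({})", self.request)),
        }
    }
}
```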
Yeah, I wanted to change it when I worked on this, but it was breaking a lot of stuff, so I just left it like that and made it a problem for another day. Okay, we'll see. This was mainly just a heads up. Sorry, did that discussion start with us saying that core work should be the first priority? Or was that not the takeaway? I thought that conversation started with the idea that we should approach things bottom-up and look at the core implementation before the type states. Yeah, that's kind of what we ended up doing; it's not necessarily that one would be a problem. There's a lot of work in both places, and it's not that they conflict that much; it's more about the impact the changes have. The core traits, and aries-vcx-core overall, are used across the entirety of the library, whereas a state machine or a single protocol is more self-contained. So that's really all there is to it; it's mostly about the impact. All of it is important work nevertheless. Yeah, sure. All right, I just wanted to note one thing, maybe not yet in progress but definitely upcoming. As Miroslav is neck deep in ledger stuff, he also wants to implement caching for the VDR, because indy-vdr itself doesn't cache anything. He basically did this transaction submitter component, then the one waiting for review is the response parser, and one more component to be added is the caching component. Technically you could swap out the caching component as you want, so you could plug in a custom implementation, but we'll definitely be providing one. I'm sure you'll appreciate that upgrade; especially in a mobile environment I feel like it could be useful. Yeah, especially around revocation proofs.
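The swappable, optional caching component described above could be sketched along these lines. Everything here is hypothetical, not indy-vdr's actual API: a trait the ledger client consults before doing a real round trip, a simple in-memory default, and the option to pass no cache at all (a wallet-record-backed cache would just be another implementation of the same trait).

```rust
// Sketch of an optional, swappable ledger cache; names are invented.
use std::collections::HashMap;

pub trait LedgerCache {
    fn get(&self, key: &str) -> Option<String>;
    fn put(&mut self, key: &str, value: String);
}

/// Default in-memory cache; consumers can supply their own or none.
#[derive(Default)]
pub struct InMemoryCache {
    entries: HashMap<String, String>,
}

impl LedgerCache for InMemoryCache {
    fn get(&self, key: &str) -> Option<String> {
        self.entries.get(key).cloned()
    }
    fn put(&mut self, key: &str, value: String) {
        self.entries.insert(key.to_string(), value);
    }
}

/// Resolve a ledger object, consulting the optional cache first; the
/// `fetch` closure stands in for the real ledger request.
pub fn resolve(
    cache: Option<&mut dyn LedgerCache>,
    id: &str,
    fetch: impl Fn(&str) -> String,
) -> String {
    match cache {
        Some(cache) => {
            if let Some(hit) = cache.get(id) {
                return hit;
            }
            let value = fetch(id);
            cache.put(id, value.clone());
            value
        }
        None => fetch(id),
    }
}
```

Keeping the cache behind a trait is what makes it both optional and opinion-free: invalidation policy lives entirely in the chosen implementation.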
I find that sometimes it can take like ten seconds to create a proof, just because of all the ledger transactions that need to happen. Yeah. And I think caching shouldn't be too bad, because the indy-vdr ledger implementation already has access to a wallet, so theoretically the cache could just be wallet records. My only suggestion around that, if you're taking suggestions, is that the caching should be optional, with some flag for it. Yeah, I mean, caching is, as they say, one of the two hardest problems in computer science: caching and naming variables. Caching is very opinionated; when do you invalidate the cache, how do you do it, and so on. So it will definitely be optional and swappable; you could use your own, or not use it at all if you'd like. So you were suggesting that the wallet could be used for caching, right? Yeah, it could be, for sure. I was thinking about doing that myself as a consumer. The indy-vdr ledger already has access to the wallet, so it may as well. I can't promise anything about the implementation in particular, but I'll definitely let him know, and I think he's just about to start on this piece. Yeah, there should be other implementations to look at as well. I think ACA-Py's usage of indy-vdr might put some caching on top, and of course vdrtools, I believe, does caching. Indy-vdr itself is doing caching? Oh sorry, vdrtools, I believe. Maybe I just misheard; vdrtools, correct. I guess that kind of concludes our call; it's just one minute after the hour we started, so a perfect sixty minutes. Anything else from your side, guys? Did you go to the OpenWallet meeting? Oh yeah, unfortunately I missed it, so I want to take a look at the recording and try to form an opinion and some thoughts about it. Yeah, cool.
Do you know if it was recorded? I believe so, yeah, because it was basically just a regular working group call, and all these Hyperledger calls are recorded, so it should be up there. All right, guys, it's Friday and the weekend soon. Have a wonderful rest of your day and enjoy the rest of the week. Thank you. Thanks. See you.