I saw you talk but I didn't hear anything, but that might just be me. How about for everybody else? Same. Nice background, Lynn. Thank you, I think Zoom provided it; I don't think I did anything special to get it. So I can't hear you, I think, Lance. Or hear me? I hear you now. Can you hear me? I hear Lance. I think I can't hear anyone. Yeah, it's you. I still can't hear you. How about now? Oh, now, yeah. Okay, good. Seems like we have a very small crowd today. Not sure if this is because of the new link or because there's some event today. Does anyone know? I'm not aware of anything. James had a late night last night. What time is this for you? It's quite early, right? Yeah, it was a little later, and then the time switch happened. It's nine o'clock now. I think seven for him, though. Ah, okay, yeah, that's quite early. So I think we're now always in conflict with time zones. We can probably see this at Hyperledger; it's now anchored in UTC-8. I'll update that one. Okay, cool. I think we can get started. Small crowd today, but that shouldn't... But a good one. Yeah, exactly. All right. Well, welcome, everyone, to the Aries JavaScript call of March 30. I need to remind you to abide by the Hyperledger Code of Conduct and antitrust policy. If you would like to add yourself to the attendees list, feel free to do so; I've shared the link in chat and can send it again. Is there anyone new here today that would like to introduce themselves? I think I recognize all names. All right. Then, let me see, status updates. Was there a Bifold call this week? Do you know, Ryan? Yes, it was a short one, though, mostly just catching up on some open PRs. And I believe the restructuring PR got merged in yesterday. Okay, yeah. I was happy and sad about that at the same time, because it's nice that it's being done, but it means I probably have to update my PR, and there are a lot of merge conflicts, I would expect.
So that's going to be fun. Let me see: restructuring of the repo to a monorepo. Merged. Cool. Okay. Any other status updates? Was someone at the Aries Working Group call this week? Yes, I was in there, even hosted it, trying to remember. Yeah, Stephen, right there. There was a presentation from Thomas. Yes, right, from Nessus. Yeah, Thomas Diesler did a presentation of his Nessus command line client. It's very nice: it's got autocomplete and aliasing and a bunch of other things, essentially for doing the W3C verifiable credential scenario that describes an airport scenario of traveling with a minor. It's a really nice demo and it's improving all the time. What he's building towards is DIDComm v2 and the DIDComm v2 protocols for issue credential and things like that. The demo has DIDComm v2 support, but also DIDComm v1; he shows interactions with ACA-Py as well. So it's a really nice demo, a super nice client, and it would definitely be very useful for general demonstrations and things like that. RootsID, well, I guess it's a good segue: essentially, at the April IIW we're doing a DIDComm v2 connectathon. He won't be at IIW, but we're going to do a recording with his scenario and use that scenario to frame some of our interactions, although most of what we'll be doing is just showing simple DIDComm v2 between agents. Anyway, we have a document that we've put together so far for the participants. Yeah, because I think you decided now to do it during the demo hour, right? Right. Yeah, we originally were going to do a RootsID-specific DIDComm v2 demonstration, but we couldn't decide whether to do a session or to have the connectathon during the demo hour, and we decided on the demo hour. So yeah, we'll have several participants essentially demoing DIDComm v2 connections. Cool. Yeah, cool. Would have loved to be there.
We're doing another demo, so we can't join, but it would be interesting to see the results of that. Because I saw some discussions in, I think, the DIDComm v2 channel, maybe in the DIF Discord or something, that interoperability is still quite hard to achieve. Yeah, yeah. Fabio Pinheiro, we've worked with him quite a bit recently on DIDComm v2 stuff, and there are parts of the spec where there are some interpretation-type situations. And then also, a lot of us are basing our implementations on the SICPA libraries, so if someone does their own implementation and they don't have the same assumptions, there can be roadblocks for interop. But that's exactly why we want to do this. The big push right now in DIF and Aries for DIDComm v2 is to have as many agents and services and mediators as possible interacting over DIDComm v2, to ferret out these interop issues that can occur, because DIDComm v2 is so, well, broad in the sense of everything that it could possibly do. And then you have additional protocols on top of that. So, any other things, Roto, maybe, that you want to note? No, that's fine. It's not a problem of difficulty, because fixing the problems is really easy; it's just agreeing on how we interpret things, like whether we send a from header or not. These kinds of things, where maybe someone reading the spec implements it one way and someone else implements it the other way. I need to look at all the little details in the spec that say optional or not optional; I'm trying to understand what the correct way to do things is. Yeah, for sure. And I think from Fabio's point of view, he's really diving deep into it and asking hard questions. So that's good, or challenging, I guess, pushing us all to improve, which is great.
And in some ways he was kind of like, oh, sorry if I'm messing up your connectathon. But from my point of view, this is exactly what we want to do, right? I want us to fail early and often and just improve. So I think we'll show quite a bit of good interop, and we are going to learn a lot too. Cool. Yeah, makes sense. You have to first go through the phase where everything doesn't work before you can get to a phase where everything does work, because you have done the work to test it with each other. As we all have learned, probably nothing is interoperable from the start, even with standards. Yeah, for sure. Cool. Okay. Anything else on the connectathon or IIW? All right. Then for the agenda today, I wanted to discuss some of the open PRs and issues for the 0.4.0 release, have a short discussion about JSON-LD credentials in AFJ, and, carried over from last week, continue the conversation about prioritization for 0.5.0. Any other topics we want to add to the agenda? I guess that's a no. All right. For this one, I just wanted to quickly go through it. I've had some people ask: what's left for AFJ 0.4.0? When can we release it? My answer is always "hopefully soon", because it's always difficult. But maybe we can go through the list and see what's really needed. Before you do that, can I just interrupt and ask a quick question? You may have seen, I think you heard from Anna, there's a PR in this morning related to the BBS module, right? That was a 0.3.3 problem. So my question is this. We've been doing some testing this week, or at least the ODS guys have been doing some wallet testing, and we have three changes we need to make against the 0.3.3 branch. We found a couple of bugs and so on.
So the question then is, if we want to do a patch release for 0.3.3, how would we go about doing that? Would we just pull out the 0.3.3 release and put a pull request in for those changes? What would you recommend? Yeah, because you don't want to do it for the upcoming 0.4.0 release. Let me think. Two of them are actually going to be bugs in 0.4.0 as well. What's the third one? Actually, possibly all three need to go in. I'll have a look, but is the easiest thing then just to put a pull request into main? Yeah, I think then we can include them in 0.4.0 or 0.4.1, because we don't really have the pipeline set up correctly to easily make patch releases for old versions. We had that with the 0.3 release, but it was a lot of overhead, and then we made the decision to make releases more often, and that obviously didn't work. But yeah, if you can do it against main... I can do that. But then my question would be, if they're going to run with 0.3.3 in their ODS wallet, how will those changes propagate back to the release they're using? Because my understanding is they need this done in 0.3.3 by tomorrow, so it's not clear to me how that will go in. Then, yeah, that's not possible. Then someone would need to put in the work to update the Aries Framework JavaScript pipelines and make sure we can have another branch to release from, like the current alpha release. So yeah. Okay. I'm not sure how we handle this then, because I don't think they're in a position to go to 0.4.0. Just as a word of explanation: they're testing JSON-LD credentials, issuing credentials, using 0.3.3. And as I say, we found a few little things in the code, mostly small. I think I sent you a message this morning. One of them was a question; the other two I definitely know are bugs that need to be fixed.
One was really trippy: we missed a "v" out of the JSON-LD VC detail format string, literally missed one character. That's obviously going to be wrong in all releases, so we certainly need to fix it for the 0.4.0 release, and I can do that; I could put a pull request in. It's quite small, but it's not clear to me how we get the fixes into the code base they're using. Are you saying they really need to go to 0.4.0? But that's not released yet, so it's not clear to me how we get a patch out to them. Maybe there's another possibility: we could create a branch for 0.3.x and put all these fixes there, I mean, review them and put the fixes there. And even if we don't have the pipelines to make an official release on npm, maybe that can be helpful for them to apply a patch locally with the patch-package package. I don't know. Of course it's not ideal, but it's something we can achieve soon. Yeah, that may work. I'm thinking, can you specify a path in a package.json? Because if so, in the branch we could probably start from this one; this is the latest one before we went to 0.4.0. If we can publish the built TypeScript, like the build files, to here, then you can also install directly from a GitHub repo, I think. But I don't know how that works with monorepos and sub-paths. Maybe it's possible. Yeah, we would need to at least go to 0.3.4-alpha.17, when at the moment they're on 0.3.3-alpha.9, I think, is my understanding. I think that's right, Tim, I'm not sure if you know. So if we patch into that, we could make a 0.3.4-alpha.18 and then just use that? I don't know how that's going to work; I've never done one of these GitHub patch releases before. Yeah. So that won't work automatically.
That's what I meant: somebody would need to look at the workflow, because we have a continuous integration workflow, and somewhere at the bottom, probably in the continuous deployment part, we have a release canary job. That has a script to get the next version bump, basically which version to release. But it does that by looking at the GitHub tags, so it will not make a 0.3.4; it will actually make a 0.4.0 alpha release, because that's also currently what's used to release a new alpha version on every commit. So I think, as Ariel said, to make a patch we could make a branch with the changes, but you would need to build and pack it locally, then add the file, like the tarball, with an npm file link in your repo, and use that instead of downloading it from npm. Okay. I don't know whether that's going to be good enough for what ODS want to do with it. I mean, Tim, this is going to be production code soon, isn't it? No, we're okay. Maybe what we'll do is just fix it locally and get these fixes into 0.4.0 as well. Yeah, that would be the easiest thing. I think just before you arrived, Tim, we were saying that these fixes would need to go into 0.4.0 anyway; they are bugs, at least two of them. One of them, I looked at my code earlier on, and it's already done. But two of the three that they'll want to fix would need to go into 0.4.0. So should we just do a 0.4.0 fix then? Yeah, let's not make it complicated for ourselves. So we're not under that time pressure, okay? All right, there you go, Tim. I'll put a patch request in for the current 0.4.0 release and then we don't need to worry. Yeah, okay. I appreciate it.
Yeah, I think we want to have something a bit better, like release managers for older releases, but it's always a lot of overhead, and we just need to revamp the pipeline a bit so it's easier to do patch releases when it's really needed. Sure. But it looks like on this occasion we can just put them into 0.4.0, so that's fine. But when you start getting multiple production releases, and it gets bigger and bigger, with loads of people, it's going to quickly start to get very difficult to manage, isn't it? Because you've got various releases of AFJ being used in production all over the place, and you're going to need to be able to patch them on that release. But I guess that's not what we need to do now. So okay, I'll look at 0.4.0 then. Thank you, Tim. Cool. Okay, perfect. Let's hope we can just release 0.4.0 really soon. So, if we look at the list of what's still out there. I think maybe this is a good one to discuss: we had a default cache implementation which worked well for mobile devices but not for server-side environments. So I made a PR to change it to an in-memory cache by default, which in mobile devices is not really useful, because whenever you close the wallet, you lose your whole cache. In mobile devices, you probably want the single-context storage cache, which is a cache per wallet where we can store things like resolved DIDs, mostly the DIDs and which ledger is associated with them. But Ariel made a comment which makes sense: maybe it's not the best to move away from the single-context cache, especially in mobile environments. So do we just want to keep the single-context storage cache as the default? For a single-tenant server deployment, it also just works fine.
It's just that it breaks when used with multi-tenancy. So by default, if you add the multi-tenancy module, you'll have a broken state for the agent. So we either need to be a bit smarter, so that when you use the multi-tenancy module we automatically switch the cache implementation. But I think it's also quite standard for things to just say: by default, we have a very simple in-memory cache, and if you want something more sophisticated, you can either implement your own, with Redis, for example, or we provide another implementation for mobile devices. But yeah, I'm also fine with just keeping it. I don't know if somebody has opinions on this. Ariel, do you have anything to add here? My main concern is that, I'm not sure, but I would say that 99% of AFJ users are using it in mobile environments or single-tenancy servers. So if we are going to modify something because of that particular use case... maybe it can change in the future, but for now, I think it's like that. And I agree that it's technically correct to have this default in-memory implementation. But we have to make sure that when somebody upgrades their AFJ in a mobile environment, they will use this single-context storage cache instead of the in-memory one. Because you know what will happen: people will tend to use the default one, because it's easier to set up, and also because they have a lot of other things to think about when migrating to AFJ 0.4.0. And they will see that it works: the agent runs and it's fine. And by the time they figure out that they were using this in-memory cache, it's too late; I mean, they will already be in production. That's why I made that comment. Yeah, I think that makes sense. Currently, in multi-tenancy it will throw an error and you can just fix that, while you won't notice anything in mobile environments if you're using the in-memory cache; it's just that every time you close the app, it's completely gone.
And yeah, most people use it in mobile environments, so I think I agree. In that case, I think we can just close this PR. Maybe we can revisit it at a later stage and have a more advanced way to pick the default, for example, use in-memory on servers and the storage cache elsewhere. But for now we just skip it. Is it going to be a heavy lift to actually create an interface so that you can do either, to do what you're talking about? Or is this simple enough that we can do both, if that makes sense? You now have a cache module in AFJ where you provide the cache implementation, and that's an interface with a few methods like set, get, and remove, something like that. And we have two default implementations provided in AFJ. So setting it up and switching between the two is easy; implementing your own is a bit more involved, but you would probably be able to link it to a Redis cache within an hour, for example. Okay. So, to clarify my understanding: someone's not blocked. If they want to do multi-tenancy, they can do so. We're just talking about what the default is, and we're thinking of keeping the default the same. Yeah. Where we use multi-tenancy, for example, we currently define the cache module and specify the in-memory or Redis cache as the backend. It's swappable; it's just about the default, what's used if you don't define anything. Cool. I agree with the thought process of closing the PR then. Thank you for explaining. And do you need a default? Because if you don't put a default, you force all implementers to pick one. Yeah, I agree. Since the large majority of users are going to use the persistent one, I think it makes sense to make that the default versus the in-memory one. Okay.
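The cache module described above, an interface with a few methods like set, get, and remove, behind which implementations can be swapped, could look roughly like this sketch. This is an illustrative TypeScript sketch only; the interface and class names here are hypothetical and differ from AFJ's actual API (which, for example, passes an agent context into the cache methods).

```typescript
// Hypothetical sketch of the swappable cache interface discussed above.
interface Cache {
  get<T>(key: string): T | null
  set<T>(key: string, value: T): void
  remove(key: string): void
}

// Simple in-memory LRU cache: a reasonable server-side default,
// but lost whenever the process (or mobile app) restarts.
class InMemoryLruCache implements Cache {
  private entries = new Map<string, unknown>()
  constructor(private limit: number) {}

  get<T>(key: string): T | null {
    if (!this.entries.has(key)) return null
    const value = this.entries.get(key) as T
    // Re-insert to mark the entry as most recently used
    this.entries.delete(key)
    this.entries.set(key, value)
    return value
  }

  set<T>(key: string, value: T): void {
    if (this.entries.has(key)) {
      this.entries.delete(key)
    } else if (this.entries.size >= this.limit) {
      // Evict the least recently used entry (first key in the Map)
      this.entries.delete(this.entries.keys().next().value as string)
    }
    this.entries.set(key, value)
  }

  remove(key: string): void {
    this.entries.delete(key)
  }
}
```

A storage-backed variant for mobile would implement the same interface but persist entries to wallet storage, which is why swapping the default is a one-line configuration change rather than a code change.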
Yeah. I think we could choose not to require a default, but there are already quite a few things you need to configure, so if we can provide a default that works for most people, that's probably good. All right, then I'm going to close this PR and this issue. We can revisit it when needed. And then we have one more task removed from the to-do list. All right. This one, maybe we want to postpone to a later release. I think that's fine, because it already works that way right now. We discussed it a few weeks ago: currently we need to resolve a DID from the ledger to know the contents of the DID document, because we don't actually store the DID document when we create it, which is a bit weird. So ideally, I think we need to start storing the DID document when we create it, so we can at least query which keys are in the DID document and have a local state of it. It can get out of sync, but that's something you can now solve using the import method: if you modify the DID document outside of AFJ itself, you'll also have to import the new state into the agent. For now it's just less efficient, because every time we need to know a key in the DID document, we need to resolve it from the ledger, which adds overhead. But I think it could be postponed until after the 0.4.0 release. I tend to think that getting 0.4.0 out sooner rather than later is better. The only consideration on my mind is that if people are working on offline functionality, this would be a problem for that, right? Yeah, but in theory, if you use that DID, in most cases the other party would have to resolve it anyway, and this isn't an issue with peer DIDs, which you would probably use often in offline exchanges, I think, right? Got it. That is a really good point. Yep.
We're currently working on offline exchange using Bluetooth, which is finally working; maybe I can ask someone to give a demo of that soon. There we use peer DIDs, and that works fine. The only thing we have to do, which will be a lot easier with 0.4.0, is have the schemas and credential definitions in the wallet beforehand, because there's no internet. So we have a predefined set of schemas and credential definitions, and those we can pre-cache and have in the wallet for the exchange. Yep. Cool. Okay. Then I'm not going to close this issue, because it's still relevant, but I'm going to remove it from the 0.4.0 milestone. And then, other things. Oh, this is the wrong repo. Oh no, I closed it all. Let me see. Milestones, 0.4.0. These have PRs open, so those should be quite simple. Update master secret to link secret. Is Berend on the call today? He isn't. Okay. Ariel, do you know? I think there was a PR to update master secret to link secret in anoncreds-rs, right? And that's the thing we still need to do here. Yeah. But isn't that only for the API? Because the credential request also includes these fields, I think we need to update those. So, not sure if it's here... it's not in this one. Because if you call create credential request, it will return an object with the field. So I think this needs to be updated, right? To support the new structure, that needs to be released. Yeah. So, if you see... okay. So that's kind of linked to this. I think the link may have just gone to the commit, by the way, when you clicked off. Oh, that's not what I want. That's what I thought. Yeah, thanks. Right. So: release a new version of anoncreds-rs, update field names in AFJ.
And then I think we'd probably handle the difference between field names in the Indy SDK, which still uses master secret: extending the migration script to update Indy SDK-generated request metadata to use link secret, and from now on, when returning the credential request metadata from the SDK, changing the keys. All right. And yeah, I think we already have a migration script to update everything to AnonCreds, so we can just change the fields; it shouldn't be too much work. Cool. Is there anyone who wants to pick up this task? Yeah, I think I can do it. But before that, we need the release of the new anoncreds-rs. Yeah. Are you going to do it? I think we can make a new release; everything is ready, so I can pick that up. I just have to update the versions and then we can make the release. Okay, awesome. Okay, then what's left in the milestone? All right, then I think there were a last few in that one. Yes, I want you to explain something to me about that one, because you ignored me in Slack. I haven't had any time to work on AnonCreds this week. Yeah, go ahead. Well, I tried to start on this implementation of the non-revoked interval overrides for anoncreds-rs, but I wanted to be sure I was properly understanding the problem before doing so. So my question, as I said there, is: how should I know whether I have to provide the overrides to anoncreds-rs or not? Yeah, that's a good one. Did the answer from Stephen help or not? Yes. It was good to understand the problem better, and how in general the application should behave. But I don't know how to handle it with regard to that particular overrides parameter that the anoncreds-rs module is expecting.
He says that we have to reject it if the proof timestamp is outside of the from or to. But my understanding is that, internally, the one who will decide whether the proof is correct is the anoncreds-rs module; I mean, we are not going to do that check in the AFJ verifier service. Yeah. So, my brain always hurts when we bring up this topic again. I understood it a few years ago and it's not as fresh in my mind anymore. But how I understand it, or at least what we need to do, is: we need to check whether the non-revocation proof timestamp that the prover supplied is within the from and to range. If so, we can just pass it to the anoncreds-rs library as is, and we don't need to provide any overrides. If it is outside of the range, we need to check whether it is valid that it is outside of that range. And that is by querying, I think, as Stephen said, the from, and seeing if you get the same timestamp back. If so, you can pass that as an override, and you say: I know it's outside of the range, but the range is, for example, set for five minutes yesterday, and the timestamp that is provided is in the past, but it is actually the last one published to the ledger. So this timestamp is the valid one for this specific time range, even though it's outside of the range, because the timestamp is when the status list was published, while the status list has an active range, from a date to a date. And if that from and to lies within that active range, then we need to provide the override. Okay. That will not be done in two lines, of course; it will have some logic in AFJ, I mean, right? Yeah. Yeah.
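The decision logic just described, accept the prover's timestamp if it falls inside the requested interval, otherwise ask the ledger whether it was nonetheless the status list active for that interval and, if so, pass an override, can be sketched like this. The helper and field names are hypothetical, and the real anoncreds-rs override parameter has a different shape; this only illustrates the branching described above.

```typescript
interface NonRevokedInterval {
  from: number
  to: number
}

// What the verifier must decide before calling anoncreds-rs: either the
// timestamp is inside the interval, or we supply an override saying the
// prover's (older) status list was still the active one for that interval.
interface OverrideDecision {
  valid: boolean
  override?: { requestedFrom: number; overrideTimestamp: number }
}

async function decideOverride(
  interval: NonRevokedInterval,
  proofTimestamp: number,
  // Hypothetical ledger lookup: returns the publication timestamp of the
  // status list that was active at the given moment.
  fetchActiveStatusListTimestamp: (at: number) => Promise<number>
): Promise<OverrideDecision> {
  // Inside the requested range: pass through, no override needed.
  if (proofTimestamp >= interval.from && proofTimestamp <= interval.to) {
    return { valid: true }
  }
  // Outside the range: ask the ledger which status list was active at
  // `from`. If it is the one the prover used, the timestamp is still
  // valid for this interval, and we tell anoncreds-rs via an override.
  const active = await fetchActiveStatusListTimestamp(interval.from)
  if (active === proofTimestamp) {
    return {
      valid: true,
      override: { requestedFrom: interval.from, overrideTimestamp: proofTimestamp },
    }
  }
  // Otherwise the prover used a stale (or future) status list: reject.
  return { valid: false }
}
```

This mirrors the fail-by-default behavior discussed later in the call: without an explicit override, a timestamp outside the interval makes verification fail.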
Because I think the complexity comes when we have these nested non-revoked fields. In the proof, we receive an identifiers object that has some credential definition IDs and timestamps, so we have to loop over all the requested attributes and predicates to see which one... Do you still hear me? Yeah, I don't hear Ariel anymore. Okay. Maybe it's my internet or something. Ariel, we can't hear you anymore; I think we lost you in the middle of the discussion. I guess one of my thoughts, though, Timo, from a logic perspective, is: do you actually have to compare the timestamp provided by the prover? Meaning, can you just rely on AnonCreds to do its validation, that that's the most recent ledger transaction posted for the registry, rather than trying to determine, okay, is it in this range, and if it's outside, we need to go double-check the ledger? Can we just rely on the ledger or AnonCreds to do it? But I think the limitation is that the AnonCreds library doesn't really know about the ledger; it just knows about objects. So it wouldn't really know whether it's the latest or the correct one. The only thing it can do is check the interval, the from and the to, and whether the provided timestamp is between them; if it's outside of there, it doesn't know whether that timestamp is valid for that range. Only the calling application knows that, because it has the ledger, the AnonCreds library, and the proof, and together you can gain that knowledge. Otherwise we would need to allow anoncreds-rs to call the ledger, which is the thing we wanted to avoid: AnonCreds itself is now just crypto. So yeah, or do you have a suggestion on how we could approach that and make it simpler?
Because I really don't like that this is not implemented in anoncreds-rs, but it's not really possible to implement there without having a binding to the ledger, as far as I know. I think you might be right there. If I think about it, though: because the from and the to, given the guidance for Aries, are always going to be the same value, it's highly unlikely that the transaction is going to be within the from and to. It's always going to be before or after in 99% of cases. And the to, you might be able to just resolve with logic; that one might be easier to deal with. If it's before the from, you have to do a ledger check; you have to get the transaction. Yeah. The issue being, as you're saying, that we now need to handle this on the AFJ side; we can't rely on the Indy SDK to do it, because we split it up, and AnonCreds shouldn't be able to go talk to the ledger. Well, the Indy SDK never implemented this; it's always been something the application layer should have checked, because the Indy SDK implementation just returned verified: true. It didn't really check if it was within the range. So basically we added this because the way it was implemented in the Indy SDK is not really sufficient: it was very easy to not do that check, because nobody knew about it. The way we changed it now, it will fail: most verifications will fail, and you need to provide the override to be able to pass. Which means you're now in a position where it fails by default if it can't know for certain, which I think is better, because in most implementations it just means that the non-revocation check is basically, well, not useless, but it doesn't actually check the time if you use a timestamp outside of the range. Really, it didn't actually go and query the ledger previously?
No, because in the Indy SDK, even though it supported both ledger and non-ledger, the calls were separated, and for the verify proof method you don't provide a ledger, like a pool handle. So it's actually kind of a big issue in the SDK. And this is why we thought we'd better make it more complex to verify credentials and have it fail by default if it can't know for certain, rather than keep the behavior we had before. It does make the verification process, which is already really complex in AnonCreds, more complex; but it's also actually doing the validation that we thought was being done previously. Yeah. Okay. Well, the complexity is gross, but I am glad that you've made that change. It's complicated for us to figure out now, but it's good that we are going to do so. So when you say that the ledger needs to be called, it's asking the ledger for the revocation registry entries, right? Is that what I need to retrieve? Yeah, that needs to happen; it should be that way. Yeah. And I think the nice thing about this with the new AnonCreds implementation is that we implement it once, and every ledger integrated into AFJ will have it out of the box. So it's not that, having implemented it once, it only works for Indy: it will work for Cardano, it will work for cheqd. Yeah. Okay. I got disconnected for some time, but I think I understood. But something I was wondering: when we create the proof request, we are querying the ledger to get the revocation status list and everything, right? So at that moment, when we create the proof request, we can use the timestamp given by the ledger instead of the one we wanted. That one will always be correct, because it's the one from the ledger. I don't think we query the ledger when we create a proof request; I think the proof request is just a JSON object. Yeah, but I mean in AFJ, for instance. Do we do that?
I may be wrong, but I don't think you have to query the ledger when you create the proof request. When you create the proof, obviously you do. But maybe I'm misunderstanding; I think there are some steps where we are querying the ledger, for instance to get the credential definition and all that stuff. To be specific, according to my understanding, when requesting proof of non-revocation you specify just a timestamp. You don't actually do any lookup of the revocation deltas or updates. The most you would do is get the registry, but you don't get all the associated deltas and updates. Yeah, but when you talk about deltas, that's the way it was defined in Indy; now, in AnonCreds, we have the revocation status list. So in my opinion, if you are going to ask for a non-revocation proof, it would be better if you provide a valid timestamp. "Now" is a valid timestamp. Yeah, but if you create a proof request, you can ask the ledger what the most accurate timestamp is relative to the one you wanted. That way, you ask the prover for a timestamp that is the last valid one at the time you were requesting. Yeah. It's like putting the effort on the verifier, right? But if we're going to do that, we need to put it in the AnonCreds specification. Maybe, yes. I think there won't be any confusion if you do so. Also, in the practical case, the verifier is usually more powerful than the holder, because the holder is usually a mobile device. Yeah, but the holder also needs to query.
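The idea floated here, anchoring the proof request on a timestamp the ledger actually has, could be sketched like this. This is a sketch under assumptions: the proof request shape loosely follows the Indy/AnonCreds JSON structure, and `LatestTimestampLookup` is a hypothetical ledger query, not a real API.

```typescript
// Minimal proof request shape for the example (loosely modeled on the
// AnonCreds proof request JSON, trimmed down).
interface ProofRequest {
  name: string
  version: string
  nonce: string
  requested_attributes: Record<
    string,
    { name: string; non_revoked?: { from: number; to: number } }
  >
}

// Hypothetical ledger lookup: returns the timestamp of the most recent
// revocation status list posted at or before `at`.
type LatestTimestampLookup = (revRegId: string, at: number) => number

function buildProofRequest(
  revRegId: string,
  desiredTime: number, // the wall-clock moment the verifier cares about
  latestLedgerTimestamp: LatestTimestampLookup
): ProofRequest {
  // Anchor the interval on a timestamp that actually exists on the ledger,
  // instead of an arbitrary wall-clock time that will never match exactly.
  const ledgerTime = latestLedgerTimestamp(revRegId, desiredTime)
  return {
    name: 'proof-request',
    version: '1.0',
    nonce: '123456789', // would be randomly generated in practice
    requested_attributes: {
      attr1: {
        name: 'name',
        non_revoked: { from: ledgerTime, to: ledgerTime },
      },
    },
  }
}
```

This moves the ledger query to request-creation time on the verifier side, which is exactly the trade-off the discussion then turns to.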
So you won't relieve the holder from doing the query. No, but all the work we have to do in AFJ to provide the revocation overrides is something that will add overhead. Yeah, but the query to the ledger... Yes, but there will be more queries to the ledger on the holder side, because we have to do the regular one, let's say, and also the ones needed to provide these overrides to the AnonCreds library. It's the verifier side that needs to provide the overrides. You're right. Yeah, you're right. And if we move it to the request side, it basically means you need to do the work then; otherwise, you need to do it during the verification process. And you could pre-cache those at some point: if you already know which is the latest, or if it won't change, there's a way you could pre-cache that and just provide overrides by default or something. But I think it doesn't incur extra queries to the ledger; it's just a bit more logic to see which timestamp range is valid for which timestamp from the ledger, for the deltas and the revocation status list. Yeah, you're right, it's the same actor who needs to make the effort. But another question I have is about the Indy SDK case, because I'm planning to do most of these changes in the AnonCreds and legacy Indy proof format services, before calling the verifier service. So in the case of the Indy SDK, do we have to throw an error if we're providing an override, or what should we do? Interesting, yeah. Ideally, we handle it in the anoncreds-rs service, so it's not specific to the format.
But yeah, I think we should either throw an error in that case in the Indy SDK one, or we should do the verification and then check it afterwards manually where there are overrides, because the Indy SDK just won't do the validation, so it will actually succeed. Yeah. So in that case it won't be an issue, and we could choose to fix the Indy SDK issue in AFJ if we want. Okay. Yeah. Would there be any concern with doing so, in terms of introducing inconsistency between AFJ and ACA-Py in relation to the Indy SDK? I'm not sure if that's an issue, but we would be introducing a difference in behavior. Yeah. I think because it has some implications for security, I wouldn't consider that an issue. Probably ACA-Py should also make the same change. Yeah, I agree with you. I don't think it's an issue; the inconsistency is okay. But I'm thinking: I like the idea of making the request accurate, making sure that you have a correct or valid from. There is a little bit of an issue with the request side carrying the correct timestamp, though. The way I see it, as a verifier, what I want to say to the prover is: I want to know whether your driver's license was valid two weeks ago, when you had your accident. It's a very different thing to display to the user: I actually want to know whether your driver's license was valid five weeks ago, when the last delta was posted to the ledger. I think that could be solved with AnonCreds 2.0, or you could provide a bit more information with the non-revocation interval, saying: this is the timestamp from the ledger that I want to use, and this is the one I'm trying to verify against, if that makes sense. But with AnonCreds v1, I think that misses some of the nuance. Okay, we're mostly there, fixing small bugs. Those are the things we're working on right now from the discussion.
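The verifier-side override the discussion keeps returning to might be constructed along these lines. The shapes are illustrative, loosely inspired by (but not identical to) the override input that anoncreds-rs accepts at verification time; the field names and `buildOverride` helper are assumptions for the example.

```typescript
// Maps a requested interval boundary to the ledger timestamp the verifier
// is willing to accept in its place.
type IntervalOverride = {
  revocationRegistryId: string
  requestedFromTimestamp: number
  overrideRevocationStatusListTimestamp: number
}

// Build an override when the status list the prover actually used was
// posted before the requested `from`, and the verifier has confirmed on
// the ledger that no newer list exists inside the interval.
function buildOverride(
  revRegId: string,
  requestedFrom: number,
  usedTimestamp: number, // timestamp of the status list the prover used
  newerListExistsInInterval: boolean // result of the verifier's ledger check
): IntervalOverride | undefined {
  if (usedTimestamp >= requestedFrom) return undefined // no override needed
  if (newerListExistsInInterval) return undefined // override would be unsound
  return {
    revocationRegistryId: revRegId,
    requestedFromTimestamp: requestedFrom,
    overrideRevocationStatusListTimestamp: usedTimestamp,
  }
}
```

The key safety property is the second guard: an override is only sound when no revocation entry was posted between the timestamp the prover used and the interval the verifier asked for, which is precisely the ledger check the new fail-by-default behavior forces the verifier to perform.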
So, okay, cool. Thanks everyone, and I'll see you next week. Actually, next week I won't be here, so if somebody else can host, please reach out to me and then you can host. Otherwise, I'll find somebody from my team to host. Thanks, good to see you all.