All right, welcome to the June 27th, 2023 Aries Cloud Agent Python user group meeting. Big agenda, lots of things to go over, including probably the main topic: getting AnonCreds Rust into ACA-Py. The meeting is being recorded, so we will be posting it afterwards. In the meantime, a reminder that this is a Linux Foundation Hyperledger meeting. Linux Foundation meetings have an antitrust policy in place, and the Hyperledger Code of Conduct is also in effect; please be good to one another. Anyone who is new to the meeting and wants to introduce themselves, talk about what they're doing, make an announcement, or add something to the agenda, please raise your hand or just begin speaking, and we'll go in turn. All right, nothing to announce. It is summer season getting started, so there's not going to be as much going on. A reminder about documentation: I have an idea for some more documentation to talk about; hopefully we'll have time, we'll see. On the agenda: the ACA-Py 0.8.2-rc1 release has been completed. It was just published a few minutes ago, so it's out and available; it's just a minor update. There is one more PR that we're considering merging prior to the final 0.8.2. I'll wait a little bit longer, but it may not make it, so we'll see how that goes. If we decide not to include it, then as soon as some smoke testing has been done on 0.8.2-rc1 we'll complete 0.8.2 and release it. If there are any other PRs that anyone desperately wants in 0.8.2, or anything to highlight, please let us know now. Oh, we have another AI assistant in the crowd; I wondered about that. Well, I guess we'll leave it. Embedding AnonCreds Rust in ACA-Py: I wanted to give an update. We had a meeting yesterday with BC Gov and Indicio to talk about where we are with the Code With Us work, so I was going to cover that.
Daniel, I had a presentation that I thought I'd go through to summarize our meeting; please jump in if there's anything you see that's incorrect or needs adjustment, but in the meantime I'll go through it. We've had a goal for a little while now to get anoncreds-rs, the Hyperledger AnonCreds implementation, into ACA-Py. This covers where we are and where we need to be. We posted a Code With Us; it was awarded to Indicio, and that team has been working very hard and has done a lot of work. The PR was submitted for merging, and we've decided to merge it into a development branch; it's not quite ready for merging into main. This is extremely likely to be a breaking change, and so we're now talking about exactly what that means and how much of a breaking change it will be. The maintainers recommended making it a breaking change: we were probably holding on to too many things trying to keep ACA-Py backwards compatible in certain areas, and this is likely a place where a significant change would actually be helpful to everyone rather than a hindrance. Obviously we want to keep things as backwards compatible as possible, but there comes a time when it's a bigger problem to maintain that than to move forward with a breaking change. Here are the things we're proposing be breaking and non-breaking as we go forward. We want to minimize the changes to the issue-credential and present-proof protocols, which are the interface between an issuing agent and a holder, and between a holder and a verifier. We want those protocols to remain the same, so basically hold your nose and keep the name of the Indy attachments.
So even though it's "Indy" attachments versus "anoncreds", we would retain that name and basically paper over any difference between the use of legacy Indy identifiers and other identifiers when issuing credentials or presenting proofs. That would remain the same, so that if we interacted with an older version of ACA-Py there would be no difference; that is a big interface that we want to keep unchanged. We think it would be best to drop the legacy AnonCreds admin APIs and add new AnonCreds APIs. Basically, that means dropping the schema and cred def endpoints, and most, perhaps all, of the revocation endpoints. Those would go away and we would add a new AnonCreds API: instead of a POST to /schemas, a GET on /schemas, or a GET on /credential-definitions, there would be /anoncreds/schema and /anoncreds/credential-definition endpoints. That would require the recoding of some controllers to upgrade to the new release. So that's a big change: it adds the AnonCreds API endpoints. Revocation I want to talk about separately; we'll get to that in a bit. The idea would also be to enable upgrading an existing deployment's storage to the new breaking-change version, so that when an ACA-Py instance and its controller are updated together, there would be a way to upgrade the storage so the instance could function with the new release. In doing that, there is unlikely to be any special support for a multi-tenant ACA-Py deployment: basically, you would not be able to pick and choose which tenants get updated to the new storage.
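Since the admin API renaming just described would require controller changes, a small shim could ease the upgrade. The following is a hypothetical sketch: the legacy routes and the proposed /anoncreds/... replacements are illustrative, drawn from the discussion, not a final ACA-Py API.

```python
# Hypothetical path-translation shim for controllers migrating from the legacy
# Indy admin routes to the proposed AnonCreds routes. Route names here are
# illustrative, based on the discussion, not the final ACA-Py API.

LEGACY_TO_ANONCREDS = {
    "/schemas": "/anoncreds/schema",
    "/credential-definitions": "/anoncreds/credential-definition",
}

def migrate_path(path: str) -> str:
    """Map a legacy admin route (plus any sub-path) to its proposed AnonCreds
    replacement; raise KeyError for routes with no replacement, such as the
    revocation endpoints slated to be dropped."""
    for legacy, new in LEGACY_TO_ANONCREDS.items():
        if path == legacy or path.startswith(legacy + "/"):
            return new + path[len(legacy):]
    raise KeyError(f"no AnonCreds replacement known for {path}")
```

A controller could route all its admin calls through such a function during the transition, so dropped endpoints surface as errors rather than silent 404s.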
All controllers would be updated together to the new version, and storage for all wallets would be updated during the deployment of the new version. The ramification is that if you had a multi-tenant instance with different controllers for the different wallets, all of the controllers would have to be updated together, not individually. If all your tenants use the same controller software, that's easy, and no different from a single-tenant deployment; but if you actually had separate controllers for different tenants, all of them would have to be updated at once. Does that make sense? Does anyone have any questions on that? I don't know how many people in the community would have that issue, but be aware of it. We do want to retain the endorser implementation and, as much as possible, hide endorsers from the controller, so that the controller doesn't have to do anything: endorsing would take place within ACA-Py, and the endorser would just be configured for an agent, "I'm going to use an endorser, here's the endorser I'm going to use", and things would just happen without the controller being concerned with it. Revocation: I wanted to get into a bit of the history of revocation, for those not aware of it, and then the plan going forward. The first implementation of revocation required the controller to do all of the work whenever revocation was needed.
Once revocation support was declared in the cred def, every time a new revocation registry was needed the controller would have to say, "hey, go create a revocation registry"; when we wanted to publish new revocation registry entries, the status updates, the controller had to manage that publishing. Hardest of all was tracking when a new revocation registry was needed, and that was the big one that triggered the realization that this was not going to work: it was too painful for the controller. So we moved away from that. In addition, when endorser functionality came in, that too essentially had to be done by the controller, and again it was difficult to manage. So where we are today, with the second implementation of revocation, and what we want to continue, is that ACA-Py does the work and the controller only does the things that trigger ACA-Py to do things. The impacts on the controller are: it declares support for revocation during credential issuance setup, so every time it creates a cred def, a new credential type, it says whether revocation will be supported and, if so, how big the revocation registries will be, and that's it. The controller needs to track the revocation ID for credentials when issued, the identifier that is necessary when a credential is to be revoked. When necessary, the controller revokes credentials using that revocation ID; when it does, ACA-Py tracks the state of the revoked credentials, the fact that they've been revoked. And finally, the controller triggers when publications of revocations are done.
And again, this is per credential type: for a given credential type, I want to publish the revocations, and that could mean publishing to multiple revocation registries that have been active at various times and have a backlog of revocations to publish, which means multiple transactions may need to be written. All of that detail beyond these four trigger points is managed within ACA-Py, and we want to keep it that way, so the controller's life is simplified to supporting only what's necessary for revocation. Now, when we implemented that second implementation, we kept the admin API for both modes: we kept all of the revocation endpoints so that the controller could still manage everything itself. Probably that wasn't the best idea. The plan now is to keep only the "ACA-Py does all the work" mode. It's much easier for everyone, for ACA-Py and for the controller, to say ACA-Py manages the revocation work and the only inputs the controller has are these actions. I would point out that these activities are the same whether you're using AnonCreds or something like Status List 2021 or some other revocation mechanism. So this could actually work for any credential format, which is why I was suggesting the revocation endpoints may still exist: they're generally useful. Basically, the only two things you do with the revocation endpoints are revoke credentials, given an ID, and, from time to time, publish revocation updates out to some other place. That could be done regardless of what revocation scheme, or even what credential format, you're using. I don't know if we'll get that generic, but it is something to consider and think about as we go forward with this.
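To make the simplified model concrete, here is a sketch of the only four revocation touchpoints a controller would keep under the "ACA-Py does the work" approach. The endpoint paths and payload fields are assumptions for illustration, not the final admin API; each helper simply builds the request it would send.

```python
# Sketch of the four controller revocation touchpoints under the
# "ACA-Py does the work" model. Endpoint paths and payload fields are
# assumptions for illustration only; each helper returns the request
# (method, path, body) a controller would send to the admin API.

def create_cred_def(schema_id: str, support_revocation: bool, registry_size: int = 1000):
    """1. Declare revocation support (and registry size) when creating a cred def."""
    return ("POST", "/anoncreds/credential-definition", {
        "schema_id": schema_id,
        "support_revocation": support_revocation,
        "revocation_registry_size": registry_size,
    })

def record_issued(cred_ex_id: str, issue_result: dict, tracked: dict):
    """2. Track the revocation identifiers returned when a credential is issued."""
    tracked[cred_ex_id] = (issue_result["rev_reg_id"], issue_result["cred_rev_id"])
    return tracked[cred_ex_id]

def revoke(cred_ex_id: str, tracked: dict, publish: bool = False):
    """3. Revoke a credential using its tracked revocation ID."""
    rev_reg_id, cred_rev_id = tracked[cred_ex_id]
    return ("POST", "/anoncreds/revocation/revoke", {
        "rev_reg_id": rev_reg_id,
        "cred_rev_id": cred_rev_id,
        "publish": publish,
    })

def publish_revocations():
    """4. From time to time, publish pending revocations; ACA-Py handles registry
    management and any multiple ledger transactions that result."""
    return ("POST", "/anoncreds/revocation/publish-revocations", {})
```

Everything else, registry creation, rotation, and transaction writing, stays inside ACA-Py.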
And then there's a bunch of cleanup to be done on PR 2276, which is what was merged yesterday into a development branch. Some issues were raised in the anoncreds-rs implementation for a couple of things. No effort has been made yet to eliminate admin APIs and endpoints that are no longer needed; I think it would be wise to do that, at least the elimination of the endpoints, and then look at what code needs to be eliminated as part of it. We do need a migration plan for issuers, holders, and verifiers, which means the storage migration scripts, depending on what the new storage looks like, what needs to change, and what additional data needs to be tracked or generated during an update. And finally, the Indicio team has a pretty significant all-in-one test that is being used as a standalone test for all of the work; that test needs to be added to the core integration tests, at least that one specific test and probably others. I think those are the major elements; there are a couple more. Oh, the big one: "ACA-Py does it all" revocation. We want to get rid of the endpoints that allow a controller to manage revocation all on its own. To simplify the endorser support, ideally some sort of plugin that says, "I'm going to call the endorser on pretty much every publication transaction": if an endorser is configured it will be used, and if not, a no-op of some kind is called. So, put that in. And then the other thing is, in addition to legacy Indy support, support for did:indy and did:web implementations. did:indy would be used for instances of Indy that are able to support did:indy, the indy-vdr capability, and did:web should be a pretty trivial implementation, basically a transformation: being able to do posts
to URLs based on what the did:web identifier is, which basically results in files being loaded to a web server. So there would be an implementation of some sort of registrar for did:web, which would need authentication to be able to post a file to a web server, and then a resolver that simply converts a did:web DID URL into an HTTP URL. So that's the cleanup. And, nope, looks like there's one more: Aries RFC changes, the Indy attachment I mentioned earlier. We will retain the Indy attachment and add one where "indy" changes to "anoncreds". This needs to be added to the Aries RFCs, so that we deprecate the Indy attachment and add the AnonCreds one. The interesting thing is that agents that don't support the AnonCreds format would, on receiving an attachment in the AnonCreds format, send a problem report: "hey, I can't handle this". Likely it would be fairly easy to say, "oh, I got this AnonCreds attachment, but that's the same as the Indy one, so I can use it", and vice versa. And the AnonCreds format could also be the W3C format attachment: if we were to complete the formalization of the AnonCreds data integrity proof, we could use the existing W3C format that is used for JSON-LD for AnonCreds as well. That's another possibility that is kind of interesting. And I think that's all I had; I don't see hands raised or questions, so there we go. Tim, I can't hear you. Tim, go ahead. Thanks, is that better? There we go. Yeah, I'm really excited about the revocation changes. In our experience trying to work with issuers, we gave up trying to explain to them how to manage their revocation registries; we'd pretty much concluded we needed to write something ourselves to manage it, so this is really good. Just one real quick note: you had "renovation" in the title, not "revocation". I will fix that. But anyway, really good.
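On the did:web resolver mentioned a moment ago: the DID-to-URL conversion is a simple, fixed transformation. A minimal sketch, following the published did:web specification rules (the method-specific identifier maps to a domain, optionally with a percent-encoded port, plus a path, with the DID document at did.json):

```python
# Minimal did:web resolver transformation per the did:web specification:
# convert a did:web DID into the HTTPS URL of its DID document.
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """Convert a did:web DID into the HTTPS URL where its DID document lives."""
    if not did.startswith("did:web:"):
        raise ValueError(f"not a did:web DID: {did}")
    parts = did[len("did:web:"):].split(":")
    domain = unquote(parts[0])  # %3A in the domain segment encodes a port colon
    path = "/".join(unquote(p) for p in parts[1:])
    if path:
        return f"https://{domain}/{path}/did.json"
    return f"https://{domain}/.well-known/did.json"
```

The registrar side is essentially the inverse: authenticate and upload did.json to that location on the web server.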
Thank you. That's interesting. So you were actually trying to use the controller to do the revocation, all the pieces? Yeah, we tried. Basically, in one of our pilots, for the early tests we were managing it ourselves; then we tried to get the issuer to write it into their controller, and they pushed back so heavily: "it's not our business, why are we doing this?" And I agreed, so we dropped it. Yeah. We do have this fully implemented and we use it all the time, so this idea that ACA-Py handles it all is implemented and should make things a lot easier. We even have a feature for a situation we ran into, which some may not be aware of, where a wallet got out of sync with the ledger as far as what was revoked and what wasn't, and we were able to put in a correction that allowed us to re-synchronize. Basically, if a publication of a revocation registry failed, we would detect that and correct the revocation. That took a little bit of work and a little bit of understanding, but we got it working, and I believe that would continue to be a feature and a goal of this. Any other questions or comments? Daniel, how did I do? I think that was a really good summary. I don't know if now's a great time to talk about it, but just because it was fresh on my mind: that last point you raised about AnonCreds in W3C format, using the ld_proof VC detail attachment format. I actually happened to do an evaluation recently on what it would look like for AnonCreds to be in W3C format and be exchanged over the existing issue-credential and present-proof protocols, and the ld_proof VC detail attachment format is pretty specific to linked data proofs.
So my conclusion as I was going through that was actually that we would probably need a different attachment format in order to communicate about the AnonCreds-specific credential and all the things that factor into it. So I'd be interested in how you came to your conclusion, because ours is the opposite. That's funny. Okay, I have a document I can probably share with you. And did you look over Andrew's work on that? Yeah; so it wasn't so much that there was any issue with the actual format of the credential itself, or of the presentation itself, but rather how we express, as an issuer, what is to be issued using the ld_proof VC detail attachment format; it was a little bit of a contortion, I suppose. Okay, very interested in that. Yeah. Aside from that, I also wanted to mention that I have created a project board for some of the remaining items discussed in Steven's presentation. I've been going through and populating that board with a bunch of stuff, and we'll continue to do that and add more details to those tasks. So that's out there, and I think that's it from me on the agenda. Oh good. Okay, and I'll add my presentation, "renovation" and all, to the agenda as well; I keep forgetting to do that when I do these presentations. Okay, any other comments from anyone? If anyone is interested, this is a chunk of work that needs to get done, and we could definitely use others joining in and helping out. The reason we put it into a development branch off main was to enable multiple developers to work on it and make pull requests into that new branch in preparation for it going into main. I'm a huge fan of not having development branches and doing development on the main branch, so I do not like seeing that active separate branch; we could really use some help getting that work completed.
If anyone is willing to help out on that, we would very much appreciate it. Okay, whoa, where did I go? That was unexpected. Oh, that's why; I keep forgetting when I leave it in that mode. Okay, let me get back to the agenda. I'll probably talk a bit about this tomorrow on the Aries working group call, I'm not sure: updating the Aries Mediator Service given the open-sourcing of Indicio's SocketDock. My plan is to put some issues into the Aries Mediator Service repo to talk about how SocketDock, which Indicio announced and open-sourced last week, could be used. I think it's a huge improvement in how an Aries mediator can be implemented, so I'd like to see that done. I'm assuming that where the SocketDock picture just says "mediator cluster", that could still be an ACA-Py cluster, and the goal would be to put that in place. We've recently done work to enable cache consistency using Redis in ACA-Py, such that we can have horizontally scalable instances in cases where there are no WebSockets involved; by taking WebSockets out of the picture and putting them into SocketDock, we can have a scalable Aries ACA-Py mediator once we add the connection with the HTTP-based SocketDock. So that is going to be a key goal; those interested in working on it, please raise your hands. As I say, I think we need a lines-and-boxes picture of what it's going to look like, but with SocketDock in place I think it's much, much easier to get the Aries Mediator Service implemented. The other thing we want to do, and I realize there's a little bit of work to go in here: we've had a couple of developers who work for companies that do not allow the use of ngrok on their systems. Oh, Jamie, sorry, go ahead. Oh, it's okay, I just raised my hand now. I happen to be working in this space right now.
Not so much the Redis area, but I'm trying to set up a WebSocket and get rid of the polling, so I would be kind of interested in the work in this spot. Yes, I think SocketDock managing the WebSockets is going to make a huge difference, so if you're working there, let's get you involved. Thanks. Good. ngrok. We use ngrok in a lot of developer demos and developer setups so that a developer can have a local instance of ACA-Py and expose a public endpoint for it, so that other agents, like wallets, can talk to it. One interesting side issue: if anyone knows of people looking for a getting-started project, this would be a good one, which is to use ACA-Py as a mediator client and have the mediator be the endpoint. For example, use the Indicio public mediator, a sandbox mediator that anyone can use: configure your ACA-Py agent to use it, and then it becomes the public endpoint for your ACA-Py agent. By default all your messages would come in through the mediator, and you would not have to use ngrok anymore. I kind of have a goal that we would eliminate the need for ngrok for that purpose, so keep that one in mind. From what I understand, ACA-Py does not have, I believe it's the Pickup v2 protocol, Daniel, correct me on that, implemented as a mediator client, so that would have to be added, as far as I know. That would be correct. So, as a quick follow-up to the question of ngrok and using ACA-Py as a mediator client:
quite a while ago, we threw together a quick project that basically puts another mediator into the mix, which sounds complex but actually ended up being cleaner, at that point in time at least, than turning ACA-Py into a mediation client. That mediator sits alongside your ACA-Py instance, within your firewall, within your network, and it polls for and receives messages from a mediator outside your firewall, a public mediator, and then forwards those messages on to the ACA-Py instance. So it made ACA-Py capable of being a mediation client without needing to implement that directly within ACA-Py, and we've been using it for quite a while; it's been pretty reliable and useful for us, so that's an option. I still think there's value in ACA-Py directly supporting that capability, but there's also the question of whether we want that complexity in ACA-Py. So, yeah, I think there's reasonable debate to be had on the best approaches. Okay, I'm not sure why you would need that, because if you've got an external mediator, all you've got to do, as far as I know, is configure a startup parameter so that ACA-Py uses that external mediator. The issue we ran into at the time was that in order for ACA-Py to hold a WebSocket open to the external mediator, or to poll it for messages, there was going to be an amount of work required in the inbound transport. At the time, we determined it would be quicker for us to have that separate component take care of it for us. Okay. Yeah, the issue is more ngrok, and Tim's raised that there's another, diode.io; the problem with any of these things is that ngrok or its equivalents create a hole in the firewall, and companies don't like it when arbitrary holes can be punched into the firewall of their organization.
So that's why using a mediator, which basically just uses HTTP, eliminates the need for punching a hole in the firewall. I don't know how that affects things. And agreed, bring your own device is what most of us have been doing, but some people on our team are not able to do that, so there you go. All right, let me check here. Where's the last item? Jason, are you here? Excellent: progress on did:peer:2 and did:peer:3 in ACA-Py. Yeah, I don't need to share my screen; I can just give a quick update. So, looking at the did:peer:2 and did:peer:3 specs, I'm starting with just did:peer:2. SICPA Labs has a Python peer DID library that was not around the first time anyone took a run at this, so I've been integrating with that, and it has been a great resource for dealing with some of the number crunching and the DID document construction itself. That's a good thing for the community to be aware of. Similarly, there were some conflicts between what any DID can be and the did:peer spec: the actual regexes defined by the specifications were not compatible. After some discussion and some updates, the root of the issue is that did:peer:2 DIDs can have service entries, which are base64-encoded JSON. You may end up with some padding, which is usually an equals sign, and the equals sign is not a legal character under the DID spec. That was causing some confusion; however, it's been resolved, and the guidance going forward is that when you are base64-encoding something, you need to strip the equals signs off. Another reason to do this, which I think Andrew Whitehead at some point brought up, is that otherwise you could have one DID with an equals sign and one without: those are the same DID, and we shouldn't have two DIDs that look different but resolve to the same thing. So, for those two reasons.
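The padding issue just described can be illustrated in a few lines: encode the service entry with base64url, strip the trailing '=' padding so the resulting DID contains only legal characters, and re-add padding before decoding. This is a standalone illustration, not the SICPA library's API:

```python
# Illustration of the did:peer:2 service-entry padding rule: base64url-encode
# the JSON, strip '=' padding (illegal in a DID), and re-pad when decoding.
import base64
import json

def encode_service(service: dict) -> str:
    """base64url-encode a service entry and strip the '=' padding."""
    raw = json.dumps(service, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_service(encoded: str) -> dict:
    """Restore padding to a multiple of four characters, then decode."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Because stripping the padding is canonical, two parties encoding the same service entry always arrive at the same DID, which is the second reason mentioned above.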
So that's something to be aware of as well. I don't know about other DID specs, but certainly did:peer:2 includes service entries as base64 encoding. Beyond that, I'm looking to build in additional DID resolvers, and really this is my first time cracking ACA-Py open and understanding how connections are truly established. So I've been getting in there and creating did:peer:2s now, but I've still got to figure out how to get them into the payloads and actually make them the primary way that connections are established. There's still progress to be made, but that's an update for the community about some things that have come to light and where we're going with the development. Any questions about that? Okay, that's the update; I'll pass back to Steven. Okay, thanks. Good stuff; that's super important now that 0.4.0 of AFJ is released. 0.4.0 is going into Bifold and therefore into various wallets, and it uses peer DIDs instead of unqualified DIDs for DIDComm messaging, so it's super important to get that moved forward. Good stuff. Okay. So, I had an idea this week that I wanted to share with others and see if anyone was interested in doing a quick-and-dirty project. One of the most painful parts of using ACA-Py is the startup parameters. For good and bad reasons, there are lots of them; being able to understand what parameters are available, when you should use them, and which ones you should use, and making them easy to use, I think would be a nice problem to handle.
Rather than the documentation we have today, which basically spits out all of the options and leaves you to figure out which ones you need, the idea would be to have an actual editor. Not a YAML editor: there are lots of generic YAML editors, but that doesn't help with the domain problem we have of understanding all the parameters, how they relate to one another, and which ones you might or might not want in any one scenario. I actually did a project a while ago that used SurveyJS, so I was thinking that would be a way to do it. For those not familiar with SurveyJS, it is basically a nice JSON-driven UI development tool for creating forms: you have a form like this, and you just add in questions; it's a lot like using Google Forms. When you edit those questions, what you're really doing is creating a JSON definition, basically just building up these elements. So one could see that it would be really easy to take all of the, you know, 108 startup parameters that we have, first generate them into one of these, and then go through one time to adjust them: generate the help text into them, generate the right data types, group them, and provide additional detail. I think we could have a one-time effort, essentially using a Google Forms-style editor, to create something like this: half generated, half manually done, where the manual part is important because that's what provides the domain knowledge. That creates a nice package: the JSON definition you saw drives an active component, so that would be part of the tool, plus a little bit of JavaScript code to initialize or load settings from a YAML file,
put it into the SurveyJS format so that a user could actively edit their settings with the SurveyJS component, and then a process for taking it back out of the SurveyJS format into YAML to save it. I think that would be relatively easy. And then, to maintain the tool over time: basically, a GitHub Action that monitors for changes in the startup parameters. We'd maintain a file within the repo that holds the --help output of the configuration data, and a GitHub Action that generates the latest output, diffs it with the file we have, and creates an issue when the outputs differ. That issue would then trigger somebody to manually go in and adjust the components of the editor. Once the SurveyJS component, or whatever we use, gets created, maintaining it over time would I think be pretty simple; these things do not change significantly, so it would be periodic: as we do a release, adjust and add whatever parameters are necessary. So, an idea there. If it's triggered some interest, I would be very glad to help out: if somebody else was interested enough to do the tooling part, I could definitely help with the true documentation parts, which is actually generating the list and making it a useful SurveyJS component that we maintain over time. Any thoughts or comments? Does anyone know of other tools like this that would be better to use, or have experience using these types of tools? All right then. I think we've only got a few PRs; we really want to get 0.8.2 done. Let me take a quick look at the PRs we have. That wasn't good, and I probably shouldn't. Let me just go there. I would like somebody to review the README updates that I've done; pretty trivial. We've got the dev container PR; we're holding that until after the 0.8.2 release.
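Coming back to the parameter-editor maintenance idea from a moment ago: the heart of that GitHub Action is just a diff of the stored versus freshly captured --help output. A minimal sketch, where the file names and the "open an issue on mismatch" step are illustrative:

```python
# Sketch of the "detect changed startup parameters" check: compare a stored
# copy of the --help output against freshly captured output. A CI job would
# open an issue when the returned diff is non-empty. File names illustrative.
import difflib

def help_diff(stored: str, current: str) -> str:
    """Return a unified diff of stored vs. current --help output,
    or an empty string when nothing has changed."""
    return "".join(
        difflib.unified_diff(
            stored.splitlines(keepends=True),
            current.splitlines(keepends=True),
            fromfile="acapy-help.txt (stored)",
            tofile="acapy-help.txt (current)",
        )
    )
```

The CI job would capture the current output (for example by running the agent with --help), call this, and open an issue containing the diff when it is non-empty.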
This is the big one, and no progress has been made on it. We really need somebody who has a little bit of time to look at it; any comments on it? I left this as a comment in the PR already, but I did take some time to look through it and tried out a couple of things, and I'm still continuing to have the issue with the actions timing out. So I don't really have much in the way of real updates or progress here, but I was able to at least get some time to look at it. Also, the peer DID library that I've been leveraging does not support Python 3.6, so those two are related; I don't expect a conflict, but it is something to know. This is a blocking change as far as I'm concerned, and probably the highest-priority PR we have. Do you want me to give a summary of the testing that's been done so far? Okay, so basically there are the BDD tests, the integration tests, that get run, and there's one in particular that triggers it quite a bit. I've got links to all of the branches that I've worked on as well as the runs. Basically what happens is an agent starts up, a mediator starts up, then the Bob agent starts up, and then it tries to start the Bob mediator. And it just never does; nothing happens, the process ends up timing out, and everything ends up either locking up or failing. I've tried exactly what Daniel did and extended the timeouts; that doesn't make a difference, even extended out to 10 minutes, so it's not a timing issue. What I did do is try different versions of Python: I tested in GitHub Actions and locally using 3.6, 3.7, 3.8, and 3.9. The issue starts happening in GitHub Actions at 3.8, while if I run it locally it works fine all the way up to 3.9, so I can't reproduce it locally. That makes it very difficult to debug.
I created a different branch that does nothing more than start the mediator with the parameters that are used by the test, and it starts up just fine, so I really don't understand why it's not starting up within the tests. And the funny thing is, even using Python 3.9, some of the integration tests work. The one that runs before it, with one agent, one mediator, and a Bob agent communicating, works fine, but as soon as you add that extra container it seems to just not work, and I have no idea why. Any thoughts from anyone on that? We could really use help on this one. Thanks, and thanks, Daniel. This one is held up by the 3.6 one, as the rest will be, and I think we want to hold the rest off until 0.8.2 goes out. The big one we had been waiting on was this one; do you have an update on it? So, the implementation is done. The confusion I had yesterday, I got past that with Andrew, so the implementation is basically done, and the unit tests are what I'm working on right now. It should be pretty soon; maybe before the stand-up I can push the changes. Is it worth pushing this into 0.8.2, is it crucial to 0.8.2, or should we just make it the first PR of the next version? Even assuming it's ready very soon, even today, it has taken several iterations, so it seems safer to put this into the next version. Are there objections from anyone to saying that we're not going to include this one in 0.8.2 and we'll put it in the next version? Comments? The only thing is that there are changes already in Traction that depend on this being merged; I think that's the only dependency I can see. And regarding the iterations, I think this would be the final one, because I don't see any more iterations after this. Okay.
All right, I think that's all we have for today. Any other topics anyone wants to raise? All right, thanks for joining. Have a great Tuesday, or whatever's left of it where you are. Thank you so much. Take care, folks.