Alright, welcome to the February 20th, 2024 Aries Cloud Agent Python user group meeting. Tons of topics on the menu today — mostly status reporting, or a bunch of status reporting. I want to talk a bit about upgrading. Unfortunately, the person that's been looking at it couldn't make today's meeting, but we'll go through what he's put in. So: upgrading ACA-Py deployments for AnonCreds RS, also a checklist on the 1.0, and I also want to talk about mentorships and see if anyone's interested in being a mentor in the Hyperledger mentorship program. Those are the topics we have on the agenda today. Reminder: this is a Linux Foundation meeting, so the Linux Foundation antitrust policy is in effect. This is also a Hyperledger Foundation meeting, so the Hyperledger code of conduct is in effect. As far as announcements, I've got a couple, and then anyone who wants to can raise a hand or just come off mute and say a few words. The Aries annual report PR has been submitted to the Hyperledger Technical Oversight Committee. Comments are still welcome; the PR has not been merged, so if you do want to take a look at it, the link is in the notes, and I can put the link to that in chat. And I should be in edit mode — so I'll switch over to edit mode after doing that. Okay, yeah, the annual report has been submitted; it's in PR form. The report will be discussed this coming Thursday, February 27th, at 7:00 Pacific — afternoon in Central Europe — and here's a link to the meeting notice. Everyone's invited to attend. We did one on AnonCreds and Indy last week; it was very useful. We're doing other ones for all the projects in the first quarter, and they've been quite interesting, so do attend. The Hyperledger mentorship program is coming up. We got really great results last year in the Hyperledger AnonCreds project — we'll talk about that in a bit — and they're now accepting proposals through March 15, so that's coming up.
And then also, IIW is April 16th to 18th. I'm sure a few people from this group are going — I'm going to be going — so I look forward to seeing folks there. Any other announcements, or does anyone want to introduce themselves and talk about what you're doing in the Aries Cloud Agent Python community? All right then. Feel free to add your name to the attendees list. So, status updates: AnonCreds RS and ACA-Py. I know endorser progress has been good in the last little while. Jamie, any updates on that progress — is it finished? I think it's finished for single tenancy. The next thing is multi-tenancy. I know Ian put a ticket up for that; there's different stuff you can do with multi-tenancy in the wallet that needs to be looked at as well. As far as I understood, only authors are supported in multi-tenancy mode, because we haven't been able to do a multi-tenant endorser, as far as I know. Do you have a summary of the type of issues that are different between single and multi? Well, I just looked at Ian's issue yesterday, but for mine, you just have to make sure the right wallet is initiating AnonCreds, and not the main wallet. I don't think it will be too much work. Offhand, do you know — could you give me the issue number? I'll have to look. While Jamie's doing that: we're going to talk about updating — upgrading to support the transformation to AnonCreds RS support. The last step, and really the whole goal here — the whole important reason for updating to AnonCreds RS, in addition to it being the current library that we want to support and use — is ledger-agnostic AnonCreds, which is that we're not tied to Indy; rather, the important thing is we want to be able to use
AnonCreds regardless of where the data gets stored. So we want to be able to support did:web, and we want to be able to support Hedera, which is a community that is interested in using ACA-Py to root AnonCreds verifiable credentials in their network, and so on. So the whole goal of what we've been working on with AnonCreds is to get to ledger-agnostic, and that's enabled, first of all, with AnonCreds RS and then the use of AnonCreds RS. The next thing up is implementing the methods registry and making sure we have the documentation: we have examples of what an AnonCreds method registry is, and then documentation on how to add your own. And that's, I believe, the final step we really have to do — we really need an example implementation and then the documents to cover how to add your own. It looks like there is a possibility that the Hedera folks may be willing to help us out with that, so that would be the goal; that's something I'd like to do. In the meantime, my plan this week is to have some meetings on both the upgrade process and this registry. Okay, Jamie, you got a number? Yeah, so there's issue #2767 — that's for the multi-tenancy endorsement of all the objects that get written to the ledger — and then Ian just opened a ticket yesterday, #2792, saying it doesn't work in a specific scenario. Oh, right. Those are it, I think, for what's left code-wise for this. Okay, thanks, Jamie. Okay, I don't know if anyone is here from the What's Cooking team — Ian's not, who's helping out with that from an oversight and guidance perspective — but things are progressing, and another set of commits were done in there. Nearly there with the demo, so I'm hoping for a demo in the next week, demonstrating the ACA-Py pieces, so that we can tune and
sand off any rough parts of that implementation, and it'll be done. So that's the goal. On the Credo front, they're making good progress there. They've got a demo running, as I mentioned last time, and have continued to make evolutionary progress on finalizing that. did:peer and AFJ interop: not much to mention there. As mentioned last time, AATH is the test implementer for did:peer 1, 2, and 4. We've got functionality in ACA-Py to both receive did:peer 1, 2, and 4 and to emit did:peer 2 and 4. I can't recall right now if 2748 has been released or not yet; I know Daniel was working on it. By the way, I should mention Daniel Bluhm and his wife had their first child recently — I think a day or two before expected, so Daniel was a little caught off guard — and I hear everything went very well, and everyone's well and home, so that was fantastic to hear. He's taken some parental time off, which is awesome. So he will be taking a look at that. Sheldon, any updates on getting Credo working with AATH in this area? Nothing really major to talk about — still working on the dev containers for AFJ so I can expand the AFJ backchannel. Okay. And next — I'm not actually seeing this up here, but Akiff, you're picking up the DID rotation work? Correct, I'm going to be moving through that today. Okay — so you haven't started yet, you're moving to it now? Yep. Okay. Load generator: I mentioned that last time. We've done some additional testing that actually took it back to the controller front and added connections. The bottom line is still: with a controller in front doing pure issuances, we were able to get to about 310 issuances per minute across a sustained period. Once we did a connection and an issuance per loop, that obviously slowed down, but we really didn't investigate why, because it still had plenty of capacity for what we wanted. So things are looking good.
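The load-generator pattern being described — run an operation in a loop across a sustained period and report a per-minute rate — might be sketched roughly like this. The agent operations themselves are placeholders, not real ACA-Py or load-generator calls:

```python
# Illustrative sketch of the load test shape discussed above: time repeated
# operations over a sustained window and report operations per minute.
import time

def measure_rate(operation, duration_s: float) -> float:
    """Run `operation` in a loop for `duration_s` seconds; return ops/minute."""
    count = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        operation()   # e.g. issue a credential, or connect + issue per loop
        count += 1
    elapsed = time.monotonic() - start
    return count * 60.0 / elapsed
```

Comparing `measure_rate(issue_only, ...)` against `measure_rate(connect_and_issue, ...)` would show the kind of slowdown mentioned, since each iteration then also pays the connection cost.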
We are trying to get a pure verification test done, where we just hammer a verifier as much as we can. The idea is that the Akrida test would, on startup, make a connection and get a credential issued, and then, once we get into the loop, we simply do verifications as fast as we can on all the agents. We expect to have work done on that over the next two weeks, just so we can see how fast verifications can be done. Ideally, the first cut of that will be done with a Traction verifier, just because that work is already known to work. The second part of it, if we do decide to move forward, we would do with vc-authn, so that we're basically doing a web request — essentially a curl to vc-authn — to get a connectionless presentation request, and then doing the submission of the presentation via DIDComm to the verifier. So lots of good progress and lots of happiness as far as how the performance is working. To tick this one off: work is complete on the DRPC protocol. For those interested, work is going on in Credo to get that done on the AFJ side and then into Bifold, so that the purpose for doing this, which is attestation, can complete. I left this in — I meant to remove it; this did get completed. He did a bunch of investigation, found where the issue was, and was able to fix it, not only for Issue Credential v1 but across the board for all protocols — basically a slight tweak to where events occurred, so that we don't get into this race condition. So that's good. Okay. Next topic is updating to AnonCreds RS. Ian's put in two PRs: one about upgrading a single wallet, one about upgrading multi-tenancy. In those, he covered a proposed process, and I think we want to look at that process and see if it can be done in different ways.
So I am kind of pushing towards us doing it slightly differently, but I don't know if it's a good idea — that's why I wanted to have this conversation. It's unfortunate that he's not here, but I'd like the community to be aware of it and to weigh in as much as possible to guide us in this. Excuse me. The data to be upgraded is the revocation objects, the schemas and cred defs, and the private keys. In the previous upgrade that was done, from the Indy SDK to Askar, there's way less data here: basically, all we're doing is iterating through a set of records and updating those records to move things around, as opposed to fundamental changes in them. So there's way less data, way less work to be done. Ian's proposal is: we have an upgrade script to migrate from the Indy SDK to Askar; we use that as the model and use the similar idea. Now, the core concept of that is this idea that you bring down the entire deployment of ACA-Py, you run an upgrade in a single process, and then you restart the deployment. That's basically the idea. I'm wondering whether that's the best way to do it, or whether we should look at doing it in a way where we bring up a new controller, migrate all of the objects, and then allow things to carry on forward, so that we don't have that process where we take everything down, upgrade, and move forward. Now wait — this is where you should weigh in, and where anyone who has operated a production deployment needs to have their say: is that the easiest way to do things? Is that a relatively easy way to upgrade? It forces downtime, but from a practical view, is that a good way to do things? I guess it does allow you to back up the database, so that if anything goes wrong, you've got restore capability. Yeah, there's a process that we have when we're doing the Indy-to-Askar migrations.
So it involves a couple of different steps. One of them is scaling the ACA-Py instances down, to ensure that there are no changes to the database — no access to the database — when you're doing the backup and migration. Yeah. And I just remembered why, as I was sitting there saying all those things, thinking, well, that's an obvious way to do it, that's easy. The big problem is multi-tenancy, and that's the biggie in this release. We've got to upgrade the controllers; there's no avoiding that. And if we're a Traction-type deployment, it has to be done in sync, and that is a lot harder. You'd really like to be able to have each tenant upgrade on their own, and in order to do that, you basically have to let the controller do the upgrade, as opposed to the operator of the instance. I don't think we have a choice, as I think about it. So we really may need to investigate a way to have the controller execute the upgrade. Okay — if the controller was to execute the upgrade, that would be pretty complicated. Think about what that would be, because this goes back to our history — you and I, back to the old justice days, where we needed to upgrade all of the different projects that were using a common server, and how impossible it was to get everyone to sync up at the same time. And I think with an instance like Traction, I don't think we have any choice. Yeah, we'll definitely have to have a conversation about it. Okay. Anyone else operating a multi-tenant instance that wants to participate in that and can help us with their experience? Okay. All right. So that's going to be the big question; we'll talk about that. Just down here, a side issue: we have two repositories, aries-acapy-tools and aries-acapy-controllers.
What do people think about the idea of extending the definition of what goes into the plugins repository to include aries-acapy-tools and aries-acapy-controllers? aries-acapy-tools right now has only the Indy-SDK-to-Askar script in it. In fact, it's at the root of the repository, which is a little odd considering it's called "tools" and it's the only tool in there, but just for expediency we let it go with that. As soon as we do this Askar-to-AnonCreds upgrade, if we add it as an upgrade path into acapy-tools, then we'll have a second one. aries-acapy-controllers is example controllers. Would it make sense to move those into the plugins repository — not as Python plugins, but as things you would add to ACA-Py? Are we too far down the path of what plugins is to allow for that? I'm kind of worried about the repo just getting too big to manage; right now it has a specific thing that it's trying to manage. But I'll throw an opposite lens on it: with the controllers, it would actually make it easier to keep them up to date, because right now, because it's separate, I don't think a lot of people are keeping track of it. Sometimes Dependabot updates and things like that become a bit of a challenge. Yeah, realistically the controllers don't work outside of the context of ACA-Py, so it makes sense for them to be somewhere where they're co-located, just from a maintenance perspective, but I totally can see where Jamie's coming from too. I mean, these are not huge. We've got basically two demos: the Alice/Faber demo has three controllers in three different flavors, and then we've got this fourth flavor, which is a Traction controller. So that's what's in those. And, as I mentioned, in tools there is simply the conversion script as it is today — a migration script from the ACA-Py Indy SDK format to Aries Askar.
Depending on what we do with the next step, we could be moving to that. Emiliano? I second Jamie's thought there. I would be concerned about both the repository becoming too big and unfocused to maintain and, also on the unfocused theme: a demo controller versus a plugin that in theory is supposed to be in production seem to be fairly different objectives. So I don't know that it would be consistent enough to have everything in the same place. I hear what Akiff says about maintenance, but if we're having maintenance problems now, I don't think we're going to have an easier time with the code just because it's in the same place. I would be very cautious with that, personally. We'd end up having a mega-repo with sub-context folders, if anything — and we already have that in the plugins, because each plugin has its own context folder and whatnot, so it creates more complexity. Okay. Something to think about — if people could noodle on that. Obviously, the name "plugins" becomes less fitting; you'd want to rename it to, you know, an "acapy-extensions" repository or something like that, or an "acapy-store", where this is where you get all of the things in addition to ACA-Py, so that we limit it down to those. But that's the idea of whether we want to have multiple repos for those. It doesn't really hurt to have multiple repos, and it doesn't change the maintenance of it; it just means we have fewer places to go for those who are interested in these things. Okay, let's put that aside. Some of the things that Ian raises in the upgrade issues: concerns with in-flight credential exchanges. Do we allow those? How do we deal with in-flight credential exchanges when we shut down an instance? Obviously, again, that is more of an issue.
Any production instance would try to pick a time where those are either non-existent or unlikely to happen, but that all depends on the global nature of the instance. Upgrades will not include going from one database type to another — so SQLite to Postgres or something like that. That is a separate process and not anything we would do as part of this effort. And then, as I mentioned, the multi-tenant upgrade is the biggest thing. I'm wondering if we can avoid a database update, because we're not actually changing the database structure in any way. All we're doing is changing data within records within a tenant. We're not even adding records or deleting records; we're simply processing a record, reading it in one format and saving it in another, such that there are different values. The big thing is the controllers have to be updated in sync with the wallet. Ian is proposing that the upgrade actually block the use of the old endpoints. So once we upgrade, the older endpoints would not be usable; they would just return a 404 or something like that — not available. This is required because you get kind of a mess, if you will, in the records: if you can still use the old endpoints, you wind up with old-style records along with the new, and you've got to rerun the upgrade. So basically, you pretty much have to block their use, so that when we do upgrade, we more or less block the execution. Probably that means adding code to the endpoints that looks for a piece of data in the wallet, or something that is checked on startup, and then blocks those endpoints. Ideally, we can dynamically remove those endpoints from the Swagger, so it cuts down on the size of the OpenAPI, and eventually we can entirely drop those endpoints from the OpenAPI.
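The endpoint-blocking idea just described could look, in minimal sketch form, something like this. The route prefixes and the flag mechanism are assumptions for illustration, not ACA-Py's actual admin API:

```python
# Sketch: a flag read once at startup (from the wallet or a startup parameter)
# decides whether the legacy endpoints answer at all after the upgrade.

LEGACY_PREFIXES = ("/schemas", "/credential-definitions", "/revocation")

def make_dispatcher(upgraded_to_anoncreds: bool):
    """Build a request handler with the upgrade flag baked in at startup."""
    def dispatch(path: str) -> int:
        if upgraded_to_anoncreds and path.startswith(LEGACY_PREFIXES):
            return 404  # old-style endpoints are blocked once upgraded
        return 200      # everything else proceeds as normal
    return dispatch
```

Because the flag is captured when the handler is built, nothing has to be re-checked on every request — which matches the "checked on startup" approach being proposed.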
The question would be whether we want to do that as data in the wallet — so that we say, oh, there's some setting, some global setting in the wallet, and we have precedent for that already, that is checked on startup of an instance and results in the blocking of those endpoints — or do we use a startup parameter for the same purpose? Either way, we basically need some sort of global value that says: don't use the old endpoints. Multi-tenancy is the issue I brought up before, which is that we really need to be able to upgrade on a per-tenant basis. If there is only one type of controller operating many tenants, and they're all running the same code, you can kind of get away with it. If you have very few tenants, and they're all running independent code but you can coordinate, maybe you can get away with it. But ultimately, in scenarios like Traction, every tenant is independently running their own controller — they're all projects that are working at it — and doing an all-at-once upgrade borders on impossible; extremely difficult. Even if there were only three tenants, it would be almost impossible. So I do need to get it confirmed with Ian that we're not changing the database structure. I'm pretty sure all we're doing is changing the JSON content, in which case it does seem feasible to me that, instead of bringing down all the instances, we could essentially pause their execution, such that we could run a single task to upgrade all of the records before continuing. So how do we do that with multiple tenants, such that we don't force them all to do it at the same time? I did a little bit of brainstorming — I really didn't think it through enough — so I ask people to think about this: how do we implement some sort of mechanism that holds all processing of a deployment of ACA-Py instances while the upgrade is in progress?
Basically, it comes down to putting some sort of lock on the application, so that the first instance that starts processes all of the records while the upgrade is in process, before moving on to doing any work. A big thing: maybe it doesn't matter if multiple processes upgrade them all, as long as — regardless of how many process them — the result is the same, in which case, on startup, an instance could process all of the records and complete the upgrade. And then eventually we just don't do that once it's done; that might be a way to do it. We definitely don't want a lock in the data in the wallet that has to be checked continuously, just in case an upgrade might have been triggered. So we need it to be, I would think, something that happens on startup only, and then the data says one of: nobody has started an upgrade, so grab a lock and do the upgrade; or the lock is being held, so hold off until it completes; or the process has already completed, so you can just go forward and not worry about it. Something like that has to be done. I was just brainstorming on that. Anyone have any thoughts on precedents where this has been done before? Is there a known pattern for doing this — where you've got multiple instances of a thing like ACA-Py running on a single database, and you want to have some sort of semaphore, some sort of lock, in place? I guess at startup of an instance is fine, so that all of the instances get restarted with the new controllers, and none of them — the controller and/or ACA-Py — can proceed until the upgrade is done. So this is the conversation we'll have to have this week. I welcome anyone who wants to join that; let me know. But certainly, once Ian's back and available, we're going to have to have that conversation and come up with a design that will work. I'm pretty sure it can't be this model, just because of multi-tenancy — it's just not viable.
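The three-state startup protocol being brainstormed here — nobody has started, in progress, done — combined with an idempotent per-record rewrite, could be sketched like this. The shared dict stands in for a global record in the wallet, and the `format` marker field is purely illustrative; a real version would need an atomic check-and-set in the database (for example, a Postgres advisory lock), so this shows only the state logic:

```python
# Sketch: coordination checked once at instance startup, never during
# normal processing, plus an idempotent record upgrade that is safe even
# if more than one instance ends up running it.
import time

NOT_STARTED, IN_PROGRESS, DONE = "not_started", "in_progress", "done"

def upgrade_records(records):
    """Rewrite old-style entries in place; upgraded entries pass through."""
    for rec in records:
        if rec.get("format") != "anoncreds":   # illustrative marker field
            rec["format"] = "anoncreds"        # same row, new JSON content
    return records

def on_startup(shared, records):
    """Return what this instance did when it came up."""
    state = shared.setdefault("upgrade_state", NOT_STARTED)
    if state == NOT_STARTED:
        shared["upgrade_state"] = IN_PROGRESS  # grab the lock
        upgrade_records(records)
        shared["upgrade_state"] = DONE
        return "ran upgrade"
    while shared["upgrade_state"] == IN_PROGRESS:
        time.sleep(0.01)                       # hold off until it completes
    return "upgrade already done"
```

Because `upgrade_records` is idempotent, it also supports the "maybe it doesn't matter if multiple processes upgrade them all" variant: a second pass changes nothing.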
And so, as a result, each controller would upgrade its own records. It wouldn't be done as a stop-everything, but would just be an upgrade process on the records. Finally, what we absolutely need for this upgrade to be successful is documentation for the API endpoints — notably, what does a controller have to do to upgrade from the old set of endpoints to the new set? We're going to need an example implementation of that, carrying out what's necessary, and then documenting exactly what is necessary. My hope is that it's not a lot of work: basically finding calls in an existing implementation and changing those calls to use the new ones. But that is to be determined. No doubt we'll somehow use the demo, certainly, as one way to do it, but we'll look at other approaches to document it and make it easier. This also flows into the 1.0.0 release: is this the bar that we want to set for upgrading — that 1.0.0 means we have ledger-agnostic AnonCreds completed? The indication of going from zero to one means everyone's got to do that upgrade. My guess is that actually is the right place to do it. It's a little bit painful, because it means upgrading to 1.0 has a higher barrier than we've almost ever had. But maybe that's the way we go: we drop Indy AnonCreds — the Indy SDK support — and we require upgrading to AnonCreds RS. I'm hoping people are thinking about this and have ideas and opinions. Well, I think you've been on version zero for long enough; it would make sense to do this as one. Yeah. And even if it might be a higher barrier, it's probably better to have a breaking change of that intensity now, with the release of 1.0, than to have to manage it later on in a minor update. Yeah.
Well, it could be a minor upgrade — it could be — but what I'm saying is: since it's so onerous for the user of the agent, there's some work involved, so having that kind of clear-cut selection of the major release, the major stable release, makes it clear. Yeah, I agree; it's probably beneficial. All right, that transitions us into the 1.0 release — I realized I didn't talk about this early on. The status of ACA-Py 0.12.0rc1: it is done; we did the release this morning. The only thing that's kind of hanging about — and I've got to have a conversation with Wade a little later — is that, rather than having the documentation site, aca-py.org, generated from a separate repository, I'm trying to get it generated directly from the Aries Cloud Agent Python repository, so we could drop the aries-acapy-docs repository. That's my goal. I'm hitting a few hitches in trying to make that happen, so I need to figure that out, but in the near future this will just be a byproduct of publishing a release, and we'll have a way to deal with it after. Right now we've got the ability to see all of these past versions, and that's what I'm trying to implement: being able to see the documents at the point in time of each release, but doing it directly from ACA-Py, versus what we've been doing here, which is doing it from a separate repository. As far as the 1.0 release goes — I don't know how many; I should check that. We've still got this one on DID exchange, making sure that ACA-Py and Credo are compliant, so completing that one. Yeah, I think that's it. Pull requests for 1.0: we still have the two — this is the same one as related, and then the DID rotate — those are the ones we want to get done for 1.0. And this is the question we talked about a bit earlier, which is: do we wait for the AnonCreds RS work? This upgrade discussion is making me think that we've got to do that.
And then finally, the LTS considerations — how to do that. I have not spent enough time thinking about that; I'm not able to lead a discussion on that yet. Yeah, for sure. I mean, we've got upgrade processes so far — and, sorry, I'm answering the chat question — having a way to upgrade a wallet in sync with upgrading a controller, I think, is necessary. We just can't do the Indy-SDK-to-Askar style — stop everything, run the upgrade, and go forward — in any place we're using multi-tenancy seriously. So I think we have to come up with a way to implement it in some generic way. We've got to rethink that. We've done two passes with upgrades: one was, as I say, the Indy SDK to Askar upgrade; the second was the upgrades when you go from release to release. And we've been successful to this point — it's handled all our needs so far — but I think it definitely changes when we get to multi-tenancy, because we cannot expect all tenants to upgrade at the same time. So we really have to make it so that upgrades are done on a per-tenant basis, not on the entire database. The other thing that impacts that is the scheme they're using for multi-tenancy: whether they're using a database per tenant, or sharing a database across all tenants. In the case where you're sharing a database across all tenants, you're going to be forced to upgrade everybody at once — but that is only when we actually change the database schema, and I don't think we've done that. So it depends whether you're changing just data or whether you're changing the schema itself — the Postgres schema — and I don't think we've changed the Postgres schema since the beginning, though I'm not sure of that. Right, but is the ACA-Py instance going to be able to understand —
is the upgraded ACA-Py instance — which has been switched over to use AnonCreds records and structure — going to understand the old records and structure if it comes across them? I would say no; it won't even expose the old APIs. What I'm saying is that it's just data, if you will — it's not the schema of the database, the rows and columns that are defined in Postgres. Those are not changing. Yeah. So that makes it a lot easier for a single tenant to just upgrade its own data. If we get to a point where we actually have to change the Postgres structure — change the rows and columns — that changes it, I agree. But that means the multi-tenant ACA-Py instance has to support both the new AnonCreds APIs and the old APIs at the same time. Oh, I see what you're saying. Yep. Okay. Well, this is the conversation we've got to have; we've got to see if it can be done. Yeah, I get what you're saying. I think we're fine. Sorry, Stephen. Does ACA-Py tell us whether it's busy doing something or sitting idle? Yeah, there's precedent for doing that — for semaphores and whether it's doing something. You mean for locks and things like that? Yeah — just to know whether this is the right time to do an upgrade. Is it in the middle of something, or is it sitting in an idle state with all the transactions completed? Exactly. Yeah, I know what you mean. Well, we need a definition of what can be done — again, Wade, I'll lean on you when it comes to this — but basically the strategy, as far as I know, is: on a Kubernetes-type cluster, you would indicate that you want to transition all of the things, bring them down to zero if you have to, and then start them back up.
So it is possible to define a way such that an instance no longer accepts, for example, any inputs, and completes processing of all the inputs it has in a graceful way, such that you can bring it down and start up new instances that are new-style. The question is: can you do that such that everything stops before you start any of the new ones up? That's kind of the issue that may need addressing. The only challenge with that — what Wade was saying — is if the new release is not backward compatible and some of the transactions are half complete, they will not be able to proceed any further, and nobody will know about it. I mean, that's not guaranteed. In theory, it could be that those transactions can't proceed, or those transactions might be able to proceed if we update the protocol state objects as necessary. So I'm not sure it's as black and white as that. No, true — it depends upon the support for backward compatibility. Yeah. Tough calls, tough decisions here. And the other side of it is: the more backward compatible you are, the more complicated you're making the code. So that's the balance we'll try to get to. Okay. Last topic. So there's definitely more to come on this upgrade, and there'll be conversations, I'm hoping this week, where we can talk about this, because, as I say, I think this is something we need for the 1.0, and therefore a huge priority to get done. I'm really hoping we can get to it. The last topic was the Hyperledger mentorships. The big thing here is a call for mentors. I was a mentor for the first time this last period. Our project was to have a mentee document all of the cryptography in AnonCreds into the specification — a combination of using the source code and academic material, the actual documents that outline CL signatures, the CL signature specifications,
Looking at the code, verifying the specification matches the implementation, and then putting that information into the specification in a way that could be used by anyone. That was the effort, and it really was successful. Eritrea did an awesome job; he is a student in India and did a fantastic job of implementing that. So, highly recommended. But any project we do requires a mentor, so do think about whether you can be a mentor for somebody. Basically, once we got ramped up (we had some ramp-up time), it was about every second week that we got together to talk things over. We had a Discord channel; it was Mike Lodder and myself as the mentors and Eritrea as the mentee, and the Discord channel handled everything in between. Obviously, while at school, which Eritrea was, he had times when he could focus on this and other times when he couldn't, so all of that flowed really well, and we got everything done that we wanted. I've been sending out happy recommendation letters and notes to people about the great work he's done, and that he's someone who's interested in cryptography, does Rust work, has Rust experience, and so on. So it was highly successful for both sides: we got a lot done on the project, and Eritrea got a lot out of it as the mentee. A couple of things I was thinking of for this round: a full pass over all of the documentation and demos, updating everything across ACA-Py; and looking at mDL support and how we would add it. We've had some highly successful work adding things like W3C credentials and SD-JWTs, so we could look at investigating, designing, and adding mDL support into ACA-Py, at least laying out a path for how it might work and perhaps a proof of concept. We have not taken advantage of the fantastic work that Indicio did on SocketDock, and combining it with ACA-Py is something I would love to see done, so we could get
what I would call a hyper-scaling mediator. We've done a pile of tests with Akrida, all of which included mediators, and we were happy with what we got. But it would be really good to have the capabilities that SocketDock provides in enabling a really scalable mediator, where we can scale the number of ACA-Py instances, scale the number of SocketDock instances, and have as much capacity as we need: a hyper-scaling mediator. Tracing support would be another one. I think that's partially implemented, but we could enable it so that it can be turned on and off in certain scenarios, and we could get tracing out of ACA-Py in an easy way when weird things happen and we want to investigate. So those are some of the ideas, and we've got a couple of ideas in AnonCreds. Does anyone have any other things they would love to see implemented but just don't have the time for, and that might be suitable? I do have the advantage that I spent part of yesterday, or the last couple of days, thinking about these things, but that's why I wanted to raise this: even if you don't have an idea now, bring forward any ideas you come up with. Yeah, it's a fantastic program for doing that. Applications are easily written; it's a page or so of work to provide the challenge. There is a review process, and last year there were so many that only some were accepted, so there's an acceptance process. We had something on the order of 35 applicants to our challenge, and we narrowed them down to a few, had interviews, and picked one. So it was an interesting process: we got to come up with a scheme for evaluating and deciding on the top candidates and to do an interview process, but it was all really positive and helpful. So if anyone wants to be a mentor, it's a great idea and really useful for getting stuff done on a project. All right, with that we're out of time.
So I will save that and stop sharing, and if anyone has anything else they want to raise, you've got two seconds to put your hand up. Three seconds. A few more. Awesome. Okay, have a delightful Tuesday, and we'll see you again in a couple of weeks. Take care. Take care.