All right, welcome to the first ACA-Pug meeting (Aries Cloud Agent Python User Group meeting) of 2024, January 8th. Whoops, mistake already, there we go. Topics today: we've got status updates on all sorts of things that are going on. We've got a prep for our next meeting, which is an ACA-Pug roadmap for 2024. Patrick is gonna give us a presentation on some W3C verifiable credential traceability test suite work that he's been doing that is just amazing and cool. Lucas O'Neill is gonna provide a presentation on an endorser service UI, and that should take us to the end, but if there's any other time, we'll go through any PR issues people wanna talk about, or open discussion. Anything anyone wants to add to that list of topics? All right, reminder that we are recording, and that this is a Linux Foundation, Hyperledger Foundation call, so the Linux Foundation antitrust policy is in effect, as is the Hyperledger code of conduct. A couple of announcements to make. One is the annual reports, and we'll hear more when I talk about the roadmap, but Hyperledger now requires annual reports that complement the quarterly reports. So we're in the midst of preparing those for Aries, AnonCreds and Indy for this community. Those are being worked on in January, and everyone will be asked to contribute to them in the community. IIW is coming up in April and plans are starting to be made for that. Any other announcements, or anyone that wants to introduce themselves? Looks like I recognize everyone on the call this time. So any other announcements or anything people want to say? Patrick. Yeah, just wanted to mention, so over the holidays I got a little visit from Brian Richer from Digital Bazaar, and we made a proof of concept for adding an issuer to the VC Playground leveraging ACA-Py. So we got it working in the UI; we managed to get the VC-API exchange and a DIDComm exchange. And next on the list, I'm gonna set up the OpenID for VC exchange.
So we're looking at adding another issuing and verifying platform to the VC Playground, based on ACA-Py. That's something we'll be working on in the next months. That sounds like a presentation for next meeting. Maybe not next meeting, but definitely once we have it fully incorporated; I'd be more than happy to do this. Awesome. Okay. Thank you. Any other comments at this time? That was pretty cool. All right. Status updates. The anoncreds-rs work is proceeding. This is anoncreds-rs in ACA-Py, I should mention that. Ian, you've been off for a while, but you're back on it and meeting with Daniel, I believe today. Yep, this afternoon. Okay. To talk about next steps and plan that out. Next step is to get the endorser integrated in, so we've got that flow and can then fully use it. Then, other than that, it's the question of how we upgrade from using credx to anoncreds-rs. Yeah, and just more testing, lots of that. We don't have either Daniel or Jason on, but the did:peer work: did:peer:2 has been merged, so we can now both receive and emit a did:peer:2. There is a PR in final review for did:peer:4 as well, so both of those are nearing completion. We're hoping we get enough time; Jason's going to have to go over to another project for a bit, but there's a possibility he might be able to get through the DID rotate part. There's a PR for it that's half done, and we're hoping he can complete it; it should be fairly straightforward. Load generator testing is going on at BC Gov, so we've got Aries Akrida running and an instance that we're testing against. We did a first formal run-through this morning and ran into some issues that we're going to investigate.
We're going to be talking with Kim and the team on Thursday to learn more and then keep working on it, but we've got a fairly good setup for running formal tests on that, so we're looking forward to having a report to put out. AnonCreds in W3C format is proceeding along. The anoncreds-rs work is largely complete. We now have the final, or perhaps second-to-final, version of what AnonCreds in W3C format is going to look like, as far as the VC and the VP go. For those who were sort of following along, we did eliminate any custom or extra AnonCreds context, so we're using pure W3C and data integrity proof contexts. So that's good. I believe we've got a switch that we can use to produce either 1.1 or 2.0 W3C VCs. And now we're getting down to defining exactly how to integrate those into ACA-Py and AFJ, which is obviously the goal of the overall exercise. And finally, we've got a new item that is just starting up, which is DIDComm RPC support, driven off of a PR that has been submitted as an Aries RFC. This is basically a request-response protocol that uses JSON-RPC, which is pretty straightforward to use and really good for the purposes we want it for. It allows the sending of JSON messages: a request message from a client to a server that says, hey, execute this JSON-RPC call, and then a response back in the format of a JSON-RPC response. The idea here is that the Aries agent, in our case ACA-Py, would simply be the conveyor of the message. Aries is largely ignorant of what is going on as a result of the calls back and forth; the controller that is connected to a particular ACA-Py or Aries instance would take care of carrying out the work. Any questions or comments on all of those things? Lots going on.
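As a minimal sketch of the message shapes this protocol carries: plain JSON-RPC 2.0 request and response objects. The `ping` method and payload below are made up for illustration, and the DIDComm envelope and RFC specifics are not modeled here.

```python
import json

def jsonrpc_request(method: str, params: dict, request_id: int) -> dict:
    """Build a JSON-RPC 2.0 request, the payload a client-side controller
    would ask its Aries agent to convey to the server side."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}

def jsonrpc_response(result: dict, request_id: int) -> dict:
    """Build the JSON-RPC 2.0 response the server-side controller sends back
    after executing the call; the agent itself never interprets it."""
    return {"jsonrpc": "2.0", "result": result, "id": request_id}

# Hypothetical call, purely for illustration
req = jsonrpc_request("ping", {"payload": "hello"}, 1)
resp = jsonrpc_response({"payload": "hello"}, req["id"])
assert resp["id"] == req["id"]  # responses are correlated to requests by id
wire = json.dumps(req)          # what would ride inside the DIDComm message
```

The point of the design is that the agent only moves `wire` back and forth; only the two controllers know what `ping` means.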
All right, I won't take long on this one, but in preparation for our next meeting, people could start to think about the Aries Cloud Agent Python roadmap for 2024. As mentioned, the annual report includes a roadmap component. The idea is we do quarterly reports for all the projects in Hyperledger; those are becoming fairly routine, we collect input and pass them on. What the Hyperledger TOC has requested is that in the first quarter of each year, an annual report be provided that sort of says, here's what we accomplished against our roadmap from last year, but of course we don't really have a roadmap from last year. So we'll talk about what's been accomplished, but then include a roadmap for 2024 of what we would like to see accomplished. I'm currently working on the report and coordinating with others on that. Items that I see, and I throw this out just for conversation and to get people thinking about it in preparation for next time: the current work listed above, DID rotate, which I did mention, and did:peer. But DID rotate would lead to DIDComm v2, so DIDComm v2 I think belongs here potentially. I don't know enough about the Trust over IP trust spanning layer, how it varies from DIDComm v2, extends it, or adjusts it; I just don't know enough about it. So we do need to take a look at that in the Aries community to see where it fits in and the timing of that. But certainly being able to support DIDComm v2 I think is pretty key. did:webs is a new DID method that was published over the holidays in December; a draft is in review at Trust over IP, and it would be good to add support for that, certainly something we're interested in doing. Obviously we always want improved documentation and tools. A release 1.0 with LTS (long-term support) would be most useful and appropriate. And finally, other possible things: AnonCreds v2, depending on the progress the AnonCreds project itself makes on a v2.
I know other VC types are gonna be looked at, especially mDL, and expanded credential exchange protocols, and we already heard from Patrick on that: VC-API, OpenID for VCs and so on, and generally more OpenID for VC support. So those are the things I came up with in a two-minute think-through and a little mind map. I would ask that people consider these things. I don't know if anyone has any comments right now on what they want to immediately see added, but it's something to think about for the next meeting. Okay, with that, Patrick, are you ready to go? I can stop sharing and turn it over to you. Yeah, sure. All right, let me share my screen. Right, so yeah, I'm gonna be presenting some work I've been undertaking in the last month, month and a half, but this also builds on a lot of exploration of the mapping between the VC-API specification at the W3C and the capabilities that Aries, especially ACA-Py, enables. So what I have now is an API which would be, call it, the Aries traceability controller. It's mostly designed to enhance Traction, so it builds upon Traction tenants, but it's also compatible with ACA-Py with some minimal changes. The goals for the presentation: first of all, to just disclose the work that I've been doing and showcase it, showcase the alignment with Aries and what was not so much possible to do for now with Aries, and maybe get some community feedback on what should be ported over versus what should stay in a controller. And of course, I'll be demonstrating interoperability with the Postman test suite from the traceability group. So, some of the key features of this work. First of all, test suite compliance: it's conformant and interoperable with the W3C Credentials Community Group traceability spec. They separate it into conformance testing, so they have this API spec that you can test conformance against.
And then there's the interoperability side of it, which is: can your platform communicate with other actors in the space? It leverages Aries where possible. I had a lot of thought around what this means: whatever action needs to be taken, whether that's resolving a DID or storing keys or so on, I would leverage the functionality that's available in ACA-Py. I've also made minimal usage of Askar for secure storage of credentials, DID documents, status lists and so on, which I'm gonna keep improving. A big one is the StatusList2021 implementation I wrote, and RevocationList2020, but mostly it's being moved over to StatusList2021. There's also the Bitstring Status List, which is sort of the evolution for the VC Data Model 2.0. So the API takes into account the issuance, management and verification of the credential status. And finally, did:web. The spec relies on did:web, so the API complements ACA-Py's key storage method to assign a did:web to a key, and the API takes care of hosting the required documents so that you can actually use your did:web. So this is just a little list of the problems we encountered. Some of them were minor PRs that were put in, some generally good improvements to various libraries. The biggest pending issue right now is a problem with one of the libraries, called PyLD. It's a JSON-LD parsing library written in Python. There's a problem when you have a specific combination of context and service in the DID documents; this is under investigation at the moment. And the reason why this is a problem is because currently, with the new routes available in ACA-Py to verify a credential, when you provide your verification method and use a did:web, the software that goes and fetches your verification method is PyLD, and it tries to expand and frame the DID document. But because of this bug, it's unable to get access to the key.
So you need to access it in some other way, and I'm reflecting on how this could maybe be worked around. There was a small misalignment in the pydid library with the DID Core spec regarding service type, so there was a small PR to address this, and we had a discussion; I think there's gonna be more improvement to the pydid library to do a bit more in-depth validation of the documents. Issuing single-type VCs: when you have a verifiable credential, you have your type in there. ACA-Py forced the user to provide at least two types, but in the spec, one type is perfectly fine, and most of the time test suites will test with credentials having one type. So we just removed this limitation so you can actually run test suites against it. Registering did:web: it was possible to register a did:web already in ACA-Py, but nothing took care of hosting the DID document, so this API takes care of that. Signing and verifying presentations: this is something I will work on. For those unaware, there are some new routes being added in ACA-Py for signing verifiable credentials and presentations. At the moment there are the verifiable credential routes, but we need to add some work for the presentations, both signing and verifying. For status management, it's expected that the issuer will store all the credentials issued so we can go fetch the information; the API takes care of this with the help of Askar, like I mentioned. Then, managing verifiable credential status through StatusList2021. And the final requirement was an OAuth token requesting endpoint with an expiry date on the token. So again, the API builds upon the multi-tenant bearer token management of ACA-Py and adds a layer on top of it to manage expiration dates. So those were the little problems that were preventing interoperability. Just a matter of interest, Patrick, back to that PyLD one, any response on that one? No, there's a meeting.
There should be a traceability meeting this afternoon, so I'll try to see if someone can address it, but no, it's been radio silence so far. That's crazy. Anyway, okay. Yeah, I'll link the issue in the chat after the presentation so maybe someone can shed some light, but it's a very strange error, something about nullifying context and protected terms and so on. Yeah. So, it's pretty straightforward: it's an API that you can stand up completely independent of an ACA-Py or Traction tenant. You'd have your ACA-Py, or you can even use it with the Traction sandbox. The project is open source, so you can just stand it up, point it towards a Traction instance, and then you can get into the flows with a tenant ID and an API key. Obviously the solution has its own Postgres storage. What gets stored in Postgres is the DID documents and issued credentials; those are the two main things. There's also some tenant information for the tokens and so on, some hashes, mostly for the expiration date, and there's also the status list credentials. For those who don't know how StatusList2021 works, I'll give a short demo later on. So there's the traceability API; it's the component that really manages the verification of the status, did:web registration and management, and OAuth token request and expiration, and then obviously Traction takes care of credential presentation and issuance. It can verify credentials; it cannot currently verify the credential's StatusList2021 status. However, I do believe that part shouldn't stay in my API: not issuing the status list, but verifying it; that's a core component of verifiable credentials. The API is also responsible for the token management, and there's an endpoint to resolve the did:web, which gets leveraged here. All right, so I'll just do a quick demo. I use Postman; if you're not familiar with Postman, I'll do a little explanation. It's pretty fantastic so far.
I've been using it even just to make tests against ACA-Py; it's been working really well. So you configure your environments, you have your environment variables. I have my Traction tenant and the API key that I provided, and then I stood up the solution at its endpoint. I'm just gonna run the conformance test suite; there's gonna be a bunch of green check marks and stuff, and I'll go a bit over the results, and then I can get into a more step-by-step demo of the different features, and I'll explain the errors. So there's very high successful coverage with this; I think there are like two or three tests that failed. Most of the tests in the conformance test suite are about validating the input that the API route receives. For example, for credential issuance, you would receive a credential with an options field saying which signature suite to use, and then they will send you a bunch of negative tests and some positive tests, and you need to respond with specific error codes depending on the test that's being run. So I'll just let this run to demonstrate, and then we can get into the flow from registering a did, issuing a credential, verifying, and explaining a little bit how the status list works. And I can explain the few tests that failed; some of them, I don't necessarily agree with what the test is expecting, but I'll let you judge for yourself. Are there any questions so far? So what you're seeing, just to be clear, is a test suite defined by the W3C traceability group that this is conformant to. The core workers on this are the cohort of DHS Silicon Valley Innovation Program people, so the Transmute and Mavennet and DHS folks. They built this test suite, and Patrick has put together a client, basically based on ACA-Py, that runs the suite.
And as you see, it runs it in all but two cases; I think there are two fails out of, well, clearly more than 500 tests. So that's pretty significant, really good to see. And you saw the work he had to do to get this to work. So anyway, just providing that second repeat of a lot of what Patrick said. Yeah, the spec, this framework, is mostly designed for supply chain, you know, export and importation of goods across boundaries. So, the two tests that failed. The first one is actually fixed; that was in the pydid library. The service type should be an array: according to the DID Core spec, service type can be either a string or an array, and pydid was expecting a string, just because of the work that had been done. So we got a pull request in, and this will be able to be changed as soon as the new release of pydid makes it into ACA-Py. And I think the other test is a little strange to me. They wanna be able to provide a proof creation date, so it expects me to have a proof dated 2006, which is a bit odd to me. I think the creation date of the proof should be when the signature was generated and applied. But yeah, I opened an issue on the traceability repo; I'm curious to get their opinion on this. So that's the conformance test suite. The other ones are more based on different workflows and interoperability. I just wanted to show this little custom collection I made to test every feature independently. And why is it not opening? Close Postman and restart it; not sure what happened there. All right, so you define, you know, you have your environment, you define your different variables that you wanna use, and then you can access these variables in what they call collections.
Each of these collections is like a call: you put in your endpoint, and depending, you can have POST requests, GETs, all the different requests, and then you can either fill in your body like a form or just put some raw JSON if you want to. So you would, for example, request a bearer token, and then you can save the output of the request and use it in subsequent calls. Postman is being slow right now, so hopefully... there we go. So we have our response, and this obviously is the token that Traction would generate from the multi-tenancy aspect of it. So, a few endpoints. This is the endpoint that would respond to a did:web resolution request, and this is what the document looks like for now; there should be the traceability context, but because of the bug I've omitted it. Obviously we've got our verification method, which uses Ed25519VerificationKey2018. Each tenant gets its own service endpoint per the spec, and then I included three types of credentials you can issue. So, just a simple credential: you have here your credential and then your options. It's very, very similar to the new endpoints in ACA-Py; there are some slight differences that are important, but it's very similar. So you can issue this. The API just proxies: it receives the request the way the spec expects it, sends the payload to ACA-Py, ACA-Py signs it and returns it, and then you have your proof added here. The API also sets a credential ID; this is crucial for the status management. And then you can verify it here, so we have the verification status, and then we can have a look at what a status list credential would look like. So when you issue a credential that you wanna have a status on, you put in your options the... I think I should have added a purpose in there, but that's okay.
So you would have the type that you put in, and you would also put the purpose if you want, for example, a revocation or a suspension, and how this looks when it's manifested is like so. We have here the credential status, and the actual status list credential that's used is here. If we click on this link, this is a hosted credential, and the goal of this credential is to hold this encoded list. The list is itself a VC, and it's actually a bit string of binary values; every credential, as we can see here, has an index in that list. There's an algorithm that's well explained in the spec: the list is GZIP-compressed and then base64-encoded, so you would decode this list, decompress it, access the bit of the credential, and check whether it's a zero or a one; if it's a one, it means the credential is revoked. Every time the issuer changes the status of a credential, they modify this list and re-issue this status list credential, hosted at the same endpoint. So now if we verify this credential here, we have it verified, and then if we wanna change the status: I have the credential ID from the previous credential, and I wanna flip its status to a one. It's the demo effect, it's taking a bit of time, usually it's faster. So: status updated, and when I verify it again, it says it's revoked. So the status list credential we just saw has been re-issued with a new list, and every credential has an index on that list. Of course, in the case of revocation you wouldn't necessarily wanna flip it back, but if it was a suspension, for example, you can flip the bit back to zero and then the credential can be verified again. So these lists are managed by the issuer.
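The encode/decode round trip just described can be sketched in a few lines. This is a generic illustration of the StatusList2021 `encodedList` mechanics (GZIP then base64url), not Patrick's implementation; the MSB-first bit ordering within each byte is an assumption based on the common reading of the spec ("left-most bit is index 0").

```python
import base64
import gzip

MIN_BITSTRING_BYTES = 16 * 1024  # spec minimum: 16KB, i.e. 131,072 status slots

def encode_list(bitstring: bytes) -> str:
    """Issuer side: GZIP-compress, then base64url-encode (padding stripped)."""
    return base64.urlsafe_b64encode(gzip.compress(bitstring)).rstrip(b"=").decode()

def decode_list(encoded: str) -> bytes:
    """Verifier side: restore padding, base64url-decode, then decompress."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return gzip.decompress(base64.urlsafe_b64decode(padded))

def get_bit(bits: bytes, index: int) -> bool:
    # MSB-first within each byte: the left-most bit of the list is index 0
    return bool(bits[index // 8] & (0x80 >> (index % 8)))

def set_bit(bits: bytes, index: int, value: bool) -> bytes:
    """What the issuer does before re-issuing the status list credential."""
    out = bytearray(bits)
    mask = 0x80 >> (index % 8)
    out[index // 8] = (out[index // 8] | mask) if value else (out[index // 8] & ~mask)
    return bytes(out)

# All-zero list: 16KB compresses down to a tiny string, as seen in the demo
encoded = encode_list(bytes(MIN_BITSTRING_BYTES))
bits = decode_list(encoded)
assert len(bits) == MIN_BITSTRING_BYTES and not get_bit(bits, 12345)

# "Revoke" index 12345, re-encode, and a verifier now sees the flipped bit
bits = set_bit(bits, 12345, True)
assert get_bit(decode_list(encode_list(bits)), 12345)
```

This also shows the compression point made later in the discussion: an all-zero 16KB bitstring shrinks to a few dozen characters once compressed and encoded.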
The issuer can publish as many lists as they want, and when they issue credentials, those credentials get a unique index on a list, and the verifier is able to go fetch the list and verify the status based on that. The last thing... I'm not gonna demonstrate the revocation list, as it's exactly the same principle. Yep. One quick thing, okay, two quick things. One is that the status list credential is actually showing the magic of compression when the data is all the same, all zeros: that encoded list string there is 16K of data. It's a minimum of 16K of data; it can be more. Exactly, so if you decompress it, you'd get 16K of data. But the other thing that I wanna talk about is that we could absolutely combine the zero-knowledge proof based mechanisms we're using, where the holder proves status in zero knowledge without giving the unique identifier for the credential, and support StatusList2021 in the same way, by providing an endpoint that serves a credential with this encoded list in it. So an issuer could actually support both with the same data set. It's something we should probably be thinking about, and I'm thinking about for AnonCreds v2; in particular, the way it's got ALLOSAUR, it absolutely could be supported this way. And then the holder would be able to prove in zero knowledge without sharing a unique identifier, or could share the unique identifier and just let the verifier have at it: take the unique identifier and look up the status list. Anyway, just throwing that out there. Yeah, I think that makes sense. I need to reflect on it too, but yeah, I think it makes sense. But like I was saying earlier, I think definitely the verification of the status list is something that ACA-Py should be able to do for the verifiable credential endpoints. So, this implementation I've made: I followed the spec, I didn't cross-validate against other implementations.
So if anyone here has a StatusList2021 implementation, especially for revocation, and could cross-validate the work, that would be very appreciated; otherwise I'll reach out to some other people in the community. So yeah, finally, just signing a presentation. This is using the old JSON-LD sign endpoints; at any moment now it should produce a presentation with all three credentials. Of course, it didn't work. So let me try... I think I did the wrong one. Yeah, so anyway, there's something weird going on with Postman; it's being a bit slow and I don't know if the variables are saving properly. But that's it for the demo, and I'll just quickly finish my presentation. For the next steps I have in mind: in the very short term, I would like to get more familiar with the pydid library and actually use it not only to validate documents, but to create the DID document, and see a bit what's possible to do with this library. We had a bit of a discussion about updating to Pydantic 2.0, so maybe that's something I'll have a look at. Pydantic is a Python library to validate data: you set up your data classes and you can do validation against them, and version 2.0 brings a lot of interesting features, but there are also a lot of breaking changes. So that's a maybe. Add presentation routes in ACA-Py: this is something I would like to have a look at. Like I said, there's the verifiable credential issuance and verification; I would like to add verifiable presentation issuance, or signing, and a verify route. An interesting question about verify, and maybe I'll get people's opinions afterwards, is: when you verify a presentation, should you also by default verify every credential that it holds, or should you only verify the proof of the presentation? That's an interesting question to ask. StatusList2021 verification in ACA-Py. And review the process to retrieve the did:web verification keys; that's the PyLD problem from earlier.
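A sketch of what a framing-free key retrieval could look like: transform the did:web into its document URL per the did:web method spec, then, after fetching it, pick the verification method out of the plain JSON by id. The DID, document, and key below are made up, and no network call is shown.

```python
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """did:web resolution: colons become path separators; no path means the
    document lives at /.well-known/did.json (ports arrive percent-encoded)."""
    assert did.startswith("did:web:")
    parts = [unquote(p) for p in did[len("did:web:"):].split(":")]
    if len(parts) == 1:
        return f"https://{parts[0]}/.well-known/did.json"
    return f"https://{parts[0]}/" + "/".join(parts[1:]) + "/did.json"

def find_verification_method(doc: dict, vm_id: str) -> dict:
    """Scan the resolved DID document directly, no JSON-LD expand/frame."""
    for vm in doc.get("verificationMethod", []):
        # accept absolute ids and relative '#fragment' ids
        if vm.get("id") == vm_id or doc.get("id", "") + vm.get("id", "") == vm_id:
            return vm
    raise KeyError(f"verification method {vm_id} not found")

url = did_web_to_url("did:web:example.com:issuers:tenant-123")
# -> https://example.com/issuers/tenant-123/did.json

doc = {  # stand-in for the document fetched from that URL
    "id": "did:web:example.com:issuers:tenant-123",
    "verificationMethod": [{
        "id": "did:web:example.com:issuers:tenant-123#key-01",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:web:example.com:issuers:tenant-123",
    }],
}
vm = find_verification_method(doc, "did:web:example.com:issuers:tenant-123#key-01")
assert vm["type"] == "Ed25519VerificationKey2018"
```

The trade-off versus framing is that this treats the document as plain JSON, so it sidesteps the PyLD bug but skips JSON-LD context processing entirely.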
Currently ACA-Py frames the document, which is one way to do it, so I just wanna think about what would be the best solution, short-term and long-term, for addressing this issue. I wanna get to the bottom of the PyLD problem; if there's no solution, maybe it would be worth entertaining a conversation about getting did:web verification keys in another way, such as just resolving the DID document and going to fetch the key by its ID. And another idea that I would like to bring up is to add an optional time-to-live when you request a token in Traction. At the moment, it just generates you the token and embeds the time at which the token was issued, but it would maybe be interesting to add the option for the user to generate a token that's only valid for 15 minutes, for example, depending on the use case. That's it, thank you very much. If there are any questions, I'd be more than happy to discuss them. Awesome, thank you. We should probably move on because we've got one more presentation to come. Lucas, is the time remaining gonna be enough? I can do it in a pretty short amount of time if we want. Okay, okay. All right, thank you, Patrick. Lucas, you're up next. I'll leave it open for you to share and talk about an endorser service UI. Yeah, sure. Am I taking us to the end, or was there someone after me? No, you got the full time. Okay, screen one, share. It's the architecture diagram, or little sketch. Okay, yeah, not a formal architecture diagram, more of a sketch. So, the endorser service UI: I wanna stress that it's the endorser service UI. It's an interface built on top of the endorser service, not on top of a Traction endorser or an ACA-Py endorser agent. I won't take time to go through what the endorser service is, because that's already a Hyperledger repo; hopefully we have some base knowledge of what that is, but we can discuss it if anyone has questions.
So the endorser service UI, technology-wise, is a Vue single-page app front end and a Node backend, and it integrates with an OIDC provider. Now, for our BC Gov DITP endorser service setup in OpenShift, we have multiple endorser agent plus service pairs, or whatever we wanna call them, in a single namespace. The intent for where we'd be running this endorser service UI, at least the backend part (which serves the front end as well), is inside that same namespace, because the backend is what's going to have access to the secrets that can connect to the endorser service. The endorser service provides a way to get a token: you provide the username and password, so the backend can do that exchange to get the token and proxy calls to the endorser service. So what's gonna happen is the user is gonna log in with the front end, and that's gonna communicate everything through the backend. On the backend, you can set up multiple of these endorser service connections, or configurations rather. Maybe if you're only running one endorser it doesn't matter and you just have one username and password, but for the DITP usage, we have dev, test and prod environments that have the Sovrin, CANdy and Sovrin Test pairs, so we want the UI to potentially be able to offer you the option of which one you wanna connect to. This is just using node-config on the backend Node app side, so you'd inject this with environment variables however you need, or however you wanna do it in your deployment pattern, just to get it into the container that's running the Node backend. And of course, since it's running in the same OpenShift namespace and we can use Helm to install, it would just have access to the secrets that the service is using to secure itself. But if you ran it in some other cloud, you'd have to manually define the secrets.
I don't know if you'd wanna share the secrets across two places; we wouldn't, I don't think, in the DITP use case. Okay, so anyways, real quick. You do a typical OIDC redirect flow, using the BC Gov SSO Keycloak service for that, and the user gets a token, and that token is just your SSO token; that's not your endorser service token, I wanna make that distinction. So this is how you interact with the front end and backend pair, and the backend verifies your SSO token to see if you're allowed to connect to this endorser. So I'll show it in use. This will look a lot like the Traction tenant UI; if you've seen that, it is literally all the same framework, essentially the same UI library. So you go to the endorser service UI; I'm running this locally, so I've defined all those connections and secrets on my own. Select which endorser you wanna connect to; you probably wouldn't have this if you were just using one endorser service. And also, in DITP usage, we'd run three of these, one in each environment, so you wouldn't be mixing dev and test like I'm doing here, but this is just for my local testing. So I can log into BCovrin Dev. I log in with my IDIR, it redirects me through my OIDC provider (a generic OIDC client, so you can set it up however it's needed), redirects me back, gets a token, the other token, the endorser service token, and now I'm in. So I'm connected to the BCovrin Dev endorser service. Options here: I can check connections. Here are my connections in the endorser service, the active ones. If there's a connection request, this provides an easy interface to accept or deny the connection, using the endorser service calls that accept or reject connections. You can go check more details about the connection here: row expander, mostly the same stuff as the tenant UI if you've used any of that interface, refresh table, et cetera.
Oh, on existing connections, the important part, at least for what I use it for: if you have a connection you want to allow to auto-endorse transactions, you can go in here and change the endorse status to auto endorse, auto reject, or manual endorse. You can deactivate the connection if you need to. I'm not going to action any of these. You can edit a connection and set the alias; I'm not actually showing the alias in the table, but maybe that would be a useful thing. Same with listing transactions, so you can go see details about a transaction in the interface. There's an edit-transaction call in the service, but that's just a blank body, so I don't think we'll actually leave that in. And notice that in the service, at least in the ones we set up, none of the transactions' created-at timestamps are filled in; they're all null, which makes it trickier to track these transactions. So I'll look into that in the endorser service, maybe raise an issue. It happens in Swagger too; it's not just happening in the front end. So the next part I should show: I've logged into BCovrin dev here, right, so I'm allowed to do that. Let's say I want to log into CANdy dev. So I'm like an operational user supporting BC Gov's endorser service setups, and you can say Lucas is allowed to log in and manage BCovrin dev, but he can't do CANdy dev. So if I try to log in with that, I don't have access to the endorser service. I define that simply; like I said, it's a generic OIDC adapter, but in our Keycloak I've set up roles that say Lucas is allowed to be an endorser of BCovrin dev. I'll add them to CANdy dev. And I got signed out of that. I'll add them to CANdy dev; okay, that's the roles added. And now I should be able to go log in with my IDIR; it was already SSO'd there. And now I'm looking at our CANdy dev. Oh, there's the demo thing; that worked right before I did this, there you go, I don't know why. So here are our CANdy dev connections.
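The three per-connection endorse statuses just mentioned imply a simple decision rule for an incoming transaction request. This is only a sketch of that rule as described in the walkthrough, not the endorser service's actual implementation; the status and outcome names are assumptions.

```typescript
// Per-connection endorse setting, as shown in the UI walkthrough.
type EndorseStatus = "AutoEndorse" | "AutoReject" | "ManualEndorse";

// Outcome for an incoming transaction on a connection with that setting:
// auto-endorse signs immediately, auto-reject refuses immediately, and
// manual leaves it pending for an operator to accept or reject in the UI.
function resolveIncomingTransaction(status: EndorseStatus): "endorsed" | "rejected" | "pending" {
  switch (status) {
    case "AutoEndorse":
      return "endorsed";
    case "AutoReject":
      return "rejected";
    case "ManualEndorse":
      return "pending";
  }
}
```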
So it's easy to say whether this operational user who's maintaining things is or isn't allowed to manage that particular endorser service, just by adding the role. The next part... oh, you can also check server configuration; there's just an endpoint in Swagger, in the endorser service, that returns that. But the next part, and I don't know if this audience has seen much of it, is the new allowance feature that was added to the endorser service; that's something that was added by Gavin. We've only deployed it to our BCovrin dev endorser service at this point. And this is something that allows you to specify DIDs, schemas, and cred defs that are automatically endorsed. So here's an interface to manage that, which I think is probably the most useful part; all of the connection auto-endorse setting is very useful for what we're doing. So you have a list of DIDs. You can delete a DID allowance if you need to remove something really quickly, operationally, maybe something that's become insecure. You can add a new DID here; I won't do that, but there you go, you'd just add the DID in there. Schemas: here are the details you set for allowed schemas. Credential definitions have a bunch more fields. And then there's the config file upload, which again, I'm not sure is new for this audience or not. This part is a proof of concept; I haven't even wired up any of this screen, but this is where you have CSV files, I believe it is, of lists of DIDs, schemas, and cred defs that you want to allow. And you can either append to the existing configuration or overwrite all of it. That's the one where we want to flash you a million warnings before you do it. Yeah, that's a really quick walkthrough of the UI. Yeah, I see Patrick has his hand up. Yeah, just a quick question regarding the allowance of schemas and credential definitions. So you can actually say you're only allowed to request these schemas on this endorser service, is that what I'm understanding?
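The allowance lists and the append-versus-overwrite upload semantics described above can be sketched as two small pure functions. The record shapes and field names here are assumptions for illustration, not the endorser service's actual data model.

```typescript
// Sketch of the allowance feature: lists of identifiers whose
// transactions are auto-endorsed. Shapes are illustrative.
interface Allowances {
  dids: string[];
  schemas: string[];  // schema ids
  credDefs: string[]; // credential definition ids
}

interface EndorseRequest {
  did: string;
  schemaId?: string;
  credDefId?: string;
}

// A transaction is auto-endorsed if its DID, schema, or cred def
// appears on an allow list; otherwise it falls back to the normal
// (manual or connection-level) endorsement path.
function isAutoEndorsed(req: EndorseRequest, allow: Allowances): boolean {
  if (allow.dids.includes(req.did)) return true;
  if (req.schemaId && allow.schemas.includes(req.schemaId)) return true;
  if (req.credDefId && allow.credDefs.includes(req.credDefId)) return true;
  return false;
}

// Config-file upload semantics: "append" merges the uploaded entries
// into the existing list (de-duplicated); "overwrite" replaces it,
// which is why the UI would warn loudly before doing so.
function applyUpload(existing: string[], uploaded: string[], mode: "append" | "overwrite"): string[] {
  return mode === "overwrite" ? [...uploaded] : [...new Set([...existing, ...uploaded])];
}
```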
The allowance, and I'm actually not as familiar with it as maybe some other people on this call, so speak up if I'm wrong: the allowance for a schema means you specify the schema, and then anyone posting a transaction based on that schema is automatically endorsed by this endorser. So someone doesn't have to go and manually endorse the transaction. Okay, that makes sense. Which you normally would on this screen, finding it and confirming, are you sure you want to accept or reject the transaction. Awesome, cool. It's a granularity level below setting the connection to auto endorse, or the whole endorser itself to auto endorse, because you can do that too; like, we have one set up, the one for the sandbox environment you were discussing, that just endorses everything. Yeah, I'm familiar with that: you have the startup parameter, but also when you create the connection you can set per request whether it's auto or not. So this is sort of what it's doing. Yeah, and this allows us to, I believe the goal is to pre-seed it with these config files, so that for a business area we can say anything they're going to do is auto endorsed, and someone doesn't have to operationally come in and manage the connection when they make it, or when multiple business areas use the same credential, maybe. But I won't speak too much on that, because I'm not actually familiar with our plans for that part. Something else that came up: at one point, when you had your architecture diagram up, you mentioned something about the secrets and the JWT token, like the service secret. Can you elaborate a bit on what benefits you would have had if you had access to those secrets? Yeah, so these secrets are the ones that say I'm allowed to get a token from the endorser service, right? So I could go into the Swagger and get a token and start doing my work, or bad stuff, if I got that secret.
So in order for the front end app to make those calls, the backend side is going to get that same token using that secret, and then use the token to make the calls to fetch connections and so on. The different JWT, the one on the OIDC side from when I log in, is the one that has my role. The backend checks that JWT to say, yes, you're allowed to be doing this call for CANdy dev, and I will access the secrets on your behalf, but obviously not return that secret client side. Yeah, interesting. Because while doing the OIDC part, obviously I started with setting up my own ACA-Py agent, multi-tenant, and it's something you'd want your application to be able to do, decode the token to verify those access rights. And then when I switched to Traction, I had to figure out, okay, I don't have access to that information, so how can I follow the token and see who's accessing the platform? So yeah, Keycloak is definitely a powerful tool for that. I didn't get to deploy a Keycloak for my use case, but... Yeah, Emiliano and I have discussed this same sort of OIDC-type layer for Traction. Right now you showed using API keys, right? And those are not user-based; you don't have any identity behind them. So could we have... this is sort of a start of that layer that we could include. Yeah, because I was interested in how in your sub-wallet you can register DIDs, and one feature I wanted was an API key bound to a specific DID, you know? Like an API key that could get a token to sign credentials with a specific DID. But anyway, yeah, that's all. So for the tokens, there could be some reflection on how they could be improved. Yeah, so I'll just close quickly: none of this is in the Hyperledger repo for the endorser service or anything.
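The role check described here, where the backend inspects the decoded Keycloak SSO JWT before proxying to a given endorser instance, might look like the sketch below. The claim shape and the role naming convention (`endorser_<id>`) are assumptions; a real setup would use whatever role names were configured in the Keycloak realm.

```typescript
// Sketch of the backend authorization check on the decoded SSO JWT.
// The claim shape and role naming convention are assumptions.
interface DecodedSsoToken {
  preferred_username: string;
  roles: string[]; // realm/client roles extracted from the verified JWT
}

// Only users holding a role matching the requested endorser instance
// may have calls proxied to it; the endorser secrets stay server-side.
function canManageEndorser(token: DecodedSsoToken, endorserId: string): boolean {
  return token.roles.includes(`endorser_${endorserId}`);
}
```

This mirrors the demo: adding the `endorser_candy-dev` role in Keycloak (a hypothetical name here) is all it takes to grant an operational user access to that instance.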
That's just a personal repo I've got going. It needs some more cleanup and improvement, and we can discuss sharing it after that. But I see we're pretty much at time, so if there's any other quick question or follow-up, I'll leave it to the group. That was awesome. Looks good. Thanks. Nice work. Any other questions? All right, well, that wraps it up. Again, a reminder that at the next meeting we'll talk about the ACA-Py roadmap, and we'd definitely welcome people's and organizations' input on what they want to see happening in ACA-Py in the next while. So that's two weeks from now. Any other topics? And if other people have presentations or things they'd like to show off, please let me know; I'm happy to put you on the schedule for that as well. With that, we're wrapped up. Take care. Have a great day. Fantastic. Thanks, Patrick and Lucas. Awesome. Thanks everyone. Thank you. Bye.