All right, welcome to the January 23rd, 2024 Aries Cloud Agent Python user group community meeting. Lots to talk about on the agenda: a number of status updates — there's a whole lot of stuff going on in the community — talk about the next release and make plans for that, and talk about a roadmap for 2024. In doing the next release, we'll probably cover the PRs and issues. As we get started, a reminder that we're recording the call, and I'll be posting the recording after this. A reminder as well: since this is a Linux Foundation / Hyperledger Foundation meeting, the antitrust policy of the Linux Foundation is in effect, as is the Hyperledger Code of Conduct. Any announcements, anyone new to the call who wants to introduce themselves, or any requests for topics on the agenda? The mic is open for anyone to step up. All right. Hyperledger annual reports are becoming available — I should put the links in. We have draft annual reports for Aries, AnonCreds, and Indy available, and we welcome input and contributions to those. And IIW is coming up in April — I believe it's the 16th to the 18th — so if you are planning on going, it's time to make those plans. All right, let's go down the various things that are in progress and see what we can do about status updates and who's available to give them. Ian, you're here on AnonCreds RS in ACA-Py — do you want to talk about that briefly? Yeah — we're working on the updates to add the endorser functionality, which is one of the gaps, and there's a group that's starting work on the AnonCreds W3C format. So that's all underway. Good. So do we still have that anoncreds branch, and should we delete it? The branch is still there. I think we can delete it at some point; it's nice to have, just in case there was something that I missed when we pulled the code over.
So Jamie Hale has been working on the endorsement, and that will be available soon. He's got through the schema — so he's got one of the object types done and is working through the next couple. So that's good. Let's see — did:peer and AFJ interop, where are we? I noticed Daniel Bloom isn't on the call, unfortunately, but I can give a bit of an update on that. did:peer:2 and did:peer:4 are in place, and there are ways to emit them. In other words, when an ACA-Py agent is initiating a connection — or, I think, responding to one as well — it can send a did:peer:2 and/or did:peer:4, depending on how you've configured your ACA-Py instance. The thing that's missing — and we'll talk about it further down under handling DIDs — is that did:peer:1 is supported, but we have too tight a constraint on the types of DIDs that are allowed. So we've got a bug in there that we need to fix, actually in a couple of places. Daniel was going to take a look — he's got an environment for testing AFJ interop — and we'll see if we can get him to update that. But I know one of the issues is this too-constrained DID validation: we're doing DID validation in places where we're not allowing did:peer DIDs, for example, and we've got to eliminate that really quickly. So that's a big deal. Next up, load generator testing. We continue to do that at BC Gov using Aries Akrida, and it's looking good. In an OpenShift environment we were able to achieve about 200 credentials issued per minute — a thousand over a five-minute span. It looks like when we go above that, we hit a few database issues. We will eventually look at those, but the thousand in five minutes, or 200 a minute, is sufficient for what we've got to do. We're trying to formalize a report out of that.
We're now going to adapt the Aries Akrida script to enable an end-to-end test: instead of hitting an ACA-Py instance, or the Traction instances we're hitting right now, we would actually hit a controller that is in turn hitting an ACA-Py instance. So stepping one step outside of that, to test how fast we can issue credentials when we're actually going end to end. I would point out that the load generator test we are doing — the thousand in five minutes — includes a connection for every issuance. So there is a connection establishment happening plus the issuance; the credential we're issuing is revocable, although we're not doing revocation as part of the test; and we are using a mediator in all of the interactions that go from the Traction instance to the load-testing agents. So we're getting a pretty good test involved there, and it shows things running pretty nicely. Any questions or comments on that? I've got a question, Stephen. About the mediator — did you test a clustered mediator, or just one instance? We're using the dev mediator we've got at BC Gov running on OpenShift, and I believe it's a multi-mediator setup. Is that what you're wondering? Yes, thank you. And we did try it with and without the mediator, and there seemed to be no difference. Again, we're still doing relatively informal tests, just sort of running things; I would like to get to the point where we really formalize it. Oh — Wade tells me it is a single instance, so we're not doing a multi-instance mediator in that case. Okay, all right, thank you. Okay. Patrick, do you want to give an update on the work you've been doing and the PR you've got in? Yeah, sure, I can give a quick update. What you're referring to is the VC-API endpoints, I believe. Yeah. So I'm still working on this.
I'm looking at the difference between the current models for the VC manager and the sort of requests that the VC-API endpoints want to accept, and trying to figure out how to balance it all out. These would basically extract the functionality relating to JSON-LD credentials from the DIDComm Issue Credential v2 and Present Proof v2 protocols into independent endpoints that carry the action. So you would have endpoints to sign, verify, prove, and store a credential. That can enable controller developers to leverage these functionalities in ways other than through a DIDComm exchange. Right now I'm looking at data validation and returning properly formatted error codes when a wrong payload is sent, because at the moment it just raises a validation error, and I'd like to give more descriptive returns. I had a meeting yesterday with the EM, and we were discussing the integration tests. Most of the integration tests currently are there to test the DIDComm protocols, and I think it would be interesting — I don't think I'll have time to put it in this PR, but eventually — to do integration tests for these endpoints: take the output of the issue-credential endpoint, paste it into the verify-credential endpoint, and test the different proof suites that are available in ACA-Py. I believe there are two currently possible: Ed25519 and the BBS+ signature. So yeah, that's pretty much it. Is there anything else I should have covered? I don't think so. One of the things I wanted you to do was both cover it and then see if we can push on getting a review and an approval if we're ready to go. It looks like you're happy with it. Yeah, there's a little bit of work I still wanted to put in; by the end of the week I'd like to have this merged, definitely. I just have one more push to do. Okay.
But most of the endpoints are there — nothing's going to change with the endpoints. I just want to have a look at the models, mostly for the OpenAPI interface and how the validation happens. Okay, excellent. Yeah, I'd really like to see this in there and completed as well. Ian and Daniel Bloom are the primary reviewers on this, I think — Daniel's had the most work in there, and he's made some really good comments — but reviews from others would be good too. Thank you, Patrick. And Akif, I wanted to give you a bit of time on DRPC support. DRPC is DIDComm RPC. BC Gov has a particular need for this, notably because we're adding app attestation to the BC Wallet. Currently, app attestation is implemented by hijacking the basic message protocol and sending the data back and forth, but it would be much better if there was a specific protocol to enable it — or at least not basic message, because that makes a mess of the user interface in the wallet. So that's the idea: we're adding DRPC support. It's not specifically for app attestation, but is a flexible interface that can be used for anything where ACA-Py is basically just the messenger, passing things back and forth between the controllers of the two agents. So with that, Akif, do you want to share? Yes, please. Thank you. All right, here we go. As Stephen mentioned, this is the protocol that I'm implementing right now. Basically, it's an agent message that wraps a JSON-RPC object. Over the last week I've been implementing the different data models, based on my understanding of JSON-RPC and how it's supposed to work. I wrote up a wiki on my GitHub outlining what the data models would look like.
So I've implemented a JSON-RPC request object which will sit inside the DIDComm RPC request, and the response and error objects — again, JSON-RPC-specific objects — sit inside the DIDComm response object. And I've mapped out the different error codes. I can show you. We've decided to implement the code as a plugin, so it's not directly in ACA-Py but will be in the aries-acapy-plugins repo — it's opt-in. Effectively, this is the code showing what the models look like: here we have the RPC request model, the response model, and the error models, and then these are the messages that wrap, or contain, those requests. I was trying to look at examples of how things have been implemented in previous protocols and plugins, and otherwise going on my own understanding. The specification requires that some sort of state be tracked between requests and responses, between the client and the server, so I thought that perhaps the requests should extend the base record, since they're meant to be stored. I wasn't sure if that's exactly correct, so I'm hoping that when I push this code, somebody with more long-term expertise in ACA-Py can tell me whether it is. Anyway, I've gone ahead and done that. It's got validation completely built in. I've also started writing up the routes, and the JSON objects that the user would input into the route. For now, there are two routes: a POST endpoint to send a request to a server agent, and a response endpoint from the server back to the client. I have a local ACA-Py instance with the plugin running, and I can show you what those endpoints look like. As I mentioned, there's a precondition that the two agents already have a connection, so I've added the connection ID as a parameter here.
It's a required parameter. So essentially you would establish your connection, and then you could send a DRPC request or a response. As you can see here, the request is basically just an object that has to conform to a JSON-RPC object, and this all has validation built in. This is a valid message — but if I, for example, remove a required parameter, you can see that right away you'll get a 422 back, with all the relevant messaging telling you what's wrong with your request or response. The cool thing is that the request can be an array or an object, so you can have multiple requests at the same time and it'll still be valid. And then this is the example response back. The other question: based on what the RFC says, it needs to have an ID and a type, and I've added the connection ID in there. And then this would be the request — when a user makes a request, this is what they would get back. They'd basically create a message that would then be sent off to the server. Inside that would be the JSON-RPC request that they made, which connection it was on, the ID, the type, and then a state associated with it. For that, if you look at the request: what I've done is taken the agent message schema and extended it with the DRPC request record, and that's where I'm getting this structure. Now, I don't know if that's correct — again, I'm basing it on what's in the specification. In other examples I've seen, this object is usually nested within the agent message, but I've extended it so that it's at the same level, so to speak.
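The "array or object" behavior described above could be handled by a small normalizing helper along these lines — illustrative only, not the plugin's actual code:

```python
def normalize_rpc_payload(payload):
    """Accept either a single JSON-RPC request object or a batch (array),
    as the JSON-RPC 2.0 spec allows, and return a list either way.
    Hypothetical helper for illustration."""
    if isinstance(payload, list):
        if not payload:
            raise ValueError("empty batch is invalid JSON-RPC")
        return payload
    if isinstance(payload, dict):
        return [payload]
    raise ValueError("JSON-RPC payload must be an object or an array")

print(len(normalize_rpc_payload({"jsonrpc": "2.0", "method": "ping", "id": 1})))  # 1
```

Downstream validation then only ever deals with a list of request objects, whichever form the caller sent.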
So, yeah, I have all of this written up. The next steps are essentially being able to post the message, store it, receive it on the other end with the handler, and then notify the controller — I guess through a webhook. That's all really straightforward; I don't anticipate it being a problem. And then writing the integration tests between the two agents. So I just wanted to give a quick update that this is coming along. I don't think it'll take more than a day or so to finish up the bulk of this, because most of the work was figuring out what objects to use and the validation behind them; the majority is completed in that regard. Patrick, you have a question? Yeah — I can relate to trying to understand specs; it's not always easy. Have you found any other implementation of this, or any way to validate the work once it's working — to validate against another implementation? Are there any leads for that? Yeah, so what I did was follow a blend of the basic message protocol and the action menu protocol. Basic message is a lot more simplified — there's no request/response in that protocol, so it's not really useful in that sense — but action menu has components of a request/response. They use base record as well, I think, though they might use base model. And I didn't want to go so far as to use an exchange record, because I don't think this requires that. But again, if anybody here who's an expert with the code could let me know whether I'm on the right track. In terms of validating whether this will work, I've written a ton of unit tests for validating the message, because the majority of the protocol is making sure the objects, the requests, and the responses conform correctly.
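For reference, the conformant shapes being validated come from the public JSON-RPC 2.0 specification — shown here as plain dicts, not the plugin's own objects:

```python
# Canonical JSON-RPC 2.0 shapes, per the public spec.
request = {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
success = {"jsonrpc": "2.0", "result": 19, "id": 1}

# Error objects carry a code and a message; for an unparseable or invalid
# request, the spec says the response id is null.
error = {"jsonrpc": "2.0",
         "error": {"code": -32600, "message": "Invalid Request"},
         "id": None}
print(error["error"]["code"])  # -32600
```

The reserved error codes (-32700 parse error, -32600 invalid request, -32601 method not found, etc.) are fixed by the spec, which is presumably what "mapped out the different error codes" refers to.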
So error objects have to be of a certain type — they have to have a null ID, for example, and they have to have a code and an error message — and the request and response messages have to conform to a specific structure. The majority of the work has been making sure that validation is correct. I was sitting at 100% unit test coverage; then I wrote up the routes and that dropped it, so I'll beef it back up to 100%. Most of the remaining work is making sure the integration tests work: if I send a request, I get the relevant response back; if I send a bad request, I get an error back. And when you say these errors, you mean like when you send a POST request — validating the request that was sent? Yeah — for example, let's say I take out this jsonrpc member, which is required in the JSON-RPC request object. I just want to make sure that I get the 422 back and it tells me that I'm missing data. That's all built into the unit tests. Yeah, that's interesting. It's similar to what I was saying about validating the requests on the VC-API: it'd be nice to have a way to handle all the errors with a common pattern, and clearly return "this is the error code, this is the message" and so on. I was looking at the aiohttp-catcher module, which is a sort of middleware to handle all errors. But yeah, maybe we could connect on that and discuss your approach. I'd be curious to... Yeah, for sure. I'm actually relying on Marshmallow — Marshmallow has all the validation built in, and the error response that's returned comes directly from that package. I actually haven't implemented any error handling in the route yet; it's basically aiohttp-apispec and Marshmallow. So actually, let me... That's what I'm looking at too, yeah.
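A common-pattern error formatter of the kind being discussed might look like this sketch — the function name and body shape are my own assumptions, since at this point the code just surfaces Marshmallow's default output:

```python
def format_validation_errors(messages, status=422):
    """Turn a Marshmallow-style ValidationError.messages mapping
    (field -> list of error strings) into a consistent error body.
    Hypothetical helper, not existing ACA-Py or plugin code."""
    return {
        "status": status,
        "errors": [{"field": field, "reasons": reasons}
                   for field, reasons in sorted(messages.items())],
    }

body = format_validation_errors({"jsonrpc": ["Missing data for required field."]})
print(body["status"])  # 422
```

Something like this could sit in a middleware (which is roughly what aiohttp-catcher provides) so every endpoint returns errors in one shape.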
So basically the request schema and the response schema are the ones doing the validation. If the request doesn't conform to the right structure, then this automatically returns errors. As you can see, the request schema requires this request field, and all of it has validation built in — it has to have a specific structure. Yeah, I'm on the same page; I'm also playing with the Marshmallow schemas. Maybe what I'll do is post this as a draft PR — then you'll have it as a reference, and we can also chat over Discord. Yeah, that'll be really useful, to have somebody to bounce ideas off. So again, if anybody on this call is an expert on building out protocols: I've never seen any examples of anyone actually extending the agent message schema with a specific schema of their own, but that's what I did. I'll have a more comprehensive update, and hopefully a demonstration, soon. Excellent. Yeah, I think doing a PR is always good — just flag it as draft in GitHub, and then at least people can take a look and start giving you feedback more easily. Yeah, sounds excellent. All right, thank you. Yeah, action menu really should be the one that's closest; it's very similar to what we're trying to accomplish. Oh, look at that — my new mute button worked nicely when the coffee machine went off. Okay. Any other status updates? Anyone else want to talk about work in progress? There's a lot going on. Okay — oh, Sheldon, yes. I just wanted to back up a bit to the did:peer stuff. In the Aries Agent Test Harness, there is now a full suite that will test interop between frameworks for did:peer 1 through 4.
Of course, that's only working right now with ACA-Py, with did:peer:2 and 4, like you said. The next step is that we want AFJ to play nicely in this new suite of tests, so there's some work there that we could use some community help with, if anybody wanted to do that. We have to upgrade the AFJ backchannel controller for DID Exchange and Out of Band, so any help there would be appreciated. All this is also getting ready to do the DID rotate tests in the future. And Sheldon, I think you also needed that ability you've now got in ACA-Py, to restart in the middle? Yes, which is there and working now. Obviously that's going to be different for every framework. Yeah, exactly — that's going to be case by case, so it'll also have to be done for AFJ. AFJ actually can restart, but it only takes the transport protocols as reset settings; it has to be able to take a bunch of other stuff as well. Okay, next up is the next release. I think we're ready for a new release — we have 53 PRs since we last released. 0.11 came out on the 24th of November, and these are the PRs since then. Some will be dropped off, because there are things that were in the anoncreds branch and the like, but we'll see; I'll go through and figure out what we need. The thing I wanted to raise with the community — and I don't know if people have opinions — is what we call it: 0.11.1 or 0.12.0, depending on whether there are breaking changes? There probably are breaking changes, but I'm not sure right now. Or is it time to go to 1.0? We did make that failed attempt at calling things 1.0 a year ago, and then backed off because we didn't feel we were far enough along with the AIP 2.0 support. Somebody in the community raised the question of what we had in AIP 2.0 and what we are missing.
The only thing we're now missing is some DIDComm v2 support and please-ack. I'm not even sure what that entails — we'd have to check; it's very low level. So we need to look at that. There used to be a bunch of things on this list, and now we're down to pretty well nothing. Please-ack will not be done, and will likely be removed from AIP 2.0, since no one has done it, it's super hard to do, and it doesn't really make sense to do. So if the bar for 1.0 was AIP 2.0 support, then I think we've got there. Any comments from anyone? This is where marketing would be a huge help, because I just keep incrementing the number. My gut feeling is to go for 0.12, but that's just my little opinion. Yeah — the other way to go is 0.12, and then immediately do a 1.0 that amounts to 0.12, so we don't risk the mess I got into last time I tried to go to 1.0 without knowing the future. Okay, that's good logic: do 0.12, because there are so many things in it, and then simply declare that after 0.12 comes 1.0. Yeah, I would say 0.12 and after that 1.0. I feel like there are a lot of new features added, so they should be released and given a little time, and if everything is well, then go to 1.0. Good logic. Okay, I like that thinking. I'm going to go ahead and do a 0.12.0rc0 as soon as I can — it will of course be a release candidate first; we won't go directly to the release. The things I want in this release: this DID validation. These are really easy ones, but I'd really like to get them done because they're just painful. Issue 2714 is where we're able to make a connection with a did:peer:2, but then fail on the credential request because the holder DID is being checked — and the holder DID should not be checked at all. So that one should be a really quick change.
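For context, the fix amounts to validating against the generic DID syntax rather than a method-specific pattern. A simplified check along the lines of the W3C DID Core ABNF — my own sketch, not the actual ACA-Py validator — might be:

```python
import re

# Generic DID syntax, simplified from the W3C DID Core ABNF: "did:", a
# lowercase alphanumeric method name, then a method-specific id that may
# contain ':' separators, '.', '-', '_', and %-encoded bytes. A check like
# this accepts did:peer (and any other conformant method), whereas the bug
# described above comes from checking against a narrower pattern.
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9.\-_:%]+$")

def looks_like_did(value: str) -> bool:
    return bool(DID_PATTERN.match(value))

print(looks_like_did("did:web:example.com"))  # True
```

The point is that framework-level validation should only assert "this is a DID," leaving method-specific rules to the resolver for that method.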
The next one is issue 2610, where we receive a did:peer:1 and we say, oh, we don't like did:peer:1, even though we can process and handle it. So again, it's just that our DID checks are not being updated as we update code in ACA-Py and add features. Those two definitely need to be done. It would also be very good to see DID rotation completed. This has been on our list for a bit, but the person working on it has been pulled in other directions. For DID rotate there's an issue, and a partially done PR from Daniel Bloom; we were going to take it over, but just haven't had the resources yet. And then the PR we talked about earlier from Patrick, which I'd like to get in place for this release. Any other PRs, issues, or capabilities we'd like to see in a new release immediately? There's a pending thing for me — very minor — to update the pydid package. Daniel and I did a new release with a small validation change on the documents — a service type — to enable lists. We just need to bump the version by 0.1. Is there an issue for this already? Yes, there is; I just need to do a PR for it. The package is released. If you want to look up the issue number while you're on the call, let me know. Oh yeah, I can give you the number. Thank you. There was another issue relating to the traceability stuff: how the verification endpoint goes to fetch the verification method, for did:web verification methods specifically. It currently uses the PyLD library and tries to frame the verification method ID, and there's some kind of bug with a specific context. So I'm going to get a small PR in so that, in the case of a did:web verification method, we fetch the verification key an alternate way until that error is resolved. Okay — and that's the one where you do the workaround PR until the upstream fix. Okay, good.
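The lookup-by-id workaround being described might look roughly like this — a sketch under my own assumptions; the actual PR may differ:

```python
def find_verification_method(did_document, vm_id):
    """Look up a verification method by id directly from a resolved DID
    document, instead of JSON-LD framing. Illustrative sketch of the
    workaround described above, not the PR's actual code."""
    for vm in did_document.get("verificationMethod", []):
        # ids may be given absolute, or as a relative fragment like "#key-1"
        if vm.get("id") == vm_id or vm.get("id") == did_document.get("id", "") + vm_id:
            return vm
    raise KeyError(f"verification method {vm_id} not found")

doc = {"id": "did:web:example.com",
       "verificationMethod": [{"id": "did:web:example.com#key-1",
                               "type": "Ed25519VerificationKey2020"}]}
print(find_verification_method(doc, "#key-1")["type"])  # Ed25519VerificationKey2020
```

A plain scan like this sidesteps the PyLD framing bug entirely, at the cost of not normalizing the document against its context first.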
So instead of framing, we just resolve the document and find that ID manually in Python. And you're the one that knows that one, so you're going to do it, right? Yeah, I will give it a good shot. Good, thank you. I just need to find a way to resolve the data — to look a bit at the resolver endpoint and how it resolves. The functionality is there; I just need to bring it to that endpoint. Okay. Another thing that was put in is support for registering a did:web. It doesn't do anything beyond that, but it means you can use did:web when registering a DID in your wallet. In the future, I'd like to explore, when you register a did:web, also registering a verification method ID along with it, statically, so you wouldn't have to provide it to your issuance endpoints. Currently, if you issue with a did:web, you need to provide the verification method in the options field, and it would be best if ACA-Py knew that a registered did:web has a default verification method associated with it. But that's a maybe — a nice to have. Okay, we'll leave it at that. Yeah, that sounds good. Okay, any others? Excellent. Okay, next up: the Aries Cloud Agent roadmap. As mentioned last time, each of the projects in Hyperledger has to produce this quarter not a quarterly report but an annual report, and there's a process for it — a new template that we use. So that needs to be done. I've gone ahead and written drafts of the various annual reports. Anyone who's been on an Aries, Indy, or AnonCreds meeting has seen these various things in here. I'm definitely looking for input on this — I want to make sure I'm capturing the goals of the community.
The goals for the Aries report were explicitly discussed in the Aries working group call from, I believe, two weeks ago, which Sam led — so these goals do have community feedback and information in them. From an ACA-Py perspective, I wrote in the things from that list that are applicable to ACA-Py in particular. The current work above — basically everything that's in process — was mentioned in the Aries report and is obviously extremely relevant to ACA-Py, so let's get those completed. DID rotate leads towards DIDComm v2 and/or whatever happens with the Trust over IP trust spanning layer. did:webs is something that we think is important, but we're a little concerned about the direction did:webs is going — it does seem to be overly complicated, and the introduction of KERI goes further than we had expected or wanted. But ideally, at some point this year, we get a registrar for a did:webs DID and a resolver for it, so that an ACA-Py agent, just like it can create an Indy DID, can create and register a did:webs DID — managing the keys within ACA-Py, but publishing via a registrar, and then resolving via a resolver. The resolver is fairly easy, although it would need a dependency on some of the KERI work. Obviously we always want better documentation, tools, and webinars. In particular, I wanted to highlight the scalability report coming out of the load test generator work, and then using ACA-Py beyond DIDComm and AnonCreds — some of the work Patrick was talking about with the traceability suite, the SD-JWT work that Char did, the OpenID for VCs work that Indicio has done, and so on. Release 1.0 and an LTS is, I think, a big goal that the ACA-Py community wants to have.
And then other goals would be looking at other VC types — mDL probably being the most important one we haven't really touched yet — and what it would take to support them; and expanding the credential exchange protocols, which ties into the OpenID4VC support and potentially other protocols that are around. Any other items that those who deploy and use ACA-Py think are missing from this list, or that should be moved around, dropped, or de-emphasized? Cool. All right, I do encourage you to look at the draft annual report. I'll copy the clean link and put it into the chat, if I can find the chat... there we go. Here is the annual report — close enough. Good. Any other topics? I think we've covered the PRs ready for review. I guess we should look into — Ian, this one is ready to go? Yes, that one's ready to go. Okay, let's get it done; I'll make sure we've got reviewers on it. I'm not sure where the next one is at; I'll take a look. The AnonCreds schema one has changes requested — I believe Daniel requested some changes, so they might already have been done; I'm not sure of Jamie's status on that, but we'll see. Patrick's will be merged soon — that's the updates to the design of the AnonCreds W3C VC format. Good. And then these again have an X on them; presumably the documentation is pretty straightforward, but I'll take a look after this call at why there's an X there — documentation is unlikely to have testing problems. Good. Any other topics for today's meeting? I'm surprised how quickly we've gotten through this. Excellent — okay, well, with that, we'll wrap up and move on with the day. Thanks all for joining, and we'll see you in a couple of weeks. Thanks, Stephen. Thank you, take care.