All right, recording should be started. Welcome to the ACA-Pug meeting for May 16, 2023 — I got the date right. I have to mention that this is a Hyperledger meeting and therefore the Hyperledger antitrust policy notice applies; you can read more about it on the meeting page, which I will post in the chat in a second. I just need to close the page and I will also start sharing — I apologize for the tardiness. All right, so I'm putting the link to the page in the chat if anybody wants it, and I will be sharing my screen. There you go — I think you can all see my screen now; it's on the meeting page.

Today we have a couple of different items on the agenda. The first one on the list is the update on the Code With Us for the AnonCreds RS library in ACA-Py. Char, I don't know if you guys are ready to provide an update and demo — I believe Stephen said that Daniel Bluhm is sick today, so he was not going to be able to attend.

Yeah, I can give a quick update whenever you're ready.

Awesome, yep. Then I'll stop sharing, and feel free to share if you need.

Oh, great. I'll just speak for now and give a quick update. We are wrapping up the revocation work. This, of course, has been a lot slower than we anticipated — we've been short on resources lately, and the sheer complexity of revocation has slowed things down. But we're working on moving logic out of the old Indy components so that it isn't distributed across many components, refining the new AnonCreds admin API endpoints, and reducing some redundancy we've discovered in storage during the revocation process. We have finished updating to the more recent build of AnonCreds RS, along with the minor interface changes that came with it.
We completed the tails file handling change: to solve the circular dependency issue with the revocation registry ID, we're accepting the tails file by hash and just doing basic hash validation on the contents of the file. So we're moving forward despite being short on resources, and shooting for completion soon.

Awesome, thank you. That's going to be exciting to have ready to go.

Yeah, absolutely. Thanks.

Great. I forgot to ask at the very beginning, so I'm going to do it now before I move forward: if there's anybody new on the call who wants to introduce themselves and say hello to the rest of the group, please speak now. It does not sound like there's anyone, so I'll carry on with the agenda.

The next topic is the webhook issue. As you may have heard in the last meeting, we encountered a problem with one of our BC Gov users: webhooks were being fired twice with the same event state. This was an unintended bug introduced by a change meant to prevent webhooks from getting into an unknown state. The issue has been fixed and will go out in the next release of ACA-Py. And then the suggestion that Kim Ebert made — to actually not fire the first webhook for multi-use connections — made a lot of sense, so we went that way. That has been wrapped up and should be ready to go without causing any more trouble.

We are doing a bunch of work on the mediator functions in ACA-Py, adding Redis persistent queues. In particular, Jason Sherman and Jason Syrotuck are working on it. I don't know which one of you two is on the call and wants to provide an update on the work being done — let me know if you need to share the screen; I can stop sharing as well.

I can provide a quick update; we don't need to share the screen. So, Shaanjot previously developed a plugin to store all events in a queue in Redis.
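As a rough sketch of the accept-by-hash tails handling mentioned earlier in this section — assuming the hash is a hex-encoded SHA-256 digest, which is an assumption; the actual ACA-Py endpoint may use a different encoding, such as base58:

```python
import hashlib


def validate_tails_file(contents: bytes, expected_hash: str) -> bool:
    """Check an uploaded tails file against the hash it was uploaded under.

    Assumes a hex-encoded SHA-256 digest (an illustrative choice; the
    real endpoint's encoding may differ). Accepting the file by hash
    avoids the circular dependency on the revocation registry ID, since
    the registry ID does not need to exist before the upload.
    """
    digest = hashlib.sha256(contents).hexdigest()
    return digest == expected_hash


# Accept the upload only if the hash the client supplied matches the body.
data = b"tails file bytes"
supplied_hash = hashlib.sha256(data).hexdigest()
assert validate_tails_file(data, supplied_hash)
assert not validate_tails_file(b"tampered contents", supplied_hash)
```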
This would be important for a high-availability instance: if it's in the middle of processing something and the pod gets evicted, or some other issue comes along, whatever it's processing would get lost. So the idea is to store everything in Redis — and again, Shaanjot did all that work. We have a need to apply that functionality to an ACA-Py that's acting as a mediator, which is going to have a lot of throughput and a lot of events to manage. So we've been working to combine those two features into basically one deployed instance of ACA-Py behaving as a mediator with Redis. Sherman can speak for himself more, but he's been standing that up and looking at load testing and some performance metrics, and I've been looking at standing up and deploying Redis — all of which validates the work Shaanjot has done and gets us up to speed on it. We've now been able to deploy it locally, with some small errors and quirks that we're still trying to work out. But I think we're close to being able to show something soon that brings these two features together and provides a really valuable feature set.

Awesome. Do you want to say something?

Yeah, so Kim Ebert had done some load testing for the mediator using AFJ, running out of AWS pods — if he's on the call he can confirm — but we wanted to get some metrics from the server side, so from how the mediator itself was working. I was working on getting an ELK stack in place, which is now in the main ACA-Py repo, so anybody can use it if they want to do some local metrics testing. And as we were going through that, we found a bug in multi-tenancy mediation when the tenants themselves are in charge of the mediator — so not a global setting saying all tenants use this, but each tenant doing it themselves. We fixed that bug, and that's in place.
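The durability idea described above follows a standard reliable-queue pattern. Here is a minimal in-memory sketch of it — not the actual plugin's code, which stores these queues in Redis (e.g. with list-move operations) so a restarted pod can re-queue anything that was mid-flight:

```python
import json
from collections import deque
from typing import Optional


class PersistentEventQueue:
    """In-memory sketch of the reliable-queue pattern described above:
    events are pushed to a main queue, moved to a 'processing' queue
    while being handled, and only deleted once handling succeeds. In
    the real plugin both queues live in Redis, so a crashed or evicted
    pod loses nothing."""

    def __init__(self) -> None:
        self.main: deque = deque()
        self.processing: deque = deque()

    def push(self, event: dict) -> None:
        # Events are serialized before queuing, as a network store requires.
        self.main.append(json.dumps(event))

    def claim(self) -> Optional[dict]:
        # Analogous to an atomic Redis list-move: park the event in
        # 'processing' while a worker handles it.
        if not self.main:
            return None
        raw = self.main.popleft()
        self.processing.append(raw)
        return json.loads(raw)

    def ack(self) -> None:
        # Handling succeeded: the event can finally be dropped.
        self.processing.popleft()

    def recover(self) -> None:
        # On restart, anything never acknowledged goes back to the front.
        while self.processing:
            self.main.appendleft(self.processing.pop())


q = PersistentEventQueue()
q.push({"topic": "connections", "state": "active"})
event = q.claim()   # worker takes the event, then crashes before ack...
q.recover()         # ...so on restart the event is re-queued, not lost
assert q.claim() == event
```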
And yeah, like Syro said, we're trying to set up a basic mediator without Redis, get some performance metrics on that, then put up the one with Redis and see what the gains are, if any. So hopefully by next meeting we'll have some metrics. But like Syro said, we're just working through a couple of little kinks — progress.

Something else I want to share — I'm not sure if it's directly related or not — is that I also deployed Redis, the cluster, and the plugin into the Aries Agent Test Harness, ran that test suite with an ACA-Py that was actually using this Redis plugin, and saw no change at all, which is perfect. So that was something concrete we've been able to validate the work with. And there's a pull request to add Redis as a common service there going forward — so an update on the test harness as well.

Awesome, sounds great. And yeah, the Aries Agent Test Harness kind of came to the rescue as an orchestrator — seems like a great idea. We're probably going to be doing tests with these kinds of scenarios a lot more in the future. That's awesome. Does anybody have any comments or questions about this topic? It doesn't sound like it. Cool.

Next in the queue is Shaanjot. We have a couple of changes that we're pushing to ACA-Py to meet some of the requirements we have from the BC Gov perspective of hosting multi-tenanted agents. I'll let Shaanjot speak more about it. The two main items are tenant-specific settings, and how to let each tenant receive their own logs for consumption. So I'll turn it over to Shaanjot. Do you need to share the screen?

I do.

Okay, I'm going to stop sharing on my end then.

Okay, can you see the HackMD document?

Yep.

Okay, so the first item is the ability to configure startup flags — the settings — at the tenant level.
So basically, when handling the sub-wallet — when we are creating the sub-wallet for a tenant, or updating it — we add the ability to specify settings that get applied to the profile associated with that tenant. This is the particular issue, and these are the startup flags that were required to be settable at the tenant level. I have a PR for that that is ready for review. The gist is that we added an extra settings dictionary field to the request body for these two endpoints: the POST endpoint for creating a new sub-wallet, and the second one for updating an existing sub-wallet for a tenant. In the extra settings dictionary, each key is the name of a startup parameter — all of these are supported — and the value is the value you want to set. So in this example, for the ACA-Py log level, it could be info, debug, error, critical, or warning; and this is an example of the other two parameters, which are boolean, so true or false. That's the example I included here. The PR is working: I tested it with the debugger — simply using these endpoints, playing around with these settings, then calling the get-connections endpoint with the debugger attached — and the settings were being updated at the tenant profile level. That's how I verified it is working as intended. So that's the status of the first issue.

The second, as Emiliano mentioned, is to have log access at a per-tenant level, in a form that can be consumed or ingested by an ELK stack — so you can generate visualizations, or query Elasticsearch by the tenant identifier to look up all the logs for a particular tenant.
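To illustrate the shape of the per-tenant settings flow described above — note that the flag names and their mapping onto profile settings here are assumptions drawn from common ACA-Py startup parameters, not the exact contents of the PR:

```python
# Hypothetical allow-list mapping endpoint-style keys to profile settings.
# These names are illustrative, not the PR's actual mapping.
ALLOWED_TENANT_SETTINGS = {
    "ACAPY_LOG_LEVEL": "log.level",
    "ACAPY_INVITE_PUBLIC": "debug.invite_public",
    "ACAPY_PUBLIC_INVITES": "public_invites",
}


def map_extra_settings(extra_settings: dict) -> dict:
    """Translate keys from the request body into profile settings,
    rejecting anything outside the supported allow-list (only a
    reduced set of startup flags is tenant-configurable)."""
    profile_settings = {}
    for key, value in extra_settings.items():
        if key not in ALLOWED_TENANT_SETTINGS:
            raise ValueError(f"Unsupported tenant setting: {key}")
        profile_settings[ALLOWED_TENANT_SETTINGS[key]] = value
    return profile_settings


# e.g. a body for the create- or update-sub-wallet endpoint:
body = {
    "wallet_name": "tenant-123",
    "extra_settings": {"ACAPY_LOG_LEVEL": "debug", "ACAPY_PUBLIC_INVITES": True},
}
assert map_extra_settings(body["extra_settings"]) == {
    "log.level": "debug",
    "public_invites": True,
}
```

The allow-list is the key design choice: tenants may only override a vetted subset of flags, so they cannot reach settings that affect the whole multi-tenant instance.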
So again, this is the issue, and I have a draft PR in ACA-Py. This PR builds upon the per-tenant settings — the extra settings field I was showing above. In there, I added another flag, basically an ACA-Py log alias. This is a human-readable tenant identifier: if the tenant ID is, say, 123, and you want the logs for that tenant to be identifiable, you can put in 123 as the log alias, so that when the logs are generated — this is a sample log — the tenant identifier is included inside these braces. Then you can query Elasticsearch for the identifier within the braces and grab all of that tenant's logs.

In terms of implementation, in this particular file — just one single file — I put in a utility function, get_logger_inst. It accepts the settings from the profile and a logger instance: any Python file in ACA-Py that calls it passes in the logger it created with the getLogger function. Then there's the log.file setting, which is already implemented — that's the file path where you want the logs written. The idea is that we want the logs written to a file in addition to how ACA-Py already outputs them to the console. So using the log file, it creates a file handler and adds it to the logger instance — that's for the file — and then a standard-out stream handler for the console, which is also added, and the logger is returned. This way we avoid a lot of code duplication: each of those files just calls this utility function and passes in its logger.
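A minimal sketch of the utility described above — the function and format names here are illustrative stand-ins, not the draft PR's actual code:

```python
import logging
import os
import tempfile


def get_tenant_logger(base: logging.Logger, alias: str, log_file: str) -> logging.Logger:
    """Sketch of the per-tenant logging utility described above (names
    are assumptions): augment a module's existing logger so records go
    both to a file and to stdout, with the tenant alias embedded in
    every line so Elasticsearch can later be queried by tenant."""
    fmt = logging.Formatter(f"%(asctime)s [{alias}] %(levelname)s %(message)s")

    # File handler: write to the path given by the log.file-style setting.
    file_handler = logging.FileHandler(log_file)
    file_handler.setFormatter(fmt)
    base.addHandler(file_handler)

    # Stream handler: preserve the existing console output.
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(fmt)
    base.addHandler(stream_handler)

    return base


# Each module keeps its own logger instance and passes it through once.
log_path = os.path.join(tempfile.mkdtemp(), "tenant.log")
logger = get_tenant_logger(logging.getLogger("demo"), "tenant-123", log_path)
logger.warning("credential offer sent")
with open(log_path) as f:
    assert "[tenant-123]" in f.read()
```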
And if the logger needs to be updated with the handler classes, it will do that and return the logger, so we can achieve this with minimal changes. That's what I have, Emiliano.

Great. I have a question here: the alias for the tenant — is there any restriction on what kind of name you can add? Would we be able to use the wallet name that was set at tenant creation for that, possibly? What are the thoughts behind that?

It could be anything — just something that uniquely identifies the tenant.

The only concern I'm wondering about is: if the tenant is able to specify their own name and, for whatever unlucky reason, they pick a name already used by somebody else, then we might end up with overlapping log statements.

As Shaanjot mentioned — just to give a quick overview of these two items — the idea here is to be able to generate a log file in which each log statement is scoped to the tenant that triggered it. That way it can be consumed by a platform such as Elasticsearch or Splunk, whatever your choice is, and you can set up log aggregation and log streaming for each tenant by sending the correct query parameters, without having ACA-Py deal with that itself — that seemed a little bit overkill and probably not a good separation of concerns. And for the part about the tenant settings, as you may have noticed, there's a reduced list of the actual startup parameters. This is a first cut: we picked the items most commonly used, at least in our BC Gov deployments and use cases, but the pattern that Shaanjot demonstrated can obviously be expanded to add other parameters. Is there anybody else using multi-tenancy? Oh — Jason, you have your hand up.

Just got to find the mute button. Are there going to be issues with file locks there? Like, if there are a hundred tenants all spewing out info-level logs, that could be quite a lot of writing, right?
So have we considered that? I'm not quite sure how grabbing file handles and things like that is going to work when there are hundreds of tenants writing to — I'm assuming — the same log file.

Yeah, that's a good question. I have not thought about it personally; we did not get that far. I'll let Shaanjot chime in.

Me too — I'm not too sure. I'll have to look it up, and maybe devise a way to test it to come up with an answer, but I'm not sure how that will be handled with this implementation.

Okay. My initial feeling is that if it is being handled right now already, just without a scope in the log statement, it should be possible to get something similar going without major concerns — but it's a good point.

I think each file will have its own logger instance — the base connection manager, for example, will have its own logger instance; the dispatcher or anything else will have its own. So those are separate from each other, which limits conflicts somewhat — it's not like there's just one logger handling everything.

Jason's point was good, though. Once we have the code finalized, we might want to test it, maybe with some of the load-testing scripts being developed for mediation — maybe we can modify and reuse the same pattern to try something like this.

I was just going to suggest that — this should shake out in a load test of some reasonable size, to see whether there are going to be issues there.

Yeah, that's a very good point, thank you. Anybody else with comments or questions about either of these two topics?

I'll just say I'm pretty excited about having those settings moved up to the tenant level — that's pretty cool. I think it's a really good project, yeah.
Yeah, one of the things that drove the decision, at least on our end, is that we were trying to consolidate a lot of our deployments, and having tenants able to specify their own settings, rather than having to depend on the main instance settings, is obviously a very big requirement. So we're looking forward to it as well.

All right, it sounds like nobody else has questions about the topics we presented. That was everything we had prepared for this session, so we can have some open conversation if anybody wants to suggest topics or bring up particular items.

Yeah. Can you hear me?

Yes, yes.

Okay, I'm just going to start the video to present myself shortly. I work for IDLab; we are a non-profit organization, and we are involved in working with a few of the provinces in Canada. During one of our meetings — more on the DevOps side, let's say — the question of hardware enclaves was raised. To give you an idea, it comes from a security point of view: instead of storing the private keys inside any software, you delegate that to a hardware enclave, where those keys are stored and some calculations with those keys are made. So essentially you're exporting part of your cryptography to a hardware enclave. Of course, in the cloud there's no such thing as hardware separation — in your phone you have a specialized module for that, but in the cloud you don't, so essentially a dedicated instance would be created. So my question is: since the Ursa library has now been, how shall I say, deprecated, and ACA-Py is bringing the cryptography back inside some library, I would like to know whether there has been any prior discussion on hardware enclaves and how they could interface with ACA-Py, or whether it's something totally new to the community.

Thanks for the question. I personally don't know about anything around that.
I'm virtually looking at Andrew, as he's probably the most knowledgeable person in that realm. Do you have any comments, Andrew?

Support for hardware enclaves for credentials, was it?

For secrets — for the keys that manage the wallet in particular, Bruno was mentioning.

Yeah, essentially — and I'm asking on behalf of some of our province friends — the industry standard right now is that when you have a private key to manage, you don't store it in your software at all, not even in memory; you delegate that to a hardware enclave. So I would like to know if there has been discussion around that — I'm under the impression that there hasn't — and then we'll have to see how we can help in producing it, if it's even possible.

I don't think it's something that's on the roadmap for ACA-Py right now, but it should be technically possible for your DID signing keys. It depends on the algorithms supported in the hardware wallet, but generally they would support things like P-256 and Ed25519. For credentials it's harder, because we can't put CL signature keys in there, but we could do things like JWTs or SD-JWTs using hardware wallets as well. And internally, ACA-Py generally uses dependency injection for these things, so it would be a matter of injecting a remote wallet instead of a local one.

Okay, I see. Well, it's the start of an answer. I understand it's a complicated topic and it's coming a bit unannounced, but okay.

We do generally use async methods for things like signing, so it is possible to make external calls to do that. Architecturally, a better design might be to queue up those requests and continue processing after they've been performed, because some of these KMSs are more efficient if you can batch your requests — batch your signing requests and send them all together.

Okay, perfect. Thanks, Andrew.

I see Alberto has his hand up now.

Yep.
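The queue-and-batch design Andrew describes can be sketched as follows — this is not ACA-Py code; the KMS call is a fake stand-in (a real enclave or KMS would be a network call), and all names are illustrative:

```python
import asyncio
import hashlib
from typing import List


async def kms_batch_sign(messages: List[bytes]) -> List[bytes]:
    """Stand-in for one round trip to an enclave-backed KMS that signs a
    whole batch at once. A hash substitutes for a real signature here."""
    await asyncio.sleep(0)  # simulate the network round trip
    return [hashlib.sha256(m).digest() for m in messages]


class BatchingSigner:
    """Callers await individual sign() calls, but pending requests are
    collected and flushed to the KMS in a single batched round trip."""

    def __init__(self) -> None:
        self._pending: list = []  # (message, future) pairs

    async def sign(self, message: bytes) -> bytes:
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._pending.append((message, fut))
        if len(self._pending) == 1:
            # First request in this batch: schedule one flush for later,
            # so other concurrent sign() calls can join the batch.
            loop.call_soon(lambda: asyncio.ensure_future(self._flush()))
        return await fut

    async def _flush(self) -> None:
        batch, self._pending = self._pending, []
        signatures = await kms_batch_sign([m for m, _ in batch])
        for (_, fut), sig in zip(batch, signatures):
            fut.set_result(sig)


async def main() -> None:
    signer = BatchingSigner()
    # Two concurrent sign calls are served by a single KMS round trip.
    sigs = await asyncio.gather(signer.sign(b"msg1"), signer.sign(b"msg2"))
    assert sigs[0] != sigs[1]


asyncio.run(main())
```

The point of the pattern is that ACA-Py's async signing methods could keep processing other work while the batch is in flight, matching the efficiency characteristics of batch-friendly KMSs.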
Hey guys, Alberto here from Instant. I've been following some of the people here. We have different actors in this SSI ecosystem: we use ACA-Py for the issuer, the verifier, and of course the mediator, and we also participate a little bit in the Bifold channel and its group meetings — we based our wallets on the open-source Bifold. Right now we're mostly experimenting: connecting them, communicating between them — so I'm mostly just trying to learn. I know in the past there was a race condition, if I'm not mistaken, with the mediator, and I believe it was addressed with the migration to Askar in the 0.8.0 or 0.8.1 update — I'm not sure which one had those changes. I guess I have a few questions. The first is that I don't think there are any release notes on how to update. And for some context, we currently do not have anything built in a production environment where we need to migrate things; we're open to starting from zero. So what would be the proper way to launch a 0.8.1 ACA-Py mediator so we can get that race condition fixed on our end? Because we did see the issue on the Bifold side.

Basically what you're asking is how to update from the Indy wallet to the Askar wallet and follow that migration to 0.8.1?

Yeah — if that's the best option. If we can launch a whole new instance from zero, we can do that as well, because like I said, we're not running anything in production right now.

Yeah, we have updated our instances from 0.8.0 to 0.8.1 — Wade Barnes completed the process, so I'll turn it over to him for some input. I know there are some scripts that can be used.

Right. Yeah, I've got some notes — I gave somebody on Discord some rough notes that I did.

That's me! Yeah, I saw that, and I've been working with it.
I just haven't had time, since I'm kind of all over the place with Bifold and trying to find the best way to start, and I haven't answered your thread — but I do appreciate the notes. Is it as easy as just cloning the repo from the latest on the main branch and launching that into an instance? Or do I have to do more?

No — if you've got an existing Indy wallet, then you need to migrate the wallet. The steps that I gave you cover a few things: if I recall correctly, an upgrade of the storage from an older version of Postgres to Postgres 14, the migration from Indy to Askar, and the upgrade from a previous version to 0.8.1. Basically, you do the upgrade to 0.8.1 first, which does some conversion on the connection records inside the wallet; then from there you go through the Postgres upgrade and the migration. If you have any issues whatsoever, feel free to reach out and DM me on Discord.

Okay. Sorry, I'm just trying to understand: is the Postgres part on the ACA-Py side or on the wallet side?

Well, it's on the ACA-Py side — it's ACA-Py's wallet. The word "wallet" is kind of overloaded in this context: depending on the context, it means either the mobile application or what we're now calling secure storage, which is ACA-Py's wallet.

Oh, okay, I see — that's where I was lost, but I understand now. Perfect. All right, I'll do that then. And a follow-up question: on your end, Wade, can your team confirm whether that race condition we had before is resolved, or does it just perform better?

Well, you get better performance, and the race condition seems to have gone away.

Nice. And then just one last question.
I see that BC Gov has — I think — an open-source repo with OpenShift. And please excuse me, I'm trying to understand a lot of this back-end stuff; I'm more of a front-end guy. It seems to me that you're launching various instances of the mediator — is that correct?

Yeah. The OpenShift mediator is basically OpenShift configurations wrapped around the Aries Mediator Service repository — all of the templates we need to deploy the mediator into an OpenShift environment. As far as the OpenShift environments go, we have dev, test, and prod.

Nice. Is the OpenShift setup there to provide high availability for the mediator?

No. That's one thing about the mediator at the moment: due to its WebSocket nature, it doesn't scale very well as far as HA is concerned. There is some work being done around that, so stay tuned — there are some announcements coming out at some point about updates to mediators.

The reason I ask is that I'm thinking of a production environment, where the mediator may have to handle lots of user requests. I know you use Locust — I've used a little bit of it to load-test the mediator — and I'm thinking about how we can take it to the next level, I guess you could say. And I was going to ask: how do you communicate between the instances? Because of the WebSockets and all.

Yeah, that's the issue right now. Part of the work with the Redis queue is to make the mediator resilient: the Redis queue provides a persistent queue for all of the mediator's communication. It doesn't deal with the HA side of things, but it makes the mediator resilient — if something gets knocked over, it comes back up and just continues from where it was.
The way we're dealing with scale right now is vertical scaling instead of horizontal scaling. In the tests that I've done — and Kim Ebert has done more testing using Locust than I have — I've pushed the test to the point where my machine has issues on the Locust side while ACA-Py as the mediator is still keeping up.

Okay. Now let me ask just one last question — I promise, the last one. I know all the mediator really does is relay messages between the different actors, right? So my assumption is that it's not really using much CPU.

It's not. The purpose of it is to hold communications for when the mobile wallet isn't available to receive them.

Okay, got it. So is it safe to say that if I launch a mediator on a big instance, on a cloud service like AWS, it would be okay for, let's say, a couple hundred users?

We're already running it with a few hundred users.

Nice. And what kind of instance are you running?

We're using OpenShift.

Oh, okay, of course. Right. Got it, nice. All right, that's pretty much all my questions. Thank you, Wade.

The largest impact on mediator scaling is the CPU performance of a single core. So if you're looking to scale up your mediator, you'll want to pick the fastest CPU that you can. I don't think ACA-Py uses more than about two CPU cores at a time.

Nice. All right, perfect. Thank you.

Perfect. Thanks, everyone. Any other questions or thoughts from anybody? One thing that occurred to me as Alberto asked about OpenShift — out of curiosity, I guess most people are running ACA-Py in a Kubernetes environment. How would the community feel about having a Helm chart to deploy agents and dependencies? Any thoughts about that? Any concerns?
So, Emiliano, are you asking to use Kubernetes instead of OpenShift? Is that what you're asking?

Yeah — basically, OpenShift is a layer on top of Kubernetes, and right now there is no standard redistributable package for an ACA-Py deployment. Everybody has to, for the most part, use something that's already out there or bake their own. I'm just wondering whether the community feels it would be useful to have a Helm chart that deploys the agent and its dependencies — in particular, the Postgres database for the wallet, in this case.

I mean, there shouldn't be any obstacle to that. We at IDLab are running the Aries test harness in Kubernetes, actually, so I wouldn't see any problems with having ACA-Py running in Kubernetes.

All right. It's mostly out of curiosity — it's something we are thinking about as part of our more operations-oriented activities for BC Gov, so I wanted to get feedback from the community and see whether there were any objections in particular. But it sounds like it might be something useful.

If there are no other questions, I think we have exhausted all of the topics. The last thing I wanted to mention: you'll find on the page that there is an AnonCreds workshop. It's going to be hosted and led by Stephen Curran on May 31; there's a link here to the page describing the workshop. You'll have to sign up and join if you are interested in the topic — and spread the word if you think there's anybody who might be interested in joining and learning a little more about AnonCreds. With that, I think that's all for today. We're wrapping up a little early, but I don't think anybody is going to complain. I will stop the recording now, once I find the controls.