All right, welcome to the October 17th Aries Cloud Agent Python user group meeting, 2023. We're going to be talking about the next releases and PRs, and then a couple of presentations: one on some work done on the please-ack decorator, with an evaluation and proposals to talk about, and then an update on where we're going with the load testing for ACA-Py based issuers, verifiers and mediators that's part of a Code With Us from BC Gov, which Indicio will present. We can do a bit of an IIW follow-up after that if we have time, and then any open discussion. As mentioned, we're recording this call, so the recording will be posted following the meeting. A reminder that this is a Linux Foundation Hyperledger Foundation meeting, so the antitrust policy of the Linux Foundation is in effect, as is the Hyperledger code of conduct. First off, I'll open the floor to anyone that wants to introduce themselves, whether new to the call or returning and wanting to mention the work they're doing and who they are, and to anyone that wants to make announcements as well. BC Gov posted three Code With Us opportunities in the past week; they close this upcoming Friday, the 20th. They all relate to AnonCreds, specifically AnonCreds in W3C format, which is up here: enabling the use of AnonCreds in W3C JSON-LD form, so that the credentials can be exchanged with and held by libraries that handle W3C credentials, and used in presenting W3C credentials. This also enables attaching multiple signatures to a single credential, so you can have something like an AnonCreds signature and another credential signature attached to a single credential, and we're looking forward to that. The coming Monday is the Hyperledger Member Summit, October 23rd, in San Francisco and in Tokyo. I'll be in San Francisco at the sold-out Hyperledger Member Summit; I went to register and it was waitlisted for a bit, but since I'm speaking at it, I got in.
But I gather it's full, so that's exciting, and I'm looking forward to that next Monday. Let's jump to the agenda and get started. First off, is there anything anyone would like added to the agenda before we get started? Any topics missing from that list? Okay. Hang on one second; there's what I was looking for. Okay. ACA-Py 0.10.4 was released. 0.10.4 added yet one more item to the series of patch releases that have been .2, .3 and .4: in this case, the DID Doc handling related to mediators, and using ACA-Py as a mediator for an Aries Framework Kotlin holder, so a wallet. That was a specific thing that a user, an implementation, needed, so we've made that upgrade. We do have a fair number of things queued in ACA-Py for what will likely be 0.11.0, and we'll get talking about that. Oh, one more thing on 0.10.4: I don't know how many are aware, but we have a Read the Docs site, and I failed to properly set up a change that Read the Docs made, which has prevented the 0.10.4 documentation from going up. The docs are unlikely to have changed from 0.10.3 to 0.10.4, but as a result I'm not able to post them. We would actually have to change the release in order to do that, and that doesn't make sense once we've released it, even though the only change would be a YAML file for Read the Docs. So be aware that if you do use the Read the Docs site for ACA-Py, there is no 0.10.4 reference; there will be one going forward. aca-py.org does have all of the releases, so the documentation is available there, just not on the Read the Docs site. We'll get it right for the next release, likely 0.11.0. We're probably getting close to when we want to release that; we'll probably have a discussion next Tuesday at the maintainers call about the timing for that release.
And then we'll prepare for that. If there are things that are important to you or to your community, let us know, and we'll see about organizing to make sure they get in there. As mentioned, we have a fair amount in there, and we're fairly close to what we would consider a 1.0.0; perhaps that's what we'll call it, we'll see how that goes. I did want to go over a couple of PRs that are in review or ready for review, so please look at these. The AnonCreds pytests and the revocation API are both part of the AnonCreds work that's going on, so those are not on the main branch; this one should have an AnonCreds label. The other ones are all on the main branch, and we would like to get those looked at, so maintainers and those with knowledge should take a look. This one, which is new today, replaces, I believe, 2545, though I'm not sure of the old number; it is a re-implementation of that one. This one is upgrading sub-wallets when we upgrade a multi-tenant environment, so that we don't just upgrade a single wallet. And here's the completion of the DID key rotation work for mediation. Those ones are ready for review and should be looked at. Sanjad or Daniel, if you have anything to say about those PRs, let us know; if not, we can keep going. I'll briefly comment on the mediation routing keys one: I updated the description for that PR; the work kind of meandered a little bit as we discovered new things and found stuff out, so I provided some updates there that should help clarify what the final state of that PR is. Excellent; you truly write the best PR descriptions. I do my best. You do a good job, and it's much appreciated; it makes it so much easier to understand. And to confirm again: main has everything that's in 0.10.2, .3 and .4, correct? Correct, yeah, with that backport. The main PR from the backport that we did for that has been merged.
So yeah, all those changes should be in the main branch. Okay, good. Those are the PRs; now the issues to be looked at. I did highlight a couple of things. We had this request about protection: can we add a way that the webhook URL goes beyond just including a key and actually has some sort of OAuth-based authentication? Any comments on that one? I'm not versed enough in OAuth to say what direction we ought to go. Someone provided a comment a bit ago that this might be doable by a plugin. Anyone else have enough knowledge on this to comment? Yeah, it's Tim here. I'm just looking at this; it's similar to the requirement we had earlier to have callbacks, which is a different approach. We have seen requirements for OAuth, but I'm not really sure how it would work; we'd have to think about it. But at the very minimum the callbacks should probably have more security than they currently do. Unfortunately, I think we maybe never did get the PR merged that we were working on, for various reasons. So maybe link to the other one as well if it's still out there; I'll take a look at the issues and see. I think it's a simpler approach in some ways. Okay, even if you don't have the PR, if you could outline the approach, that would be helpful. Yeah, I'll look it up; the two may be complementary as well and suit different requirements, I'm not sure. Awesome, thanks Tim. I'm not sure where this one was. Daniel, did we figure out what to do about multiformats? I think we have the PR to the typing validation library there, and we haven't heard anything back on that one yet, so I don't know. I'm not really looking forward to having to do a replacement for multiformats, or trying to find something else; it seems like a not-fun thing to pick up.
I'm not looking forward to it; it's just replacing stuff that should be working. At this point I think it's on the to-do list for us to find a replacement, unless they come back and are more responsive; add your voice to it if you can. And then this one: I wanted to mention that BC Gov is doing a fair amount on endorsement, notably work on the Endorser Service repository, where we're adding things like fine-grained endorsement of transactions. As well, we're working on the ledger-agnostic AnonCreds work, in which we're going to rework how endorsement works in ACA-Py so that it is pretty much invisible to the controller. Unless they want to go see what's in process, the actual transactions and endorsements will not be visible to the controller; they'll just happen, if you will, using the Aries Endorser Service or whatever makes sense. The thing to remember here, and this is one we do need to look at as part of that work, is that the revocation registry entry transactions are not yet being signed, and we want them to be signed. So we need to get that one done. I just wanted to highlight that it's in there and will probably be done as part of one of those two efforts, but we've got to make sure that happens. Yeah, a clarifying question on that one: for the rev reg entries, I believe on most Indy network deployments the authorization tables permit revocation registry entries to be submitted without an endorsement, right? So that's why it works right now for most network deployments: the signature is not required for entries. But this is suggesting that we go to having those be endorsed transactions as well, right? Yeah. They are being endorsed on some networks, and generally we want to have the ability to sign transactions any time; with a service, it's not a delay.
If you're doing it manually, obviously it's ridiculous, but if you've got a service that handles it, it can be automated. Great. Okay. So that would be optional, obviously. Right. Cool. Okay. I don't know if there were other issues; I tend to go through them and highlight the ones I'd really like discussed. I was thinking yesterday that I'd love the chance to do two passes through the issues: one where we just close the ones where it's obviously time to close them, and one where we go through as a group of maintainers and decide how to dispense with the ones that are left over. I'll try to arrange that, but obviously that's not top of mind right now. Okay. I'd like to turn it over to, I believe, Alexander. Yeah, hello. Do you want to share your screen? Yes, I will share my screen. So we'll talk about please-ack and the discoveries and information that Alexander has produced; great stuff. Yeah, I think you see my screen. Hello, everyone. I'm from DSR Corporation, and I have been working on please-ack decorator support in ACA-Py. I have shared a document that describes my ideas on how it could be implemented in ACA-Py; let me go through it quickly and discuss it a bit. Okay, first of all, I found there are several possible options for how to implement processing of the please-ack decorator in ACA-Py. My first idea was to try to implement a common handler, let's say generic code, for all protocols, to avoid implementing it in each protocol separately while still being able to work with please-ack in each protocol. But looking at the ACA-Py code, I found it is maybe not possible at all to implement it without some protocol-specific code.
I described some of the problems that I see with such an approach. Maybe the most important is that the OUTCOME option of please-ack is a protocol-specific thing, and it is not possible to handle it outside of the protocol code. Maybe it is possible, but there would be lots of connections between the common handler and the protocol code, and it would be a bad solution, I think. That is the most important reason why we need to have some code in each protocol; I will describe some details about this. What else? There are some additional problems that I see. For example, the ack messages in different protocols have different types, and generic code may not be able to find a suitable message type to respond with. So I decided that, at least for the OUTCOME option, we should have specific code in each protocol where handling please-ack is required. On to option two. The main idea of this option is to have a handler for the OUTCOME option of the please-ack decorator in each protocol. For example, we have a protocol like Issue Credential v2.0, and this protocol should implement some behavior depending on please-ack: if please-ack is received by the holder, the holder must send an ack in response; if not, the holder may just go to the done state. Option two is only about the OUTCOME option; I will explain option three, and then it will be clear why it is only about OUTCOME. Since OUTCOME is a protocol-specific option, I think we need to implement it in each protocol, so it can be handled as required by the specification of each protocol.
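As a point of reference for the option two behavior described above: the ~please_ack decorator is specified in Aries RFC 0317, and the holder-side logic (send an ack on OUTCOME, otherwise move straight to the done state) can be sketched roughly as follows. The message shapes follow the RFCs, but the helper name and the field values are illustrative, not ACA-Py's actual code.

```python
import json
import uuid

# Sketch of an Issue Credential v2.0 "issue-credential" message carrying the
# ~please_ack decorator (per Aries RFC 0317), asking the holder to ack the
# OUTCOME, i.e. after the credential has been processed. IDs are illustrative.
issue_msg = {
    "@type": "https://didcomm.org/issue-credential/2.0/issue-credential",
    "@id": str(uuid.uuid4()),
    "~please_ack": {"on": ["OUTCOME"]},
    "credentials~attach": [],  # credential payload elided
}


def build_ack(received: dict):
    """Holder-side sketch: return an ack only when ~please_ack requests OUTCOME."""
    on = received.get("~please_ack", {}).get("on", [])
    if "OUTCOME" not in on:
        return None  # no ack expected; holder just transitions to 'done'
    return {
        "@type": "https://didcomm.org/issue-credential/2.0/ack",
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": received["@id"]},
        "status": "OK",
    }


ack = build_ack(issue_msg)
print(json.dumps(ack, indent=2))
```

With the decorator present, `build_ack` produces a threaded ack; without it, it returns `None`, matching the "holder may just go to the done state" branch.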
And option three: option three is about RECEIPT and OUTCOME. The idea of option three is to consider RECEIPT and OUTCOME differently and process them in different ways. OUTCOME should be implemented as part of the protocol code, but a handler for the RECEIPT option could be the same for each protocol, or at least it seems so. My idea is to design and implement a common handler only for the RECEIPT option of please-ack. In that case, please-ack OUTCOME would be processed by protocol-specific code, while please-ack RECEIPT would be processed by a common handler, let's say generic code. Of course, the generic code or common handler is not designed yet; how it should be implemented is still in process. But since option three includes option two in itself, my idea is to start from option two and implement please-ack decorator support with the OUTCOME option, step by step, in the several protocols where it is required. Then, when that's done, we can continue and design and implement the common generic code for the RECEIPT option of please-ack. Such an approach seems the easiest way to start this implementation, at least. And I have created some POC code for Issue Credential. Okay, let me find it. Yeah, I have created some code for the Issue Credential protocol. Currently it is, of course, not a final version, just a POC, but I tested it, and it depends on the option. Let me find it; yeah, we have an option in the code, and if you define that an ack is required, the issuer sends please-ack with the OUTCOME option to the holder, and the holder responds with an ack message.
Okay, maybe it's not so easy to find the exact code, but anyway: the holder sends an ack message, and the issuer changes the state of the protocol to done. But when it's not required, that is, when the issuer doesn't send please-ack, the holder doesn't answer with an ack message; the issuer just doesn't expect that message, and it transitions to the done state right after the credential is issued. Okay, what else about it? Yeah, there are some questions about the list of protocols and about unexpected please-ack messages. But maybe the most important thing in the list of such questions is compatibility between agents with different versions of ACA-Py code. The current version of ACA-Py, and I am speaking about the Issue Credential protocol, relies on behavior where the issuer sends the credential and expects an ack message from the holder. But say we change this behavior: the issuer has an older version of the code and always expects an ack message, not only in specific cases, but the holder is based on a newer version of the code and sends an ack message only when please-ack is received. That may be a problem, because the issuer expects an ack message as it does right now, but the holder doesn't send it. Currently I don't know how it could be resolved; I didn't have enough time to understand how, but I put this point into the open questions, and maybe someone has ideas for how to implement it and work with it. I think maybe that's it, what I wanted to say. Of course I didn't mention everything that is in the document, but everyone may read it if it is interesting. And yeah, thank you. I'm on mute. Thank you, that was awesome.
Thank you for giving me a whole bunch of ideas on what this should be. I think I need to reread the protocol itself, to go through it and see how it's documented. Your questions: I answered a few yesterday, and watching you go through that has given me a few more, so I will follow up on that. I'll think about whether this stays just within ACA-Py, or whether this would possibly be a good session at the Aries Working Group meeting that happens on Wednesday; let me think about that. But certainly I'll get back to you with more feedback from me, and I strongly encourage others to take a look at this, particularly those that have had experience with wallets and with ACA-Py, to get an idea of what user experience is wanted and how this can help with user experience. There are some interesting things, like whether the please-ack winds up being applied to the protocol or to the message, for example just the one message that's being processed, and things like that. That's why I want to reread the protocol itself to make sure I understand it before responding more. Great work, and I really appreciate the deep dive into this; it's very helpful. Thank you. Okay, any other questions for Alexander? Okay, with that, Kim, I believe you are up next, or Adam; somebody went off mute. Oh no, Adam just climbed to the top. Kim, go ahead. Greetings everyone. I wanted to go over today what we've been working on here with the Aries Akrida project. It's a project for load testing DIDComm-based protocols, and it's based off of the open source project Locust. For a quick overview, Locust is an open source load testing tool particularly oriented towards HTTP-based protocols; this work extends Locust's capability to support DIDComm behavior.
Locust already has the infrastructure for scaling up and distributing the tests over multiple machines. This allows you to control thousands to potentially hundreds of thousands of simultaneous users through the Locust interface. Locust is a Python-based environment, so the tests are written in Python; it isolates the users using greenlets, and in our case we add in the Node.js environment for running AFJ. There's a friendly, easy-to-understand interface that shows you the test results in real time, and you can also run it without the UI if you want to integrate it into some sort of CI/CD testing process. Typically in our decentralized identity environment we have the issuer, verifier, mediator and a bunch of holder agents, and Aries Akrida is designed to take the place of the holder agents and run them in the Locust environment. Now, the tests that we've written so far can be run with or without the mediator; I know some environments have a mediator and some don't. Locust, as we can see here, is controlled by a master service, and then there's a bunch of Locust workers that run underneath; the idea is that each Locust worker would have more than one AFJ agent running underneath it. One of our main goals here is to simulate real-world clients. Aries Framework JavaScript is used by many of our current clients out in the wild, both in Bifold and other agents, so that was the reason for the focus on using AFJ as the client. Since Locust is a Python-based framework, we use a subprocess to call in and out of the AFJ agent. The other reason we do this is that, even if we were to call into ACA-Py or another agent for the client, Python in this particular case is mostly single-threaded, so when we're doing the heavy lifting of multiprocessing we want to split that out into separate processes, with only the admin calls being made to and from the Locust environment.
When we picked the Locust framework, we looked at the clustering and scaling requirements, simplicity, community (there's a large community around Locust), the open source license, and making sure it was extendable. We looked at a bunch of different projects, and I'm not saying any of the other projects are wrong; it's just that in our opinion we found Locust to be the easiest to extend for our purposes, and it supported both the clustering and the simplicity we were hoping for. When we run a Locust environment, we specify the number of users that we're going to run, and these users loop through a series of tasks. Each user is a gevent greenlet, and you can specify the number of users you want for your load test and how quickly those users are onboarded. So a Locust worker, as we can see here, might have multiple user greenlets running, and in our case we're calling out to a subprocess from the greenlet environment. To exchange messages with the subprocess, we use its standard-in and standard-out pipes, and we've configured the environment so that you send a command and you get a response back; the response might be a timeout, for example, but the idea is that you send one command and get one response. So in this case we send a command to start the connection with the mediator; this is the initialization stage. We send that command in, and we say whether we want mediation and, if we're not using mediation, the port that we're going to listen on for inbound requests in agent.js, which is running AFJ. That's particularly important when you're not using mediation, because we have to listen for inbound requests so that the issuer or the verifier can contact the AFJ environment.
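The one-command, one-response pipe protocol described above can be sketched as follows. This is not Akrida's actual command set; the command names and response fields are made up for illustration, and a tiny Python child process stands in for the real agent.js/AFJ subprocess so the sketch is self-contained.

```python
import json
import subprocess
import sys

# Stand-in child: reads one JSON command per line on stdin, writes one JSON
# response per line on stdout (the role agent.js plays in Akrida).
CHILD = r"""
import json, sys
while True:
    line = sys.stdin.readline()
    if not line:
        break
    cmd = json.loads(line)
    if cmd.get("cmd") == "start":
        resp = {"error": 0, "result": "Initialized agent", "port": cmd.get("port")}
    else:
        resp = {"error": 1, "result": "Unknown command"}
    print(json.dumps(resp), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)


def run_command(command: dict) -> dict:
    """Send one JSON command line; block for exactly one JSON response line."""
    proc.stdin.write(json.dumps(command) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())


# Initialization stage: no mediation, so we tell the agent which port to
# listen on for inbound DIDComm requests.
resp = run_command({"cmd": "start", "mediation": False, "port": 8150})
print(resp)
proc.stdin.close()
proc.wait()
```

Keeping the exchange strictly line-delimited JSON is what makes the pairing of commands and responses unambiguous, even when the child is a different runtime entirely.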
So basically we get a response back indicating, hey, there was no error, here's some user-friendly result text, or there might be some additional data returned with the result, such as a connection ID. And then here's an example of an error. Some errors inside of AFJ are not particularly helpful, and we might not get any error message back, but if we do get a useful error message, it's returned. The subprocess is just a Node environment that you can use standard-in and standard-out with, so you can manually paste in the commands and get the responses back to debug or develop agent.js further. It can also be replaced in the future: if your client is not an AFJ-based agent, you could write another agent that fulfills the same interface. Since we're extending Locust, we have locust_client.py, which extends the different functionalities that we can run inside of Locust. There are various commands in there that coordinate the interactions between Locust and the agent.js process, and this is what actually calls the subprocess code: things such as the startup function and the shutdown function. We also have a port manager in there, because when you're running hundreds or thousands of agents you have to coordinate which ports each of the agent.js clients listens on for inbound requests. There are also cases where the AFJ client can crash, bringing down the entire subprocess, so we put in a function to ensure the agent is indeed running; if it's not running, it restarts the agent. And then we have some helper functions that provide a way to pass commands to and from the subprocess: a run-command and a read-JSON-line. All the interactions between Locust and the subprocess are done using JSON. So, a question: say we start up a set of holders, Locust holders.
Do they control what they do, or do they get directed? The Locust tests direct the agent.js code in what it should do. So here's an example, real quick, of the Locust mediator issue test. This is what defines the actual steps that get run inside of Locust, and we can imagine that this is a particular workflow that we want to run. So we define in here: go ahead and use our custom client code from locust_client; the Locust client defines our interactions. And we have a couple of different kinds of steps. First the startup and shutdown steps: we start up our environment and we can shut it down. Then we have our tasks, and each task gets run repeatedly. In this case we're using the sequential task set, so it runs through these tasks sequentially. The first step is to get an invitation, and we can go look at the definition of get-invitation in locust_client. The get-invitation code, if I can find it... Okay, so here we have the issuer-get-invite, and what we do is call out to the issuer. In this case it's configured to use the ACA-Py admin interface, but in the future we could make this into a module for all the interactions with the issuer or verifier, so each person could specify the behavior for their particular environment. So if you have a custom environment whose workflow you want to test, you would just have to specify this get-invitation behavior and then, for example, how to trigger an issuance and how to trigger a verification. In this case we simply get our invitation. Let me go back to the test we were looking at. So we receive the invitation here; we've timed this separately at this stage, but it could be combined into a single step, for example. In our second step, first we ensure that our agent is running, and then we accept the invitation.
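The sequential workflow described above (startup, then repeatedly get an invitation, accept it, and receive a credential) can be sketched without Locust installed. This stand-in only mimics the shape of a Locust SequentialTaskSet so the sketch runs anywhere; the class and task names are illustrative, not Akrida's actual code, and the real admin-API and agent.js calls are replaced by logging stubs.

```python
from typing import Callable, List


class HolderWorkflow:
    """Stand-in for a Locust SequentialTaskSet: tasks run in order, repeatedly."""

    def __init__(self) -> None:
        self.log: List[str] = []

    # The "tasks", in the order Locust would run them sequentially.
    def get_invitation(self) -> None:
        # Would call the issuer's admin API (ACA-Py here) for an invitation.
        self.log.append("get_invitation")

    def accept_invite(self) -> None:
        # Would send a receive-invitation command down to the agent.js child.
        self.log.append("accept_invite")

    def receive_credential(self) -> None:
        # Would tell agent.js to await a credential, trigger issuance on the
        # issuer, then read back success or failure.
        self.log.append("receive_credential")

    tasks: List[Callable] = [get_invitation, accept_invite, receive_credential]

    def run(self, iterations: int = 1) -> None:
        # Locust loops each user through its task set; two iterations here.
        for _ in range(iterations):
            for task in self.tasks:
                task(self)


wf = HolderWorkflow()
wf.run(iterations=2)
print(wf.log)
```

In the real test file, swapping in a different workflow (issue-and-revoke, for instance) is just a matter of changing this task list, while the client plumbing stays the same.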
So if we go over to our configuration here and look at accept-invite, we send a command down to the agent.js code to receive the invitation. Our third step is to receive a credential, and this one's slightly more complicated than the others. First we issue a command to the agent.js code to receive a credential, which will wait for an incoming credential for a timeout period; then we contact the issuer and tell them to issue a credential; and then we read the results indicating whether we successfully received the credential or not. And then it just repeats this behavior through the loop. There are more advanced use cases, such as issue-and-then-revoke: here we get an invitation, we accept the invitation, we receive a credential, and then we revoke the credential. So we can see that using Locust we can define the different workflows that a company might have, for example, reflecting what they expect their environment to be encountering. So setting up specific ones is just adjusting these things; the meat of the stuff stays the same and then you adjust the flow? Yeah. Now, if you have a different workflow that's not individualized like this, so here we have broken up accepting the invitation and receiving a credential, you might run into cases where, when you connect, you automatically receive the credential. In that case you might have to go into the agent.js code and define a new command that says accept-invite-and-credential. In here, right now, we have a simple read-line loop in the agent.js code that cases through the different statements of what you can run, and then we fall out into those functions. So for example, let's look at receive-credential, since we were already looking at that. On receive-credential, we set up a promise with a timeout, and we put that into a deferred object.
Then we set up our listening event, and either we use the deferred timeout or we wait for the time delay; that's how we make sure that commands and responses are one-to-one. Patrick, you have a question? Yeah, I want to jump right to the end. So you configure test runs, you run the test, you get some results, and presumably with Locust there's a way to visualize those results. Have you explored exporting the results to systems like Prometheus, CloudWatch or any sort of external results and reporting systems? So there are two separate ways to receive the results inside of Locust: one is a CSV file, and the other is through their web interface. If we go to the documentation, there might be a good example here; let me see if I can find one locally, one sec. So yeah, either you can use a CSV file that provides a time-based dump, or you get the visualized results. For the visualized results, is there a way to make them persist, like a history of test runs, or does it reset every time you do a new test? It kind of resets each time, but you can export them as PDFs. Okay. So let me see if I can find an example here that I have. When you're using the web interface, you can export the results of your test as CSV, as PDF like Kim mentioned, or as an HTML file. Would you see a benefit, because I was just looking online while you were presenting, to exporting to something like InfluxDB to visualize differently, or to Prometheus to display in Grafana, and have a sort of history of results? There certainly could be some benefit to doing that. The main focus here was to make it easy for each individual and organization to run the tests; some of the other load testing environments are really difficult to bring up in an easy manner. So here is an example of a report that came out of Locust.
It provides, for example, the number of requests made, the number of failures, the average, minimum and maximum response times in milliseconds, and the request rate per second. And as we can see here, it splits this out for each one of the functions that we have. So we have: how long does it take to accept an invite, how long does it take to receive a credential, how long does it take to get the invitation, and how long does it take to start the process. Then we can see the response time statistics based on percentiles, and we can see that once we get above 2,700 milliseconds we hit our next entry point, which is a timeout; for example, our 99th percentile is 2.7 seconds, whereas our 100th percentile is 120 seconds. And then there's also the charting functionality, which takes all of the requests combined and graphs them; I believe the CSV file might provide a little bit more detail for you. Then we can see our response times: this is requests per second, this is our response times with the median response graphed and our 95th percentile, and this shows the number of users. Interesting. The reason I was asking is that there's a project in the monitoring stack which already uses the Prometheus and Grafana infrastructure, and I was wondering if there would be a way to plug this in and get a better understanding of the relationship between the load testing and the network performance; it could be interesting. I can certainly imagine that this could be plugged into other things as well. The other thing that we want to do is add more documentation on getting things set up and running, and on the kinds of common issues you might run into, for example, that if you scale up the number of users too quickly you can overwhelm the test machine itself, causing failures on the test machine, plus how to debug your environment and better identify bottlenecks. So that's where things are at right now.
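As an aside, the percentile statistics shown in the report can also be derived directly from response times pulled out of Locust's CSV export using only the standard library. The response times below are made-up numbers, and the CSV parsing itself is elided; this just shows the percentile arithmetic.

```python
import statistics

# Pretend these were parsed from Locust's CSV export (milliseconds). One very
# slow outlier stands in for a request that hit a long timeout.
response_ms = [120, 250, 400, 800, 950, 1200, 1500, 2100, 2600, 120000]

# statistics.quantiles with n=100 yields the 1st..99th percentile cut points;
# method="inclusive" interpolates over the observed data.
cuts = statistics.quantiles(response_ms, n=100, method="inclusive")
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"median={p50:.0f}ms  p95={p95:.0f}ms  p99={p99:.0f}ms  max={max(response_ms)}ms")
```

This is the same shape of summary the Locust report table gives you, which makes it easy to feed the numbers into an external dashboard if you want a persistent history of runs.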
Any other questions? We're going to have to go because we're at time. Much appreciated; that's awesome. I definitely would like to sit down and talk more about tweaking how the tests get invoked; I see it slightly differently, and we had a bit of a conversation yesterday, Kim, you and I, so let's definitely sit down in the context of the Code With Us and talk about that. Yeah. Cool. Excellent, that was awesome; I can't wait to get our hands on that. Looking forward to it. Yeah, all of this is currently in a publicly available repo; I'll drop that in the chat for anyone interested. Okay, and I'll add it to the notes. And with that, we'll wrap up because it's time. Thanks all for attending; we'll be back in a couple of weeks. The maintainers meeting is a week from now; if anyone is interested in joining, I believe it's listed on the calendar, so feel free to join, though as I say it's more focused on maintainer activities. It's a week from now, same day, Tuesday. Have a great one, all. See ya.