Okay, so in terms of the release, I think we'll list the known issues in the release notes — the things we are still improving — but this is going to be the first release that we will support. Now, for the release strategy: the previous maintainers used to release a version of Explorer, and that release was tagged against the versions of Fabric it supported. The Node.js versions people could use, and any dependencies on development tools, were listed as part of the release notes as well. For instance, if you look here, it lists Explorer v1.1.8 as supporting Fabric versions 1.4 to 2.3, which were the versions it was tested against, and these are the Node.js versions that were supported at the time. For every release that was created, a tag was created here, and those tags are maintained for the lifecycle of the project. So that's the release strategy the previous maintainers followed. We can follow a similar lifecycle, but one of the challenges I see with this approach is that if we don't pivot to a specific version of Explorer, issues could be reported against older versions. At least to begin with, to reduce the support burden until we stabilize the project, maybe what we can do is follow a release strategy like the one the Fabric team follows: they do long-term support (LTS) releases, and then other releases that may change over time. For instance, we all know Fabric 2.2 is a long-term-supported release — if any bugs are reported on it, support is available. For other Fabric releases, there is no guarantee that support will be available. And 2.5 is the other LTS release they maintain.
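The compatibility table in the old release notes, as described above, would look something like this in markdown. The Fabric range comes from the discussion; the Node.js column is deliberately left generic, since the exact versions weren't stated here:

```markdown
| Explorer release | Fabric versions tested | Node.js versions        |
|------------------|------------------------|-------------------------|
| v1.1.8           | v1.4.x – v2.3.x        | (as listed per release) |
```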
We could follow something similar for Explorer. Say we start with version 2.0.0 — we could declare that the long-term-supported release, or we could skip LTS releases entirely and just say 2.0.0 is a major release: we will keep improving the product, and we cannot guarantee a stable version because we are in the process of adding new features and improving the current feature set. That's something we could follow, and we could always claim support for just one version of Fabric. I'm not sure we want to support multiple versions — that's an additional ask, because we'd need a testing strategy across multiple versions. So that's about that. We also need a release pipeline, in the sense that any release that goes out must be run against the test suite, with the test results captured before the release ships. We could create release pipelines here through GitHub Actions — they don't exist today, so that's another set of activities we must pick up. I know there are pipelines on Azure, but the Azure servers are no longer available, so we'll have to create GitHub pipelines. Regarding the release, is there any documentation we need to add — the issues we've solved, the new features we've brought in? Right, there is an expectation: for graduated projects in Hyperledger, a CHANGELOG.md file (if I'm not wrong) should be available, which tells the user what changed from the previous version — and I don't think this repository has one. Different projects follow different strategies for the changelog. Some projects just list all the commits that went into the release; in this case I see something like that here — PRs are listed.
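A description-style changelog of the kind discussed could follow the common "Keep a Changelog" convention. A minimal sketch — the section headings are the convention's, and the entries are placeholders, not actual Explorer changes:

```markdown
## [2.0.0] - YYYY-MM-DD

### Added
- New features introduced in this release.

### Changed
- Breaking or behavioral changes since the previous release.

### Fixed
- Bug fixes since the previous release.
```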
And other teams go further and write descriptions: these are the new features introduced, these are the breaking changes, and these are the bug fixes since the previous release. We could follow that approach, but either way we need to maintain the changelog file. Now, about the GitHub Actions work plus the move of container images — that is pending with Aditya. We tested the GitHub Actions workflow and it is failing on one condition. Do we have a PR for that? Yes — the GitHub CI one; give me one minute, it's failing on something. This is the one that moved to the GHCR registry, right? Do you know where it failed? If you go to the PR, is there a screenshot? Not this one — PR number 382. Yes. It's a Node package issue, I think — something with that. Was this triggered from GitHub Actions or from a local environment? GitHub Actions. Okay, I can check this with Aditya, or we'll run it again. And one more PR is also pending — is that for the Helm charts? Yes. Right. I tried the Helm chart and it has some issues; we'll have to check the Explorer DB part of the Helm chart. I believe the volume was not created as part of the chart. Right — Aditya was asked to add the PVC for the Postgres DB; I think he has to do that. Apart from that, for this release we are rigorously testing different scenarios — multiple channels, multiple chaincodes — and we were facing issues. We are fixing them; it might take some time, and we will raise issues as well as PRs for them. For example, the ledger height was not consistent, and the counts with respect to channels are also not updating. With a single channel and multiple chaincodes it works fine, but when we go to multiple channels and multiple chaincodes, we were getting a lot of issues.
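The missing Postgres volume mentioned above would typically be fixed by adding a PersistentVolumeClaim template to the chart. A minimal sketch — the template helper, value names, and size are assumptions for illustration, not the actual chart's contents:

```yaml
# templates/postgres-pvc.yaml — hypothetical sketch; names and sizes are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "explorer.fullname" . }}-postgresdb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.postgres.storageSize | default "8Gi" }}
```

The Postgres deployment in the chart would then mount this claim as the data volume so the database survives pod restarts.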
What kind of issues were they? Take ledger height: in the network tab we show the ledger height per channel. For example, I have two channels, channel one and channel two. When I look at the ledger height for channel one, I get the correct value. But when I switch to channel two, I get the same data as channel one's ledger height. That's one such issue. The same happens with chaincodes: say on channel one I have installed chaincode one and chaincode two, and on channel two I have installed chaincode three. In the chaincode tab, when I switch to channel two, I still see channel one's data. Okay. Is this a UI refresh issue, or is it coming from the back end? It's from the back end itself. Okay, that's bad. And is it possible to automate these tests? UI automation isn't available in Explorer right now, and to be honest I don't know what strategy the previous maintainers followed for maintaining test cases — I don't know whether they kept manual test cases. For instance, we could create a manual test suite and maintain the status of each case somewhere. The test cases that were written are very minimal: they mock the blockchain, feed some inputs, and test — that's it, only two or three functionalities. They didn't cover things completely. One of my other teammates is working on it. So it was very minimal. Let's do one thing: for all these tests you're mentioning, it's good that we are running them, but we don't have a mechanism to capture their status. We could create our own checklist for all these manual tests that are being conducted.
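That manual-test checklist could start as a simple CSV tracked in the repository. A sketch with assumed column names, seeded with the two failures described above (the period column is a placeholder):

```csv
test_id,scenario,channels,chaincodes,expected,status,tested_period,notes
T-001,Ledger height per channel,2,1,Each channel shows its own height,FAIL,YYYY-MM,Channel 2 shows channel 1's ledger height
T-002,Chaincode list per channel,2,3,Each channel lists only its own chaincodes,FAIL,YYYY-MM,Backend returns channel 1 data for channel 2
```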
We can note which scenarios were tested as part of the release or the regular development cycle, and this can serve as a reference for any new developers. We could also open a GitHub issue, or raise it at our meetup event, and call for people to come and contribute to make this automation possible. We can say: here are the test cases we are running manually today, and the barrier to getting started with the project is not that high — it's possible for anybody to come and get started. For instance, the low-hanging fruit here doesn't really require expertise in blockchain or Explorer: all you need is JavaScript or Node.js programming concepts; come and start automating these things. We could also ask Hyperledger to help us promote that and bring more contributions in. It could be a CSV file that we maintain on GitHub itself for now, as record keeping: we can note the period when the testing was done, any notes, and the status of each test. So I think the release strategy is important. Maybe we'll follow the same naming convention for releases: a major version and a minor version, with patch version changes for any new patches. Patch version changes for bug fixes, minor version changes for additional features we implement, and major version changes if we are making a breaking change. Would that be good? Hello? Yeah. So I was talking about the release naming strategy. We'll follow the usual structure of major version, minor version, and patch version: patch versions are bug fixes on the release itself, minor version changes are new feature additions, and major versions are breaking changes in the project. Would that be fine? Anything else on the release, Archana? No, Satya. So we are good with this one.
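The convention described here is just semantic versioning. A tiny sketch of how a hypothetical release script might compute the next version from the type of change (the function name is ours, not an existing Explorer script):

```javascript
// Semantic-versioning bump, per the convention discussed:
// patch = bug fix, minor = new feature, major = breaking change.
function bumpVersion(version, changeType) {
  const [major, minor, patch] = version.split('.').map(Number);
  switch (changeType) {
    case 'patch': return `${major}.${minor}.${patch + 1}`;
    case 'minor': return `${major}.${minor + 1}.0`;
    case 'major': return `${major + 1}.0.0`;
    default: throw new Error(`unknown change type: ${changeType}`);
  }
}

console.log(bumpVersion('2.0.0', 'patch')); // 2.0.1
console.log(bumpVersion('2.0.1', 'minor')); // 2.1.0
console.log(bumpVersion('2.1.0', 'major')); // 3.0.0
```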
Only those two PRs are pending; apart from that, everything is okay, right? So those two PRs, plus some issues we need to raise and fix, Satya — that's what's pending from our side. Okay. Shall we discuss the project plan and its scope next? Anything else you want on your plate? By June 30 we have to publish the detailed project plan on the wiki, so we need to discuss that as well. So, Arun, for the scope — whatever we identified as part of the project proposal — if you want to start on the plan, can we decide which features will be taken from it? Yeah, we could. I think we listed a feature set for mentee one — on the page you created, Arun, right? Right. So in the project plan, are you proposing to put up only the plan, or are you asking for level-of-effort estimates — the exact tasks and when each has to be done? Or are you expecting a high-level overview that divides the timeline into sprints and marks certain milestones during the mentorship program? Probably we can do milestone marking. Yeah, that should be fine — we can break down the scope, define milestones based on it, and proceed. That's better than having only a high-level definition. So I'm assuming both mentees are available part-time for six months, starting July 1st, so we'll have to put together a six-month timeline. I don't know if there's an option to add timeline info here, so we'll just talk through the milestones and then add the activities and goals; we can elaborate on those aspects. Breaking down the six months: I believe that at the end of the sixth month we want the mentees to present the work they have accomplished to a global audience, right?
So the milestones would include a global meetup — a demo and showcase of the work that has happened. I believe this also has to be aligned with the Linux Foundation's official demo day. I don't remember the exact timelines, but the Linux Foundation — across all its projects, not just the Hyperledger Foundation — invites all the mentees to come and demo what they have done, and that will be a great opportunity for our mentees to present. This is something we need to talk to Min about to get the timelines; we'll have to align our demo to that schedule, and it will be good for us. If I'm not wrong, it was around mid-January. Okay, we can ask Min about it — the timelines are to be confirmed with Min. We could also have one demo every two months and invite the mentees to schedule a call. That doesn't necessarily need to be a global meetup — it could be a project-level demo of the progress the mentee is making, and we could send out a calendar invite to the labs list. So we'd have three checkpoints in the lifecycle of the project. Do you think that sounds good — three checkpoints? Is it monthly or every two months? There's one more expectation: we will start receiving emails from the Linux Foundation asking for mentee evaluations. I believe there will be a midterm evaluation, plus evaluation requests sent to us either monthly or every month and a half, and all the mentors will be required to send a status. So how do we align the milestones to that — at least to the midterm evaluation? Again, the exact timing is something we can check with Min. Okay, maybe we could have a checkpoint every month. So it would be a demo every month, you're saying? Yeah — this could also happen as part of the regular Explorer calls.
Once a month we'll schedule a demo of at least 30 minutes where the mentees can present their work. Okay. So starting July — we can schedule it for the last week of July, and I believe August will happen the same way. September could be the midterm evaluation for the mentees, and October a monthly checkpoint again. No, the final demo — maybe we keep December for the global meetup, and the remaining months get monthly checkpoints. So every month we'll have a checkpoint, and in between we'll have the midterm evaluation and the final evaluation. Okay, I think this should be good, right? Yes. And for the initial two weeks we also need to account for onboarding, learning, and ramp-up activities — we can mention those timelines here, end of June. The checkpoint there applies to both mentors and mentees: making sure we understand the project goals clearly, understand the requirements to work on the project, and that expectations are set. I'm still waiting to hear back on what the mentees expect — I haven't heard what they need. But if they require any sessions, it's on us to make sure we give those sessions to them. So I think we need to prioritize. Do you have a preference for which of these should be prioritized? We would take the second line item first. Okay, so the purging takes priority — and can we include the bootstrap purge, the bootstrap activity, as part of this as well? Sorry, I didn't get you. Oh no, that might take some time. Okay, let's see if we can at least get an evaluation done. In that cycle the mentee can get to understand the project and then, say by the end of July, come and propose to the group what their exploration of the bootstrap idea found. What exactly are you expecting there? That at load time we should not fetch all those records every time — that's it. That's right.
On bootstrap: when somebody deploys or starts using Explorer against networks that are three to four years old, with millions of blocks and gigabytes of data, the time for the Explorer tool to reach a ready state is long. Personally, I have seen cases where syncing can take more than 20 to 30 days. So that's something we can look into improving — the bootstrap time. Yeah, sure. And I believe we can start picking up role management and removing the CA dependency, if possible, because that is going to take some time. The central piece is user management, including authentication and authorization — this could be prioritized next. And if by that time we have the UX design for some part of the UI, those activities can be picked up. On this user management — are we actually ready with the requirements for what is to be implemented? The team is ready, that's not a problem, but requirement-wise, are we clear on what has to be done, or do we need to check with another team? We have something to discuss. One thing we can start looking into from a user-management perspective: right now, Explorer has its own user-management mechanism. But in typical corporate environments, if we want this tool to be installed and deployed, they already have an identity mechanism — an IdP, identity and access management, whatever we call it — for the organization. The majority of them follow typical standards: either delegated permissions or SSO. Like the SSO type of thing we were expecting? Correct. Okay, so there should be an option at the time of starting Explorer — some configuration parameters — so that an existing identity-management solution can be reused. Okay. It could be done in many ways.
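One way to make the identity provider pluggable, as described: a startup-configuration switch that selects an auth strategy. Everything here is hypothetical — the option names, provider list, and validation rules are assumptions for illustration, not the current Explorer configuration:

```javascript
// Hypothetical auth-provider selection at Explorer startup.
// None of these option names exist in Explorer today — this only
// sketches the "reuse an existing identity solution" configuration idea.
const SUPPORTED_PROVIDERS = ['local', 'oauth2', 'oidc', 'saml'];

function resolveAuthProvider(config) {
  const provider = (config.auth && config.auth.provider) || 'local';
  if (!SUPPORTED_PROVIDERS.includes(provider)) {
    throw new Error(`unsupported auth provider: ${provider}`);
  }
  // External providers need an issuer / IdP endpoint to delegate to.
  if (provider !== 'local' && !(config.auth && config.auth.issuerUrl)) {
    throw new Error(`auth provider "${provider}" requires auth.issuerUrl`);
  }
  return provider;
}

console.log(resolveAuthProvider({})); // 'local'
console.log(resolveAuthProvider({
  auth: { provider: 'oidc', issuerUrl: 'https://idp.example.com' },
})); // 'oidc'
```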
We could start by supporting OAuth, for instance; then we could look into the OIDC model as well, if that's something we want to go with, or we could look into the SAML approach. SAML handles authentication, but authorization is still maintained within the tool. These are some of the models we can start looking into. We can support any one of them, or all of them, depending on how we want to proceed — and at least through whichever one we provide first, we will learn what other gaps exist as part of the evaluation. Okay. So after configuration, we can look into the user-management aspect; that will give us good results in terms of product adoption, and it will also remove the dependency on Fabric CA that we currently have. That said, if by this time some of the new UX designs are ready, we could pick them up. If the UX is still not ready, we can work on some of the metrics — network-level observability that alerts users when something happens at the network level. The Fabric metrics we receive today, like the Prometheus metrics, are localized to the peer and orderer nodes. At the network level, there is no tool that provides visibility. Say the peers are out of sync across organizations — currently that's not detectable; every organization is expected to build its own mechanism. Since Explorer has this visibility across organizations, we could emit that information as Prometheus metrics. Okay. All these features we've discussed are going to help product adoption, and they are going to fix some of the problems users face today. And I'm assuming that by this time we'll have the user-interface designs available to start working on, so those could be the next set of features, right?
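The cross-organization sync check described above could be exposed in the Prometheus text exposition format. A self-contained sketch — the metric and label names are made up for illustration (a real implementation would likely use a client library such as prom-client rather than hand-formatting lines):

```javascript
// Given the ledger heights Explorer sees for one channel across peers
// in different organizations, emit a gauge with each peer's lag behind
// the channel's maximum height. Metric/label names are illustrative,
// not existing Explorer metrics.
function peerLagMetrics(channel, heightsByPeer) {
  const max = Math.max(...Object.values(heightsByPeer));
  const lines = [
    '# HELP fabric_peer_ledger_lag_blocks Blocks behind the channel max height.',
    '# TYPE fabric_peer_ledger_lag_blocks gauge',
  ];
  for (const [peer, height] of Object.entries(heightsByPeer)) {
    lines.push(
      `fabric_peer_ledger_lag_blocks{channel="${channel}",peer="${peer}"} ${max - height}`
    );
  }
  return lines.join('\n');
}

console.log(peerLagMetrics('mychannel', {
  'peer0.org1': 1200,
  'peer0.org2': 1200,
  'peer0.org3': 1187, // out of sync: 13 blocks behind the channel max
}));
```

An alerting rule on the scraped gauge (e.g. lag above some threshold for several minutes) would then cover the "peers out of sync across organizations" case no single organization can see on its own.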
All the enhancements we were planning in terms of user experience — and then we could also start improving the search experience. Anything we planned earlier. We can regroup, maybe around midterm, and add more detailed activities, but this should be a good set of work for close to three months. Okay, sure. I'll see if I can elaborate more on the activities if possible, and I'll follow this plan if everybody is okay with it. Sure. One request regarding the user access management: if possible, can you put your thoughts in a document and share it across the team? Yes. Then we can start from there and know exactly what we need to look into; once we finalize, we can start working, right? Yes. I may have a few questions on the current workings of the team; I'll reach out if I have more questions. What I can help with is putting up an architecture diagram of the current setup, and then how the proposed architecture for user access management would look, along with the possible ways we can integrate and the possible features we could implement for each option. Would that help? Yes, sir. Okay, I'll reach out — I may have a few questions on the current way of working. Any other questions on the plan? We'll put up the activities offline; this is going to take time, since we'll have to evaluate and work out the multiple activities under each of these goals. Yeah, sure, we can take that offline. Once you update it, we'll go through it and review it as well. And one more thing, Arun: this bi-weekly call we are planning — it is happening in IST, right? Last time we discussed moving it to US time. Can you reschedule it, Arun? Because it's still at four o'clock. Sure — what time would work?
I know it's going to be late, and I know the team is based in India — that's why I wasn't pushing too hard. The problem is, since four o'clock is your morning time, if you're not there, and sometimes Aditya is busy, the call doesn't happen. Then, even after waiting two weeks, if nothing moves on the pending items, we're not able to proceed. That's what we thought: since you've been joining regularly at this time, we'll continue with it — is that okay? Does this happen on Thursday or Wednesday? Wednesday. You told us you have a conflict on Thursday, so we thought we'd do it on Wednesday. Okay, this time on Wednesday should work. If that's fine with you, maybe we can reschedule it to 7:30 IST. Sure, makes sense. Wouldn't that be too late? I hope it's not too late for India. Once every two weeks should be fine. Because otherwise, even for other purposes, people aren't joining in the morning, and if we're not getting your inputs, the team can't proceed with their work — that's the reason. Okay. At least at this time you'll definitely be joining, so we'll get your input and can start working on those review points. Let's do it. Since it's once every two weeks, we'll also push the other members based in the US to join. Yeah, sure. I hope this time works for the European region as well. Yeah, correct. Perfect. Anything else? No, Satya. Okay. That's all, Arun. So once you have that document written up, we'll go through it and see if anything needs updating. And you can reschedule this call to this time. Okay. One quick question: does anybody among you have access to the calendar — for creating calendar invites on the mailing list? Okay. Let me stop recording.