All right, welcome everybody to the April 6th Hyperledger Technical Oversight Committee call. As everybody on the call is aware, there are two things that we must abide by: first, the antitrust policy, and second, our code of conduct, which is linked in the agenda. So for announcements today, we have the weekly developer newsletter that goes out each Friday. If you have something that you want to include in that newsletter, please do leave a comment on the wiki page that is linked from the agenda. As I pointed out last week, it's very important that if you have something special going on in your projects, you take the time to include that there so that people are aware of what's going on. So then for quarterly reports, we have the Caliper report that came in. I haven't had a chance to look at it, but I think the last time I looked at it there were two people left to review it. So if you haven't reviewed it yet, please do so. For the Besu one, there are quite a few reviews that have happened so far. There was a question in the report itself for the TOC about, I think, CI/CD, and whether we were going to be delivering a unified CI/CD strategy across all Hyperledger projects. I know we've been talking, obviously, about GitHub Actions and some best practices, but I did want to make sure that we were aware of this question that was asked and see what we might want to respond to the Besu group so that they know what's going on. Tracy, I know the leadership team is in constant touch with GitHub on GitHub Actions, and Ry is also doing experiments on how to give visibility into the build times, so that we can also make suggestions on optimizing the build pipelines for PRs versus merges. Do you think it's the right time to call out those things, or do you think we can discuss those later? So, yeah, I think we do have GitHub Actions in the discussion topics.
I think it's worthwhile for the Besu report maybe just to let them know that those discussions are underway, and that if they'd like to find out more they can listen to some of the past recordings to see what we've been talking about in those areas. Thanks, makes sense. Yeah. Okay. And then for past-due reports, the Transact one: Peter did open up an issue and a pull request this week, two days ago. I haven't seen any responses from the Transact maintainers on that pull request or that issue. So I think we'll give it another week, but if we don't see any sort of response coming in, then we should consider opening up a ticket to end-of-life Transact. Thoughts on that? Concerns with that schedule? Sounds good to me. Okay, great. Thanks. All right, and then for the Ursa report. Stephen, you did send a reminder out to the Ursa group the last time we met. However, I didn't see anything come back, unless it's been responded to today. You sent it to Brent, and my understanding is that maybe Brent's not working on the project anymore. So we may need to figure out what to do moving forward with Ursa. Where did — did you hear that from Brent? I'm not sure where I heard that. Okay. It's hard to tell. I mean, Jen doesn't say much about what they're doing at all. He's been in and out, and tends to respond and then not respond and then, you know, process some PRs, so I'm never quite sure. I've never heard specifically that they're in or out, or doing or not doing anything, so it's really hard to figure out. So unless you heard it firsthand, I would say let's keep pursuing them and see if they answer. All right. So that's what we've got for project reports. Upcoming reports: Cacti and Fabric are due next week. Any questions or other comments on the project reports and the projects? I got a reminder about the Indy report. Should it not be on the list? Maybe not.
Let's open up that calendar, Ry, and see. Not that I need everyone to watch me do this, so sorry about that. That's okay. There was an email there, I saw at the top, that said it was due — oh, May 4. Okay. Here I was thinking — it sends it a month in advance, I guess. You can always send more, Stephen. Well, if you miss it. Well, I have the reminder set for one week in advance. Oh, so it's May 7 now — I mean April 27. Well, I mean, the report is due on the fourth of May, right, and I send a reminder letting people know, hey, your report's due. Okay. Yeah. I'm sorry. I send — I mean, the list automatically sends without my intervention. Yeah. Oh, good. Sorry, I didn't mean to spend time on that; I could have looked it up myself. I apologize. Okay, Stephen, just in front of everybody: what I do is, when I go through and create the agenda, I look at the project update calendar, and whatever is due in the next week I put in the upcoming reports, so that people are aware that something's happening. I don't know what's happening there, because there's nothing due on the 20th, but then on the 20th you'll see the Sawtooth one show up as being due on the 27th. And then I'll also remind about Aries, Indy, and AnonCreds coming up. So I basically look at this calendar and put whatever's coming up on the agenda so that people are aware something's supposed to be happening. Okay. So any other questions or comments about the reports? Okay. So next we have our discussion items, two topics: the first one on GitHub Actions and an update there, and then the second one will be our task force discussion. So, Ry, shall I hand this off to you? Sure. First, I just want to ask who the other "hyperledger" org is, and perhaps they could change their name. Oh, it's Ben Thomas. All right. So we got put into a beta group, so we're going to look at the data.
The data contains historical data: you can look here and see that this job, in February of last year, took 46 seconds to execute. And if you have a more complex job, you can go to the top, look at the summary, and see all the dependent jobs and all the errors. And then you can see the usage. This is broadly available, but it's on a per-run basis, not an org or repo basis. So this is very helpful, but this data will need to be aggregated, and it's not easy to get. So I just want everyone to know I'm working on it, and if anyone has questions, you know where I live. The information will be useful for those of us who have jobs that are running long, to see where time is being taken — yeah, like this one here. So definitely take a look at your actions and the usage and see if there's anything that you might be able to do to speed things up, or update those actions. Yeah, so I just wanted to ask Ry a bit more: what kind of aggregation do you actually want to do — per repo, per project? All of the above. Right. So I want to be able to go and look at Hyperledger and see how many minutes we've used, then at a repo, how many minutes they've used, and then at a job level; I mean, that's fine. But one thing this doesn't show — and I was told going into the beta that it's not part of their roadmap — is the runner type. It also doesn't show queue time. So I would like at least to see queue time: how long was this job waiting before it got picked up? And that data is not really easily available. Right. So I have a few questions and probably a few comments for Ry, and I want to understand this more in detail. I know we are now concerned about the build times.
The cost associated with CI pipelines is now going to be based on how long we are consuming the resources. But at the same time, I don't think we should be that prescriptive in telling projects how their CI pipeline should look. There should be flexibility for them to choose what kind of tests they would like to run and what kind of validations they would like to make before merging a PR or after merging a PR. So, being not that prescriptive, but also not fully hands-off because there are now charges — it's just a question for understanding. I think we should come up with policies around the minimum set of things that we would like to have as part of any pipeline, or at least write down recommendations saying: if your pipelines are going to take this long, then follow these recommendations, and there is a guideline or help available from Hyperledger to help you create a pipeline which is more optimized. So, I'll answer, I think. This is something that Hart has been discussing for a while: how do we allocate money based on CI costs? That would create some incentive for projects to keep their numbers low so that they have plenty of headroom — if you're a graduated project you get, whatever, $1,000 a month; if you're an incubating project you get 500. I don't know what those numbers are going to be. I definitely — like these jobs, I looked into them. These jobs timed out loading the yarn cache. So obviously the yarn cache was a detriment to this job's success, and I haven't looked into the logs; I wanted to see how much time was spent on the yarn cache. You know, I don't care that much — I just found out this morning, or yesterday, that we had this, and I started looking around. As far as policies, we do have some options for Actions. We can mandate required workflows.
And we're not doing that. We could set that to be something, but I don't know that it's a great idea. Yeah, I was just going to jump in and say basically that I agree completely with what you've been saying. You know, we have been talking to GitHub and we have been trying to get more resources, but eventually, no matter what, we are going to have to place some limits. Right now there is essentially no incentive for people to write efficient tests. So we hope that if people write generally efficient tests, then we should never have any resourcing issues. I'll go back to Ry. One suggestion we can probably make to maintainers — most of them are probably already doing this — is that if they have multiple complicated actions, each of which repeats the same setup steps and then just runs different tests, they can bundle them up into one job, and that would save a lot of resources, I think. Yeah, sure. I come from a CI/CD background — at both Qualcomm and Microsoft I spent a lot of time on this — and there are a lot of best practices that we could mandate as Hyperledger. But for that to be practical, we would need people who are committed to doing code reviews of all the CI/CD stuff, and we have 150 active repos right now; I just don't see that as tenable. Sorry, two topics. One, I want to ask and understand more of what Hart just mentioned, but the other big question, Ry, is on the report that you were showing: I think there were two columns, one was the build time and one was the billable time. Can you explain what that is, the second column? Sure. So right now we're running free, right — we're running free Actions, so there's no billable time. However, if you go into our billing and plans,
you will see that — oh, it doesn't show you. We only get 2,000 minutes a month, but we are well over that. This was one of the bugs that I complained about in our call with GitHub: I can't tell you how many minutes we have actually used on what platform. I can't tell you. So that's why the number is zero. Thanks for explaining that, Ry. And the other question, or rather thing I'm trying to understand, is what I just mentioned: Hart talked about incentivizing projects based on how efficient their pipelines are. Do you think this is something we should discuss as a TOC, or is this something else? That's just a question. When I think of it from a TOC standpoint, I don't see anything else we can say other than this: I don't think we from the TOC are right in allocating how much each project should be using. Maybe we can talk about how much a graduated project is given preference versus others. Rather, I would mostly focus on the optimization part of it and giving enough assistance where needed — maybe not to all 150 repositories, but we should have something in place to help the projects in case they need it. Well, I agree. The discussion is ongoing; I don't have the answers today. Yeah, and maybe, Dave, before you jump in, I wanted to point out a couple of things. One, we've been working through best practices — we went through CI/CD and what might be good best practices there. The other one was that one of the other task forces that was suggested was a CI/CD best practices task force, and we didn't get enough people stepping up to express interest in that. Maybe that's because this hadn't happened yet; maybe that one would have been a priority had we had this already going on.
So I think there are some things that could potentially fit right in with what you're suggesting. In those two task forces, we could spend some more time on that and talk about the kind of guidance you're suggesting we give to projects on what they should do with CI/CD. So, Dave. First, just a question to Ry, based on the comment that we're well over the minute limit — was it 2,000 minutes a month, someone said? Yeah. How could that be? Are they just not enforcing it? Right. Do we expect them to start enforcing it at some point? Well, we were well over the limit for runners — we were way over the 20-runners-at-once limit forever, and then, with no notice, they started enforcing it. And that's what I'm getting at: if they start enforcing this, we're going to be really feeling it, right? Yes. So, I don't have an answer. I'm very much interested in an answer; I couldn't get them to say anything about it yet. We are talking to them next week about what programs are available. One that might be available is an enterprise plan. I experimented with an enterprise plan last week on one of my test orgs, and it didn't give any better visibility or control. CNCF is an enterprise, and all of their orgs are under that enterprise. So if we had an enterprise account, we could, say, put Fabric in a hyperledger-fabric org, and then we could cap the spending for that org. But yeah, and that goes to my next point — and I'm not saying we should do this — but in theory, couldn't we put every project in its own org, and then each project would get 20 runners? We could. There are other downsides to that as well. Okay, so when we talk to GitHub, can we say this is what we might do, but could we compromise instead — could you just maybe double the number of runners?
So instead of going from 20 free runners to 200-some free runners, could you just go from 20 to 40, something like that? I don't know if they have mechanisms for special cases, but I will absolutely ask. I think this is next week. I promise you, this has been at the top of my mind since everything went crazy, and I will report back, I promise. Okay, and then my final comment is on the prior conversations about policing projects and required workflows and things like that. I don't think we should do any of that; I think we should leave it up to each of the projects. For Fabric, I know we have some egregious use of CI, with a lot of overlap: running across different platforms, running similar functions in different tests. So I know there's some egregious use that we plan to streamline, and I suspect other projects are in the same boat, so I think we can get things down a lot without really too much impact to the projects. So, when I was thinking about required workflows: to me, something that would go in required workflows would be the DCO bot, right? I don't see it anymore — or something like that, or a spell check. No, that doesn't replace, as far as I understand. If I were to add a workflow here, it says it runs alongside the repo and branch merging workflows, so it's not a replacement, if that's what it sounded like. So I'll put this link in the TOC chat. Yeah, my main point, I guess, is that I think Fabric, and I suspect other projects, can do a lot to streamline our CI use. As we said last week, when everything was free and there were no limits, that led to a lot of bloat in CI — there was no penalty for having redundant workflows and things like that. So I think we can do a lot to reduce it while you're doing your work with GitHub. That's it. Thank you.
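The consolidation suggested here — merging jobs that repeat the same setup before running different tests — can be put into rough numbers. A hypothetical back-of-the-envelope sketch; the job counts, setup minutes, and run counts below are invented for illustration, not any project's real figures:

```python
# Hypothetical savings estimate for consolidating CI jobs that each
# repeat the same setup (checkout, toolchain install, dependency
# restore) before running different tests. All numbers are invented.

def setup_minutes_saved(num_jobs, setup_minutes_per_job):
    """Minutes saved per workflow run if the shared setup runs once
    instead of once per job."""
    return (num_jobs - 1) * setup_minutes_per_job

def monthly_savings(num_jobs, setup_minutes_per_job, runs_per_month):
    """Projected billable minutes saved per month."""
    return setup_minutes_saved(num_jobs, setup_minutes_per_job) * runs_per_month

# Example: 6 jobs sharing a 4-minute setup, ~300 workflow runs a month.
print(monthly_savings(6, 4, 300))  # -> 6000 minutes a month
```

Even with modest numbers, the duplicated setup dominates quickly, which is why the bundling suggestion earlier in the call matters once minutes are billed.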
So, practically speaking, when they enforce the limit, what does that look like — they just stop launching runners? Yeah, that's what happened. I guess the minutes would be the more concerning one, if they start enforcing that. Yeah, they're already enforcing the runners, and that's what got us into where we've been for the last couple of weeks, right? Yeah, but that means things just get queued, right? Right. But if they start enforcing the minutes, then they could just say you can't run anymore for the rest of the month. Right. Exactly. That would be extremely problematic. I agree. One thing that I could do is put us in an enterprise plan — you get a 30-day trial. But when I did that on my test org, I didn't see what I wanted. They have a thing that says, when you leave the free trial, everything will be just like it was. Well, that wasn't true, and I had to go and manually undo some things. And I think if I were to go here and click the I-want-a-free-enterprise-trial button, and in 30 days we didn't come to some configuration and let that happen — when we left the trial, there would be a lot of manual work to undo going into the trial. So in an emergency, if they start enforcing the Actions minutes, I can start a free trial; that would give us 30 days. But that's all it would give us, 30 days. So. Yeah, but don't do that. Yeah. And my thinking is that the only reason it would stop at 2,000 minutes is if you put some sort of spending limit on this account, right? We never got charged anything. Otherwise you're going to actually incur cost, would be my guess — like, I would expect them to basically say, okay, minute 2,001, now you're starting to get charged for the usage. Right. And when I explored this with my other org — you can't. It's not easy.
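The rollup Ry wants — org and repo minutes, plus per-run queue time — isn't exposed directly, but it can be approximated from per-run data. A minimal sketch, assuming run records shaped like GitHub's workflow-runs REST API (`created_at` and `run_started_at` are real fields on a run object; the repos and minute counts below are invented sample data, and actual API fetching and pagination are omitted):

```python
# Sketch of the usage rollup discussed in this call: total minutes per
# repo (the org total is the sum) and per-run queue time. The dicts mimic
# fields from GitHub's workflow-runs API; the numbers are invented.
from collections import defaultdict
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

def queue_seconds(run):
    """Approximate queue time: delay between run creation and pickup."""
    created = datetime.strptime(run["created_at"], FMT)
    started = datetime.strptime(run["run_started_at"], FMT)
    return (started - created).total_seconds()

def minutes_by_repo(runs):
    """Aggregate per-run minutes into per-repo totals."""
    totals = defaultdict(float)
    for run in runs:
        totals[run["repo"]] += run["minutes"]
    return dict(totals)

runs = [
    {"repo": "hyperledger/fabric", "minutes": 46.0,
     "created_at": "2023-04-06T10:00:00Z", "run_started_at": "2023-04-06T10:05:00Z"},
    {"repo": "hyperledger/cacti", "minutes": 360.0,
     "created_at": "2023-04-06T11:00:00Z", "run_started_at": "2023-04-06T11:00:30Z"},
    {"repo": "hyperledger/fabric", "minutes": 12.5,
     "created_at": "2023-04-06T12:00:00Z", "run_started_at": "2023-04-06T12:01:00Z"},
]

print(minutes_by_repo(runs))             # {'hyperledger/fabric': 58.5, 'hyperledger/cacti': 360.0}
print([queue_seconds(r) for r in runs])  # [300.0, 30.0, 60.0]
```

Queue time has to be derived this way because, as Ry notes, it is not on GitHub's reporting roadmap; the creation-to-start delta is only an approximation of it.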
If we were to go to Team, it would be $4 a month per user, and we have way too many users; that would only give us — you know, this says it's free for public repos, but I don't know what that means if we're capped at 2,000 minutes a month. Another thing that we're going to run into is GitHub Packages: we're way over the 500 MB limit. So I don't know if they're going to start enforcing that at some point. I wouldn't have given this any more thought this year until they turned on the runner limits and caused chaos, so now I'm very concerned. This looks risky to me, not just on the GitHub Actions but also on the packages size, given the number of projects we have under the same organization — I'm talking with respect to even the enterprise plan that they're showing. Let me see. So, packages — I'm trying to see where it was that I found the usage per repo. I don't remember where I found it, but look here: here are the repos that are over the cache limit. One is using 23 gigs of 10, Aries is using 20 of 10 — so I don't know if they're going to enforce that. I have so many unknown unknowns at this point. I'm thinking about disabling the cache in the Cacti CI jobs, because they had this bug for months where they throttle the bandwidth of the caching operations to 0.0 megabits per second. And that's one of the reasons why we have jobs in the CI that are stuck for six hours before they time out: all they do for those six hours is download the cache at this dial-up modem speed, and then it just times out and never even finishes. So I recommend everyone else also looks into that — if some of your jobs time out, it may be because of the cache.
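The over-the-cache-limit check Ry walked through can be done mechanically once per-repo usage is in hand. A small sketch, assuming a 10 GB per-repo cache limit as described on the call; the repo names and sizes are invented stand-ins for the real usage view:

```python
# Mechanical version of the cache check above: flag repos whose Actions
# cache exceeds the per-repo limit (10 GB, per the discussion in this
# call). Repo names and sizes below are invented examples.
CACHE_LIMIT_GB = 10.0

def over_cache_limit(usages, limit_gb=CACHE_LIMIT_GB):
    """Return (repo, gb) pairs exceeding the per-repo cache limit."""
    return [(u["repo"], u["cache_gb"]) for u in usages if u["cache_gb"] > limit_gb]

usages = [
    {"repo": "org/repo-a", "cache_gb": 23.0},
    {"repo": "org/repo-b", "cache_gb": 20.0},
    {"repo": "org/repo-c", "cache_gb": 1.2},
]

for repo, gb in over_cache_limit(usages):
    print(f"{repo}: {gb} GB of {CACHE_LIMIT_GB} GB")
```

Running a check like this periodically would surface repos at risk before GitHub starts evicting or enforcing, which is the unknown Ry is worried about.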
Thanks — I had noticed that yesterday, or this morning, when I was poking around at the long-running workflows, that some of the Cacti jobs were running for over a day of compute time, just waiting for the yarn cache to repopulate. All right, any other comments at this point? I think Ry is on top of this; there's a lot that needs to be looked at and understood, and I think we will continue to get reports from Ry as and when he learns new information. Thank you, Ry, for the update. You're welcome. I think the next thing is the task force discussion on security vulnerability disclosures. This one is for you to take on — take it away. Thanks, Tracy. Everyone, thanks. So, yeah, before we get started, I wanted to recap what happened in our previous discussion, because that was about three weeks ago from now. The call we had was on the 9th of March, and that was an excellent call, I would say; we had various topics that day. We discussed a lot about policies that we can enforce from within the Hyperledger Foundation, we discussed the embargo list, we discussed the activities that the project representatives are supposed to do within the security subgroup that we were talking about, and we also discussed some of the ways we can inform security researchers when they report their issues. That was a good discussion, so I would say: please go and listen to the recording from the 9th of March if you missed the previous discussion. And I know towards the end of the call on the 9th of March we had two open topics which were not completely discussed, which we are supposed to discuss. Once we have these two topics as well, I'll go ahead and put all the suggestions that came in through these task force discussions into PRs. And I also see that Hart posted a link just now — go ahead. Hey, so thanks.
So I took a bunch of these suggestions from that meeting and put them together into a sample template for a vulnerability disclosure policy. You all can take a look at that — I finished it up last night, so I apologize for not sending it out in advance, but it wasn't done. I wanted to write something that was based on the OpenSSF document — or sorry, the OpenSSF doesn't have a document; I wanted to make something based on the OpenSSF recommendations, customizable for different projects. The idea is that projects could just edit out the stuff in red and the stuff in brackets here, and have a reasonable vulnerability disclosure policy. That being said, this is a first draft, so I probably made a bunch of mistakes, and it would be really awesome if you could point those out to me. And really, I would like to be able to take this back to the OpenSSF vulnerability disclosure group and have them also give us an okay on it, and then we can go from there. So, yeah, I don't expect people to have lots of immediate comments on this — I literally just shared it — but over the next week or so, we would love to have your feedback. Are there any immediate questions? Anyone? I think we need to read through this and then go back to the meeting recordings and map them, to see if we missed anything. Thanks, Hart, for putting this together. We also discussed creating the governance repo under the TOC for putting up these guidelines. Yeah — I mean, we might also contribute this back to the OpenSSF, because they don't really have a template that you can use for a vulnerability disclosure policy, right? They have a bunch of information on what you can do as a policy, but they don't have an example policy a project can just pick up and use. And that's what we're trying to do here. So, you know, customizable — there are a lot of ways that projects need customization.
Like, for instance, I know Besu has something like six or seven possible bug intake routes, but most of those, our other projects are not going to have. Getting through the document — it looks great. I think we should use this in Cacti, especially the part about the response team that has at least three maintainers. So, thank you, Hart, for putting this together. This looks great. Well, thank you for the kind words, but I'm sure it needs some edits. So please, take a look — particularly folks who are security conscious. I don't mean to put anyone on the spot. No, no, it's okay. I will definitely have a look. So what I'd like to do is get the folks in this group okay with this, and then go back to the OpenSSF with it. That's the plan. I think that's a good plan. I think OpenSSF did develop a disclosure policy, but it's the other way around: they describe how they will handle vulnerabilities that they find in other people's projects. You know, there are some activities in OpenSSF focusing on trying to secure open source at large, and so they have this list of what they consider to be the top projects, and they try to find vulnerabilities across the board. And they figured, well, it'd be good to actually explain our policy for how we will handle the disclosure of vulnerabilities that we find through those activities. But it's kind of the opposite of this, right — it's on the other side of the spectrum. And we have a lot of projects that have custom needs; there's a lot of need for specialization. I don't think I even know all of the possible Besu intakes, right. Yeah — I guess it's not fair to expect people to comment on this now, but please read it over, comment, and feel free to reach out to me and ask me questions — Discord or email, whatever works for you. And we can go from there.
But I also see you listed the private patch deployment infrastructure. I wanted to bring up that topic for discussion one more time today, because we spent less time on it in the last call. Go ahead — I mean, it's your topic, you're the leader here, so go for it. No, I think this is good. So, yeah, this is what I wanted to bring up in today's call: we did discuss the private patch deployment infrastructure, where we raise PRs and do the development in a private manner which is not publicly disclosed, and only when it is merged is it available to the public; all the testing for the issue itself would happen on that private branch or private patch. I wanted this to be a guideline for the projects — that something like this exists, and something like this is what we encourage all the projects to follow, especially when it comes to security issues. It would also go in as a guide for projects and best practices. So, if all are okay, then maybe we can add this to the best practices as well. The other thing I wanted to bring up in today's call for discussion is the governance of these policies that we are going to set up — governance around the best practices that we have put up from a security standpoint. The reason this plays an important role is that we want to incentivize projects that follow best practices from a security standpoint, and we want to build confidence among the users of Hyperledger projects. We want somebody who's using Hyperledger projects to know that these projects are vetted through strong guidelines and strong recommendations, so they follow a high quality bar, at least when it comes to security practices, and users can be assured that there is no compromise when it comes to security. And we want to avoid risk which might arise in — sorry, let me put it in a better phrase.
So we want to do risk management: we want to make sure that in unforeseen circumstances the impact is minimized and the issues are minimized. We want to make sure that our governance rules have best practices around those issues, so that we reduce the blast radius in the worst case. So we need to have that infrastructure in place and make sure the projects are ready to be used in critical software environments for any of the use cases that we have. I know there are certifications that at least a couple of projects — Daniela mentioned this — have gone through for use in governments, at least in the US. So that's another reason why we need the governance. Now, apart from these, I know Hart gives the highest importance to security, he himself being a security researcher, so it's also a focus from the foundation that we put up best practices around it. And having said all of this, we want to minimize any extra work for the maintainers — without putting additional burden on them for how we get these things measured. And at the same time, we don't want any of these reportings to be available in public, and yet we still want assurance that these projects are following the best practices that have been recommended. So that's roughly why it is important and how we should approach it. I don't have a proposal put up in place, but what I want to bring up today is: does anybody have any recommendations or considerations on how we measure the success of a project once we have all these governance rules put up?
And I want to also say that we are not really ranking projects — we are not comparing one project with another, and we are not ranking them based on how well or how badly a project is doing. Rather, let's consider this a way of measuring how good a particular project is, without comparison to another project. So there is no comparison needed across projects; rather, imagine it to be like an absolute rating that a project can achieve when it comes to best practices. And this is an open question which I would like comments on. This is Tracy: how is what you're suggesting different than the badge — it used to be the CII badge, I think it's now called something else? So, the CII badge best-practice guidelines focused on a few questions that were to be answered by each project, and I don't think they cover all the aspects that we have in our recommendations. For instance, let's say we have a recommendation that the project maintainers who are responsible on the security side should respond, or at least triage and say whether a reported issue is a security vulnerability or not, within a given amount of time. There is no way for us to measure whether that's being followed, unless it gets escalated, or unless it gets told to us, or unless somebody monitors it from the foundation. That was just one example, one scenario, which we can consider difficult to measure. And I think one of the things that came up in the previous task force discussion — not this one, the previous security task force — is to assume that all projects have good intent in terms of making sure all the recommendations are followed, and then assume that they're all following the best practices unless somebody raises a question.
But that won't give us an accurate measure if there were no incidents reported for that project. Peter: Yeah, it is hard to define some sort of regular check for this. We could count the number of reported vulnerabilities, but that creates the wrong incentives, because if a project has more reported vulnerabilities than another, it could just mean that more people are using it, or that more people are researching what its issues are. So ultimately it could actually mean it has better security hygiene. Maybe some sort of crowdsourced checking process within the maintainers: every month, the security-responsible person or the security response team of one project could go and check on another project, and then report their findings. That way, someone else would at least sometimes look at what the other maintainers are doing, so if there's one project that's making a lot of security mistakes, it would get noticed sooner or later. This is as much as I can figure out for doing something that is not too intrusive and also doesn't take too much time away from everyone all at once. It would be a round-robin kind of thing: this month these maintainers check that project, next month some other maintainer checks some other project. And of course it could be on a volunteer basis, so that we don't throw every maintainer into the mix all at once, but only those who feel they can spare a little bit of time and know enough about security to take a quick look, which is not much. But yeah, it would probably have to be an opt-in thing. And the first step, just to evaluate how successful it would be, would be to ask the maintainers, in a sort of referendum or survey: would you actually be willing to volunteer to do this?
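The round-robin rotation Peter describes could be sketched roughly as follows. This is a minimal illustration only; the `Volunteer` type, the project names, and the `monthly_assignments` helper are all hypothetical and not part of any existing Hyperledger tooling:

```python
from dataclasses import dataclass

@dataclass
class Volunteer:
    name: str
    home_project: str  # a reviewer should never audit their own project

def monthly_assignments(volunteers, projects, month):
    """Round-robin: each month, every volunteer is paired with one
    project to review; the pairing shifts with the month index, and
    anyone who would land on their own project is bumped to the next."""
    n = len(projects)
    assignments = []
    for i, v in enumerate(volunteers):
        p = projects[(i + month) % n]
        if p == v.home_project:  # skip own project
            p = projects[(i + month + 1) % n]
        assignments.append((v.name, p))
    return assignments
```

An opt-in survey, as suggested above, would determine who goes on the `volunteers` list in the first place; the rotation itself is then purely mechanical.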
And then if we get, I don't know, 10 or 15 maintainers who say "you know what, I wouldn't mind doing this once a month on some other project, so that others will also do it on my project," then I think that's a start. Of course, if we don't have any volunteers, then that's a different question. Tracy: Yeah, I wonder about two things. One, what's the goal of this? Is it just to determine whether or not they followed the processes for a CVE or security issue that was reported? And if so, why is that not just part of the process? Once you finish the work to figure out the security vulnerability and report it if necessary, one of the things you're looking at is the date it was reported, the date it was closed, and whether it's over the 90 days, or whatever the number is that we've decided on. If so, then we need to have a discussion at that point about why it took longer than the 90 days, what could be done differently, and move forward from there. I just feel like we're making this a bit more complex than we need to, and maybe that's just because I don't understand the reason we're trying to do this work. Okay, thanks for that question; a quick clarification. Of course, we don't want this to be policing; we are not police here. The only reason I wanted to bring this topic up is so that we increase confidence in Hyperledger projects, so we can claim that these practices are followed, and so the users of the projects have assurance that there are high standards of quality when it comes to security. I agree that some of this information we can extract from the reports, but then we need to be clear on who does that and whose responsibility it would be. Any thoughts on that, from anyone? Sorry for not raising my hand, but it seemed quiet.
Yeah, I think we want to have some metrics around, you know, whether a project is following responsible disclosure practices. If it's not, do we want to invest in things like security audits for them? Maybe not. So we want to have some incentives for projects to follow best security practices. I don't know how punitive we want to get, and certainly for labs, or for certain projects in incubation, it's probably not as important, so we will need some flexibility here. Given the nature of these reported issues, should it be the staff who extracts this information, and perhaps they make a decision? Sorry, go ahead, Peter. I think part of it is automatable: the part that Tracy mentioned, whether we are over the 90 days for a vulnerability or not. That part can be automated, and it could just run as a job once a day or once a week, generate a list of overdue findings, and then send that out somewhere, posted on the Discord channel or in an email to the mailing list. Well, note that we are over time. Right, yeah, we can continue this discussion offline as well. Thank you, everyone, for attending; we'll see you again next week. I think, Bobby, you're up next, if I'm not mistaken, with the task force discussion. Sounds good. All right, thanks, everybody.
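The automated check Peter describes might look something like the sketch below, assuming advisories are tracked with a reported date and an optional closed date, and assuming the 90-day window discussed on the call. The `Advisory` type and its field names are illustrative, not an existing Hyperledger schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

SLA = timedelta(days=90)  # response window discussed on the call (assumed)

@dataclass
class Advisory:
    project: str
    reported: date
    closed: Optional[date] = None  # None while the advisory is still open

def overdue(advisories: List[Advisory], today: date) -> List[Advisory]:
    """Flag advisories that stayed open (or were closed) past the SLA."""
    flagged = []
    for a in advisories:
        age = (a.closed or today) - a.reported
        if age > SLA:
            flagged.append(a)
    return flagged
```

A scheduled job could run this daily or weekly and post the flagged list to a chat channel or mailing list, as suggested above; no per-maintainer effort is needed beyond keeping the reported and closed dates accurate.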