All right, welcome everyone to the June 2nd Hyperledger Technical Steering Committee call. As you are all aware, there are two things that we must abide by: the first is the antitrust policy notice that is currently displayed on the screen, and the second is the code of conduct that is linked in our agenda. As far as announcements go, we have our standard weekly developer newsletter that goes out each Friday. If you have anything that you would like included in that, please leave a comment on the wiki page that is linked in the agenda. And then, does anybody else have any other announcements that they would like to make? So, for quarterly reports, we have the Hyperledger Cello report that came in this week. As far as I could see, we only had the one question, which I put in, and that has since been answered, so I don't see any additional comments. But I also only see that two of us have had a chance to review this, so given that, I will leave this on the agenda for next week, so that everybody on the TSC will have a chance to review that particular report. And then the FireFly report is coming due next week, so we're looking to see that coming in. Any comments on the quarterly reports? As you may have seen, I did send a follow-up email to Quilt. Their quarterly report was coming up, I think, next week or maybe this week, and so I sent them an email asking if they wanted to keep the project in a dormant state or move it to end of life, and they responded that it was probably time to move it to an end-of-life state. So I created a PR in the Quilt repo, and it was approved and merged by the maintainers. I just wanted to make sure that we were aware of that, and to get a quick vote to confirm that we are moving that to end of life. From the TSC, any comments before we have that vote?
Oh, just for the record, the Quilt project came out of the Web Payments Working Group at the W3C and did a lot of interesting things for identifiers for blockchains that track financial assets. It has done a lot of interesting work that affected what we chose to do with identity, especially when we worked on cross-chain identifier work for the DID specification. So thanks to everyone who worked on the Quilt project and brought it to its current state. But it hasn't been active for quite some time, as Tracy very well knows, and so it was good to see that folks were around to reply to the pull request to move to an end-of-life state. Yeah, definitely, thanks, Nathan. As we've said in the past to the other projects that have gone to end-of-life states: obviously, thank you for being part of the community to the Hyperledger Quilt folks, if they were to ever listen to this in the future, and thank you for the work they have done on that project and on working towards bringing Interledger to Hyperledger. Any comments? Okay. Ry, would you just take us through, not a roll call vote, but just an approve-or-abstain type vote? Sure. Well, there's no motion to vote on, and nobody seconded the non-motion. Oh, thank you. Yeah, it's so early here. Peter seconded it. I'll make the motion, as I helped approve the pull request. Okay, I'll second. So, in the matter of the TSC moving Quilt to end of life: is there anyone who wishes to abstain from the vote? Is there anyone who objects or wishes to vote nay? All in favor, say aye. Motion passes by voice vote. Thanks for keeping me honest there. All right, so in the discussion items, I just wanted to call out specifically that we have had two PRs that ended up getting merged into the TSC repo: one to rename CII to OpenSSF, and the other to add the security-related mandates for graduation to the incubation exit criteria.
I wanted to see if anybody here at the TSC had any comments about those particular PRs, because they didn't go through a formal vote here but really just went through the GitHub PR process. So if anybody has any comments, now is probably a good time to raise them. Okay. So that is, as far as I know, the end of the normal TSC meeting, but if anybody has any other items that we should talk about before we hand it over to Jim, raise your hand and let us know. All right, so Jim, I think it's up to you to work us through the task force on project health. Thanks, Tracy. Okay, so we didn't have an off-cycle meeting, but we did have a lot of good discussions in the last task force meeting in this TSC time slot; that was in April. Just as a refresher, we went through the dimensions that we believe are important to reflect a project's health, and that's in this column. It's in two large categories, focusing on the community side or focusing on the code side. On the community side we had growth as one dimension, plus diversity, retention, maturity, friendliness, and responsiveness. On the code side we had usefulness, production readiness, fundamental metrics, docs, and amount of innovation. For this call, I'd like us to discuss, given these dimensions, what type of data we think is useful to support them, so we can get some measure from data to reflect on each dimension. Before we dive into that, any questions on any aspect that we discussed last time? So altogether, I see there are 11 of them. No? That's great. Okay, so let's dive in. I've got a bunch of ideas, so we can go through them, and if you don't think they make sense, we can strike them out; or if you have additional ideas, then we should add them in. So, let me put this in edit mode. Okay, for project growth.
I would like it to reflect how many people are becoming newly interested in the project, so interested that they start contributing, either by discussing new features, discussing drawbacks, or even contributing code. So, I was thinking we can tally this, mostly driven by GitHub data: the number of contributors to the code base, scraping the PR data and the issues data, where people log requirements or just submit a bug report. I don't know if there are specific things we need to look at for both PRs and issues to separate real contribution from, you know, a casual one-time thing where they never came back. There's a lot of work in this area trying to use PR and issues data in GitHub to define who are the casual ones, who are the strong contributors, and who are the recurring or repeat contributors, so all that work can be applied here. We also want to look at Discord, if possible; here I think we need to consider technical feasibility. It's possible to analyze Discord and see, for a particular project, which users keep showing up week after week and have sustained discussions. That can be a good indicator of contributors who are mostly contributing ideas in discussions rather than code. So, do these make sense for growth? Hart has his hand up. So first of all, a somewhat orthogonal practical question: where should we be commenting in chat for this discussion? We want to use Discord, but do we want to use the TSC chat or the related task force chat? Let's go ahead and use the task force chat; there's one Tracy created for the health data. Okay. Yeah, I see it. Great. Secondly, I think this is a great start, Jim. I would also include other things like documentation, you know, are people working on that? And, I guess, community outreach: are people starting to do things like meetups? Is there outside evangelism taking place? Thanks.
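The casual-versus-repeat split described above could be tallied with a small script once PR author logins have been scraped. This is a hypothetical Python sketch; the three-PR threshold and the data shape are assumptions of the example, not anything the task force settled on.

```python
from collections import Counter

def classify_contributors(pr_authors, repeat_threshold=3):
    """Split PR authors into casual (fewer than `repeat_threshold` PRs)
    and repeat contributors (at or above the threshold)."""
    counts = Counter(pr_authors)
    repeat = {author for author, n in counts.items() if n >= repeat_threshold}
    casual = set(counts) - repeat
    return casual, repeat

# Hypothetical author logins scraped from a project's merged PRs.
authors = ["ana", "bo", "ana", "cy", "ana", "bo", "bo", "dee"]
casual, repeat = classify_contributors(authors)
print(sorted(casual), sorted(repeat))  # ['cy', 'dee'] ['ana', 'bo']
```

The same tally could be run over issue authors or chat handles to catch the people contributing ideas rather than code.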
Can you clarify the meetup, the outside evangelism aspect? You mean people leading, for example, meetups in a particular region that focus on FireFly? Yeah, absolutely. Okay, understood. Like, are people trying to grow the project, basically. These are just things that we know generally work to help get more contributors; are people doing them? Yeah, that makes sense. That's a good one. And I think this will be mostly just asking people, right, and having a place to log them. Sure. Well, I mean, these should all be logged in our events, right? That's true. Not always; certainly people occasionally do things on their own. But whenever people host a meetup or something, we encourage them to tell us, right? So do you think the outside evangelism is data to collect, or is it just, I mean, at the end of the day it should be reflected in the growth of the contributors in these areas, right? Well, I think it's a leading factor for those, right? If I want to anticipate how I think growth is going to be in the future, I think looking at that is really important. Does that make sense? Yeah, that makes sense. So the project is getting exposure through these meetups, and we probably should encourage them, because, as you said, it's a good way to lead up to the appearance of additional contributors. Yeah, so the earlier stuff is sort of, has the project grown, and the meetup kind of thing is, will the project grow, basically, right? Yeah. Yeah. Jim, I think maybe to figure out who the newly interested individuals are, we could do some sort of page view count on the CONTRIBUTING.md file, or whatever the contributing document is for the particular project.
That could be a way of seeing how many people have expressed an interest, by looking at the contributing guide instead of just reaching out on Discord or doing a PR, because that could then tell you how many people end up converting: somebody goes to the contributing guide; do they actually end up submitting a PR or not? The contributing guide, is that in our wiki, or...? So, according to our, what do we call them, the repo requirements, each of the projects is supposed to have a CONTRIBUTING.md file as an initial entry point. Some of those actually have text directly in them, and some of them point to a contributing file on their Read the Docs or something like that, so obviously that becomes a slightly bigger challenge when you're redirecting to a different page. But yeah, I think we at least have the requirement that everybody has a CONTRIBUTING.md in their repo. Yeah, I think that's a good idea. We may be able to collect page views for some of them; it may not be possible for others, depending on where they exist. Cool. Anything else on this particular one? I know, as a TSC, this is our main interest, the growth of the community, so if there are any ideas we're not thinking of, we should talk about them. Okay, so let's move on to diversity. Diversity is also very important. This is for fault tolerance, right? When one organization, for whatever reason, goes away, having diversity will keep the project alive. So the number of orgs contributing to the code base, that's probably the most obvious type of data we should collect. Do we have anything else that we should think about? In the Fabric reports I usually put the percent of commits and contributors from each organization. Well, I usually focus it on IBM, because that's by far the biggest, so I'll say how much percent is IBM and how much percent is other. Yeah. Thanks, David, that's a good point.
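David's percent-of-commits-per-organization figure from the Fabric reports could be computed like this. A minimal sketch, assuming commits have already been mapped to (author, org) pairs; in practice that mapping needs an affiliation lookup, which is the hard part.

```python
from collections import Counter

def org_commit_share(commits):
    """commits: iterable of (author, org) pairs -> {org: percent of commits}."""
    counts = Counter(org for _author, org in commits)
    total = sum(counts.values())
    return {org: round(100 * n / total, 1) for org, n in counts.items()}

# Hypothetical commit log already joined with an org-affiliation table.
commits = [("ana", "IBM"), ("bo", "IBM"), ("cy", "Acme"), ("dee", "IBM")]
print(org_commit_share(commits))  # {'IBM': 75.0, 'Acme': 25.0}
```

Running the same function over issue or RFC authors would give the roadmap-diversity view Hart asks about later.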
Another form of diversity could be language translations, for example: how many languages is documentation available in? Although I guess that could also fall under, perhaps, friendliness. Yeah, that's a good one, David. I think contributing issues is also important. I'm sort of trying to measure who's contributing to the roadmap, not only who's contributing the code. So that would also be issues and Discord. Yeah. Yeah, okay. But the thing I want to emphasize is that it's people who are also contributing to the roadmap, not just the code itself, right? Yeah, that makes sense. For the projects that have RFCs, that could be a good repo to take a look at to see who the contributions are coming from. RFCs or enhancement requests. Yeah, I just know a number of our projects have a distinct repo for RFCs, and so for those projects, if only one organization is contributing to that repo, then maybe there's not the diversity in the roadmap that Hart is talking about. That's very important. Thanks, Tracy. Yeah, as an example, FireFly had a separate repo for FireFly improvement requests, and we've had many opened by other orgs, very significant architecture discussions. That's a good one. Okay, so if that's it for that one, let's move on to retention. This is about keeping, continuing to attract, the interest of contributors; longevity. Having recorded the contributors from those efforts, do they continue to show up month after month, both in PRs and Discord? I guess the same type of data, but a different way to analyze it. So for retention, I previously had in the project reports the number of people who have become inactive in the last month, meaning that they haven't made a contribution in over six months, as well as the number of repeat contributors that you have.
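The inactive-contributor count just described could be derived from each contributor's most recent contribution date. A sketch, with the six-month window approximated as 183 days; both the cutoff and the data shape are assumptions of the example.

```python
from datetime import date, timedelta

def split_by_activity(last_seen, today, inactive_after_days=183):
    """last_seen: {contributor: date of most recent contribution}.
    Anyone with no contribution inside the window counts as inactive."""
    cutoff = today - timedelta(days=inactive_after_days)
    active = {c for c, seen in last_seen.items() if seen >= cutoff}
    inactive = set(last_seen) - active
    return active, inactive

# Hypothetical last-contribution dates pulled from commit and PR history.
last_seen = {"ana": date(2021, 5, 20), "bo": date(2020, 9, 1)}
active, inactive = split_by_activity(last_seen, today=date(2021, 6, 2))
print(active, inactive)  # {'ana'} {'bo'}
```

Diffing the inactive set month over month would give the "newly inactive in the last month" number from the project reports.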
If somebody's only contributed once, then I think that's good, but it's not what we're looking for; we want somebody who continues to come back and contribute. Okay. So let's continue with maturity. As we're going through the analysis of these dimensions, maturity should be sort of a qualifier, so that maybe a particular aspect should be treated differently for a young project versus a mature project. And maturity is probably a subjective thing that is not directly correlated to age, but we could definitely look at when the first line of code or first commit happened; that's the start of life. And the frequency of releases could reflect the maturity of the project. I don't know; we can probably have a discussion about this. The idea was that mature projects tend to have a more regular cadence, and they usually can do the release on that cadence. Active projects may have a faster cadence, but they tend to be more irregular, more on demand. And certainly less active and less mature projects will have fewer releases, so a longer cadence between releases. Does this make sense? Do we think that's a fair way to analyze maturity? I don't know if you had your hand up before this question. Oh no, it was for this question, but thanks, Tracy. Yeah, so I think the frequency of releases is a good metric of maturity. It's not necessarily, are you releasing every week, but do you have a schedule, and are you roughly adhering to that schedule? So I think that's a great thing, Jim; you should leave it in. On the other hand, I think other things that are great indicators of maturity are documentation. I don't consider a project mature unless it has at least reasonable documentation. Finally, certain best practices come to mind.
Like, I would not consider a project mature, for instance, if it were not following the best security practices. Security practices are the ones that come to mind for me, but I'm sure there are others that people could think of and suggest. That could also include test coverage, the way any new code gets added, or the release process; the definition for somebody to become a maintainer from contributing individual PRs; CI that automatically calculates test coverage and makes sure they're not dropping it, that sort of thing. So since we are looking at maturity of the project, a mature project would also follow best practices, and best practices can span a spectrum; that would include the way they engage with community members. The last part you said, can you repeat that? So, I was thinking more in terms of following best practices like having a roadmap for somebody to become a maintainer, if they want to start contributing. Gotcha. Yeah, I think that question touches on something that's even more important here, but a lot harder to measure, and that is: does the community have some cross-organizational way of organizing their progress? I mean, do they have some sort of roadmap, or active and regular discussions on what should and shouldn't be built, and when? I don't know if that's a metric, or if it's kind of a pass/fail of, is there such a thing that we can ask or watch for? I mean, are the leaders involving the community in defining the roadmap? The example here is: I show up with a really great new idea to improve your project; where do I go, who do I talk to, how do I find that out? What's the cross-organizational way of coordinating whether it should go in this sprint or next quarter? Yeah. It's somewhat adjacent to friendliness to contributors, but I don't think it's captured anywhere else, so I'll leave it in here. That's a good one. Thanks, Nathan.
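The release-cadence regularity discussed a moment ago, a schedule that is roughly adhered to, can be summarized as the mean gap between releases plus a coefficient of variation on those gaps, where a low CV suggests a steady cadence. A sketch; the choice of statistic is an assumption of the example.

```python
from datetime import date
from statistics import mean, pstdev

def cadence_stats(release_dates):
    """Mean days between releases and the coefficient of variation of
    those gaps; a lower CV means a steadier release cadence."""
    ordered = sorted(release_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    avg = mean(gaps)
    cv = pstdev(gaps) / avg if avg else 0.0
    return avg, round(cv, 2)

# Hypothetical monthly-ish release tag dates.
releases = [date(2021, 1, 1), date(2021, 2, 1), date(2021, 3, 1), date(2021, 4, 1)]
print(cadence_stats(releases))  # (30, 0.05)
```

An irregular, on-demand release history would show up as a much larger CV even if the mean gap is the same.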
Okay, so let's move on to the next one, friendliness. This is for potential contributors who have good ideas: do they feel they are being accommodated, and can they have meaningful discussions? Some obvious things: does the project define good first issues? These are things to help potential contributors get their hands dirty. Is going through the basic practices of doing a build, making a small fix, and then proposing a PR as easy as possible? When someone has been able to go through this end to end, it's much more likely they will contribute more in the future. So that should be a really good thing; we can be measuring whether new contributors are being onboarded week after week. The next one is a bit subjective: can new ideas be accommodated, even if they may lead to forking the code base? This was actually discussed on the last call: having robust discussions to resolve concerns from the community, rather than brushing them off and encouraging them to create a new fork if it can't be accommodated in the same code base. That reflects that the core committers really care about those new ideas from new contributors. So I think this is actually really important, and you're touching on a core point here, Jim. One of the things that I really like to see is sort of how far out of their way existing maintainers are willing to go to help new contributors. If a new contributor submits a flawed issue or flawed PR, how far are the maintainers willing to go to help them, right? I will credit Peter as someone who does an excellent job here: when someone comes along to Cactus with something that's not perfect, Peter will usually go out of his way to help them a lot. It's really fantastic, and that's really good for the community. So, one metric for this.
And then, sort of indicative of this is: what's the time to land PRs, or commits or issues, but mostly PRs, for core maintainers or core contributors versus new contributors, right? Are the maintainers addressing the actions created by new contributors? I think this is really important. I hope that made sense. Yeah, it definitely does. I think we're actually talking about an idea; I don't know if we've actually talked about how this will be captured, the type of data that needs to be collected to reflect it. So I think you could collect this, you know, in some way, like the PR or issue analytics for new contributors versus existing contributors, right? Now, you'd expect it to be a little faster for existing contributors than new contributors, but you don't want it to be glacial for new contributors, where someone creates a PR and it's just never looked at, right? Yeah, a PR closed without comment is another thing that I personally discourage. Yeah. Right. And I guess, sorry, I was just going to say: I added the last line to kind of capture what Hart said there, and I also added to the first line, so it's good first issues plus a help wanted tag. I think if you have help wanted, that implies that you have somebody who is willing to help in going through the PR process and working through a particular issue together. So I added that to friendliness as well. Yeah, thanks, Tracy; this makes sense. So I think Hart covered one of the questions that I wanted to ask: how long does a PR take from a new contributor, and how far is the community going to go to help them out? The second part to that is with respect to taking in inputs from new contributors, taking in new ideas, right? That's a topic that Jim also kind of brought up.
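The new-versus-established comparison just discussed could be captured by splitting time-to-first-review by cohort. A hypothetical sketch; in practice the hour figures would come from PR timeline data, and the field names here are invented.

```python
from statistics import median

def review_gap_by_cohort(prs, established):
    """Median hours to first review, split into new vs. established authors."""
    new_gaps = [p["hours_to_first_review"] for p in prs if p["author"] not in established]
    est_gaps = [p["hours_to_first_review"] for p in prs if p["author"] in established]
    return median(new_gaps), median(est_gaps)

prs = [  # hypothetical extracts from PR review timelines
    {"author": "ana", "hours_to_first_review": 4},
    {"author": "zed", "hours_to_first_review": 72},
    {"author": "bo", "hours_to_first_review": 6},
    {"author": "newbie", "hours_to_first_review": 48},
]
new_median, est_median = review_gap_by_cohort(prs, established={"ana", "bo"})
print(new_median, est_median)  # 60.0 5.0
```

A small gap between the two medians is expected; a very large one is the "glacial for new contributors" signal.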
I know in the past people used to ask questions on Rocket.Chat, and that used to lead to discussions, and eventually the maintainers, if they liked an idea, would go and add that feature, or if they didn't, they would discuss it and say that we are not going to do anything with that. And then we decided that we'll go with raising issues in all the repositories, but I don't think that's consistently enabled across all projects, because some of the projects are still not using GitHub issues as a source for taking in input; I know some of them are strictly following it. So, I think that's also a measure of friendliness: there should be a consistent way where somebody can come in and say this needs to be done, and probably start a discussion around that. So, what you're saying is: is the project defining a clear way for anybody to raise a concern or an idea? Sure, we can consider that; I know some of the projects may still prefer an external tool like chat. So, another thing that I think could be useful for measuring this is: are PRs and other issues and things addressed in a somewhat FIFO manner? You know, maybe they're not getting merged, but are people commenting on and reviewing things in the order that they're coming in? Yeah, like that, right? Nobody should cut the line just because you're a core committer. I mean, obviously not all PRs are equal, right: if there's a security fix, it's not the same as an improvement, and in general some PRs are good and some PRs are less good. But in general there should be some sort of process of addressing things in order, right? Yeah, that's awesome. Great ideas.
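The FIFO-ness idea above could be approximated by counting inversions: pairs of PRs where the older one got its first response later than the newer one. A sketch; whether simple inversions are the right proxy for fairness is a judgment call, since urgent fixes legitimately jump the queue.

```python
def out_of_order_fraction(first_response_hours):
    """PRs listed oldest-opened first; each value is when that PR got its
    first response. Returns the fraction of pairs handled in reverse order."""
    n = len(first_response_hours)
    inversions = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if first_response_hours[i] > first_response_hours[j]
    )
    pairs = n * (n - 1) // 2
    return inversions / pairs if pairs else 0.0

# Hypothetical: four PRs in open order, with first-response times in hours.
frac = out_of_order_fraction([2, 1, 5, 3])
print(round(frac, 2))  # 0.33
```

A value near 0 means largely first-come-first-served; a value near 1 means the queue is effectively being worked backwards.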
Right, so since we are talking about friendliness, should we also consider the project meetings? I know all projects are advised to have their own maintainer meeting, or maybe committer meeting, gatherings that happen bi-weekly or once a month. Other project meetings, right; this captures your idea. Okay. I just wanted to add to the addressed-in-order point: it's not so much that they're addressed necessarily in chronological order, but that there is a defined order, and that every organization has fair access to that order. An example of what we've seen in the past, where a process is broken, is where the primary organization, where all the maintainers are, sets its own internal corporate roadmap, and everything for that happens like clockwork, on schedule, while a person who comes with a change from outside the organization has to wait through an undefined other process. So you might see a pull request get merged in two or three hours when they're part of that org, but three or four days if they're outside. So, Nathan, you're saying there should be a well-defined practice for how the priority list of issues is to be reviewed? Yeah, but it doesn't have to be chronological; it just has to be fair and, you know, obvious. Meaning, if I come with a critical security patch, of course that might get merged well before someone's change to add a new API. Thanks for adding that, Nathan. Shall we move on to the next one? I feel more and more that responsiveness becomes very similar to friendliness; maybe we should consider merging them. But responsiveness is mainly how quickly things are done, especially when they come from, quote unquote, outside of the core maintainers group. It's very similar to friendliness, because when you are responsive, you're demonstrating friendliness to new contributors, and of course you are.
You are responsive within the core maintainers group because everybody wants to get the right thing done. So, shall we go ahead and merge responsiveness and friendliness into a single one? I think it's useful to separate them; I think Nathan illustrated it really well when he mentioned that there can be a core group of maintainers who are very responsive to each other but not necessarily responsive to outside contributors. And I've also seen projects where the maintainers are very slow to respond to each other. So I think it's fine to have this as its own category. So basically this is looking, across the board, at how things in general get done. Exactly. Yeah. Okay, that makes sense; let's keep them separate. So that applies to both PRs and issues in GitHub, and also to questions that are not responded to, though I don't know if that's even possible to measure. Can we tell the difference between somebody making a comment, or just a statement, versus asking a question in Discord, and whether there's a reply? This seems to involve some sort of AI to actually understand whether questions in Discord are being addressed or not, right? Nathan, you had your hand up before that question; I don't know if anybody has a good answer to it. I was going to say I don't have a good answer to that question. I'm not sure how much you can automate that, though there might be some best practice where we should acknowledge every post in some way, and we could train the maintainers on that so that it becomes measurable. I just don't have any ideas on how we would do that. The thing I was going to say is, I think Jim's comment hid a really important jewel, which is that anything that can be measured across everybody who contributes is more of a responsiveness metric than a friendliness metric.
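Short of the natural-language understanding just alluded to, a crude keyword heuristic could at least flag chat threads that look like unanswered questions. A hypothetical sketch; treating a trailing question mark as "this was a question" is obviously lossy, and the data shape is invented.

```python
def unanswered_question_rate(threads):
    """threads: list of (first_message_text, reply_count) pairs.
    Crude heuristic: a thread is a question if its text ends with '?'."""
    questions = [(text, replies) for text, replies in threads if text.rstrip().endswith("?")]
    if not questions:
        return 0.0
    unanswered = sum(1 for _text, replies in questions if replies == 0)
    return unanswered / len(questions)

threads = [  # hypothetical Discord channel export
    ("How do I build this on ARM?", 2),
    ("Nice release!", 0),
    ("Is TLS supported?", 0),
]
print(unanswered_question_rate(threads))  # 0.5
```

If maintainers adopted an acknowledge-every-post convention, this rate would be directly measurable without any language understanding at all.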
And it felt to me like we've added a lot of things to friendliness that might belong in responsiveness, because we can't measure them on a who's-new versus who's-established basis, especially when we consider contributors who might only be doing drive-bys in the code but might be very, very active on the calls or on the email list. So it feels to me like pushing more of the metrics over to the responsiveness side might give us a more accurate view of what's going on. And at least in the case of things like no special treatment, or there's a roadmap in place, some of the items seem more like fairness or responsiveness, as opposed to friendliness to new contributors. Yeah, that makes sense to me; we could basically be more objective measuring overall responsiveness. Cool. So, Nathan, I agree with you; I don't think we can get perfect information on this. But I think we can handle some of these statistics with respect to friendliness by looking at statistics for maintainers versus statistics for contributors, right, or statistics by company, because we have all of that information. So, while some of these may be difficult, I think it's probably still possible to gather them, and I know that a lot of this stuff you can do with a couple of clicks in LFX right out of the box. All I'll say to that is: yes, that sounds great. Awesome. Okay. By the way, I see people are actually doing co-editing, so that's great; I know Ry has been encouraging people to do that on the wiki. So let's move on to code and see if we can finish everything on time. Usefulness: is the project being adopted by customers and users; are people actually using it? I was actually thinking about this when we were announcing end of life for Quilt.
Do we actually know if it's being used by customers? It would be great if we did, and could say: yeah, it may be end of life, but it's still got strong adoption; it's just not moving forward in terms of features. That's very different from a project that's been in Hyperledger and no one uses it, right? So I'm really curious: how do we get this? Obviously, we can ask and have people tell us; how we do that is the question. There are several things we could use. Docker pulls, but this can be skewed: if there are very frequent test runs against particular projects, you're obviously going to get a lot of Docker pulls that can skew the metrics. Release binary downloads, probably the same idea. And the next one was, I think, an idea from David: we could tag resources that announce the usage of a particular project, such as publications a particular project generates, and have that be tallied in some fashion. Do we know if it's practical to collect this? And do we want to create some sort of channel for our customers and developers to tell us the projects they're using? Maybe we can organize some campaign with David's department to encourage the world to tell Hyperledger what they are using, ideas like this. Any thoughts? I mean, this could be an opportunity to work with the SIGs. For example, if we give them a format for how to, as you say, tag things, we can try to encourage them to do that. I know that there's a lot of discussion about what's getting used for what solution at the SIG level, so I think that's where some of the data lives. And then we have the case study library, too, where that information is available. So those are the thoughts I had about where we can get this information.
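For the CI-skew problem with Docker pulls mentioned above, one rough correction is to subtract an estimated CI baseline from the daily totals. This sketch assumes CI volume can be estimated at all; the function, its parameters, and the numbers are all invented for illustration.

```python
def organic_pull_estimate(daily_pulls, ci_runs_per_day, pulls_per_ci_run):
    """Subtract an estimated CI contribution from daily Docker pull counts,
    flooring at zero, to approximate organic (human) pulls."""
    ci_pulls = ci_runs_per_day * pulls_per_ci_run
    return [max(0, pulls - ci_pulls) for pulls in daily_pulls]

# Hypothetical: roughly 40 CI runs per day, each pulling the image 10 times.
print(organic_pull_estimate([500, 520, 480], ci_runs_per_day=40, pulls_per_ci_run=10))
# [100, 120, 80]
```

Even with an imprecise baseline, the trend of the corrected series is probably a better adoption signal than the raw pull count.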
I think we also have to acknowledge that some of the most interesting things we'll never find out about. That's one of the reasons I really liked the hackathons that happened around the world: often someone would show up who had not been on our radar at all in any other place, and they were doing something interesting, but they wouldn't have talked to us if there hadn't been a kind of local opportunity to do so. So we should make sure, when we add this metric, we acknowledge that this is what we know about, not everything that's happening. Yeah. I think for the most part this is pulling information from the outside world; unless we create some sort of channel and encourage people to use it, we probably will never find out. Okay, so it sounds like the action here is, I guess, I'll work with David and Tracy and Hart and see if there are things we could do here. If we can collect this sort of information, it's great marketing material for the Hyperledger brand. All right, cool. I wonder about the certified vendors; they might be an interesting resource to see which customers, from their point of view, are using which technologies. Maybe we can reach out to ask them. Okay. There is a list, a distribution list. Daniel, you're not using the correct microphone. Oh, sorry, can you hear me now? Yeah. All right, there is a list for the HCSPs, but the HCSPs are just for Hyperledger Fabric right now; many of them do work across projects. Cool. And it'd be okay for us to contact them to ask these sorts of questions, right, Daniel? Yeah, yeah, we are. Awesome, that's great. So let's move on to the next one. By the way, anybody is welcome to make additional edits after the call. We only have four minutes left. Next is production readiness. We want to see if it's already past 1.0, and what's the test coverage of the code base.
Do they publish performance and reliability testing data? Is the user documentation of high quality? Anything else for this part? Bobbi? I just wanted to say, in case we run out of time, that the learning materials working group is currently creating a badging system for documentation: to move from incubation to graduated status, you must have all the pieces. And we're working on that to present to the TSC in the near future. So we would love to have a sort of standard view of the quality of documentation; it's very important. Thanks. Thanks to the working group for doing that. And Jim, just because Daniel's not here: he had, last term of the TSC, suggested a badging proposal, but it never made progress. We might want to go revisit that as we look at matching it up with what we're doing here with project health, to see if there are any additional items in that badging proposal that we missed here. Yeah, I'd be all for helping with that, because I know from our own experience with FireFly, the team can definitely use help on the best way to write effective documentation, so this will also be very valuable to the projects. As developers, we know how to write good code, not necessarily good documentation, and it can make a huge difference. Awesome. Sorry, this is taking longer than expected, so I think we have to stop now. I'll try to organize an off-cycle meeting to continue and finish those metrics. And again, everybody is welcome to do more edits and add your thoughts there. Thanks a lot for all the great discussions, and back to you, Tracy. All right, thanks, Jim. Yeah, so, as Jim mentioned, feel free to edit this document and add additional items or commentary to it. We'll be capturing it under the task force section.
There's the project health task force, and it has some meeting notes, so feel free to edit those and get your comments in. And with that, I'm going to close the meeting for today. Thank you all for joining, and we will see you again next week. Thank you. See everybody.