So, like all the meetings for Hyperledger, this one is covered by our antitrust policy, so, you know, no antitrust violations. Antitrust violations are fine on weekends with your friends, but don't do them here, okay? My goal here was to provide a forum for the TSC to talk to the product team who is developing the LFX dashboard, which was released last week (I sent some links around for that), to provide feedback and requirements to Shubra, and for Shubra to tell us about the future direction. So, Shubra, do you want me to hand the screen off to you, or how do you...

Yeah, I think it might be helpful if I give some high-level context, and yeah, if we can share my screen. I don't have a very polished deck, but I'll walk you through what we built. Is everybody able to see my screen?

I think you're good.

Okay, perfect. So what we've been working on: my engineering team and I have been creating this platform. It's a toolchain that we are creating and providing to all Linux Foundation and sister-foundation projects, and there are different tools in that toolchain. Insights is the one that is more focused on analytics, metric collection, and project health, and I'll go deeper into that. There are other tools in the toolchain: one is focused on security, where we are scanning and creating vulnerability detection reports, license compliance reports, and things like that, and creating a security bug backlog. There's the EasyCLA bot: if your project uses a CLA, that tool makes developers' lives easier; if not, you might be on a DCO. There's an individual dashboard, which is kind of the community profile for every individual in our ecosystem. And there are a variety of other tools, like one for mentorship and one for crowdfunding; Hyperledger projects and other projects are already using these successfully.

For Insights in particular, I want to skip a few slides here and actually show you what we are doing. From a business-problem standpoint, we interviewed a lot of projects and gathered requirements. Some of them sound like: code velocity is slow; we have a big community of contributors, but are there clogs in the development pipeline, and where exactly? Or: we are unable to identify the top contributors, individuals and companies, who are the influencers, and it's not just code; it could be different areas of the project ecosystem. Or: maybe user adoption is tapering off or the project is plateauing, but is it the code quality or a lack of awareness? Those are the different problem areas we were trying to solve.

So what we did is we started instrumenting to get insights (it's a cliché using the word insights again), to get data from these endpoints. We started with GitHub, we added Gerrit, we looked at Jenkins, Jira, and Confluence, started looking at the social channels and your earned-media data, and started instrumenting Slack. In Hyperledger, for example, we found out you use Rocket.Chat, so we instrumented that.
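To make that instrumentation concrete, here is a minimal sketch of what one collector endpoint might look like, pulling recent commit metadata from the public GitHub REST API. The repository name and the exact fields kept are illustrative assumptions, not the actual LFX pipeline:

```python
# Illustrative sketch only; not the actual LFX collector.
import requests

def fetch_recent_commits(owner: str, repo: str, per_page: int = 100) -> list[dict]:
    """Pull recent commit metadata for one repository from the GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    resp = requests.get(url, params={"per_page": per_page},
                        headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()
    return [
        {
            "sha": c["sha"],
            "login": (c.get("author") or {}).get("login"),  # None if GitHub can't map the email
            "email": c["commit"]["author"]["email"],
            "date": c["commit"]["author"]["date"],
        }
        for c in resp.json()
    ]

# e.g. fetch_recent_commits("hyperledger", "besu")  # repo chosen purely as an example
```

A real collector would page through results and authenticate, but the shape of the data (author identity plus timestamp per commit) is what feeds the metrics discussed next.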
We looked at where your email communications are happening, groups.io and whatnot, so we started instrumenting those, or sometimes maybe you're on Google Groups, so we instrumented those too. As we did, we found out that, to be honest, across the LF projects there are something like 30, 40, 50 different tools in use. We have instrumented 15 so far, and we have relatively good metrics on everything: pull requests, builds, GitHub issues if you're using GitHub (or Jira tickets if you're using Jira), your commit counts, the contributing companies, the contributing developers, lines of code added daily and weekly, which repos are active, how many downloads you're getting, how many people are sending chat messages and email messages, and who the top influencers are in all of those. Those are metrics, and based on those metrics we started to build some analytics around them. We are early, and I'll show you what we have been building, but I really want to brainstorm with this team: what are the other relevant analytics you can get out of the metrics we are sitting on?

Anyway, you can read about these, but essentially the goal is to get a full 360-degree view of your project, not just GitHub commits. The other area we are working on is building a contextual people view. If a lot of the backlog is falling onto a small set of maintainers, how do we identify that? How do we avoid maintainer burnout? What does that pipeline really look like?

These are just some of the features in the tool. We have affiliation management: if you're contributing code on behalf of yourself or on behalf of a company, that affiliation can be set automatically. Earlier we were doing it manually, but we looked at other projects like the kernel or Kubernetes, and a lot of them were using the gitdm philosophy. So we added that, but we also added a UI where individual contributors can go in and set their affiliation: this code is on behalf of myself, or my employer, or whatever. Based on those, we create a lot of leaderboards as well. These are some of the sources I listed for the telemetry we are gathering from multiple data sources. And we also try to slice these metrics into technical metrics or technical trends on one hand and ecosystem trends on the other, so that you start getting that 360-degree view.

So let me... I'm not sure, are you already using or familiar with the dashboards and Insights?

I've been pushing that quite a bit, so I'm going to say yes, but if you want to do like a...

Yeah, let me just do a very quick one. To get to it, all you need to do is go to insights.lfx.dev, and from there you can look at this. It's going to load a lot of projects. The way we have grouped them is that we have 73 project groups, and under each group there can be one to N projects. CNCF would be one project group; Hyperledger would be one project group.
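As an illustration of the gitdm-style affiliation resolution described above, here is a rough sketch. The map entries and the self-service override store are hypothetical; the real system also handles aliases, employment date ranges, and more:

```python
# Hypothetical data; the real affiliation system is richer than this.
DOMAIN_MAP = {                  # gitdm-style domain-to-employer mapping
    "ibm.com": "IBM",
    "accenture.com": "Accenture",
}
SELF_SERVICE = {                # overrides that contributors set themselves in the UI
    "alice@example.org": "Acme Corp",
}

def resolve_affiliation(email: str) -> str:
    """Resolve a commit author's email to an organization, else 'Unknown'."""
    if email in SELF_SERVICE:                      # an explicit self-set affiliation wins
        return SELF_SERVICE[email]
    domain = email.rsplit("@", 1)[-1].lower()
    return DOMAIN_MAP.get(domain, "Unknown")       # 'Unknown' matches the dashboard bucket
```

The "Unknown" fallback here is exactly the bucket that shows up in the commits-by-organization charts mentioned below.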
So if you search for Hyperledger and go to the Hyperledger group, you'll see all your projects represented there. This is based on our best knowledge of what projects exist under Hyperledger; if some are missing, you need to let us know so we can turn on the instrumentation. Each project card has its own set of metrics, but then we aggregate all of those metrics into a group summary; that's if you're looking at Hyperledger holistically. You can see here there's a section of technical trends where we focus on source control, your commits, as an example. Just keep in mind that the first time these dashboards load there's a cache being built, so it's a little slow, but after that it works lightning fast.

Here, when you look at just commits, you can filter by authors, by company names, by repositories, by projects, and this is the affiliation I was talking about. When it's commits percentage by organization, you might have some buckets which are "Unknown." Unknown really means we don't yet have the affiliation data: we found contributors, but we didn't know which company they were working for. Then you start seeing these time-series plots of active contributors, commits, and commits by organization, which are stack-ranked plots by each company in your ecosystem. Lines of code changed, who the key authors are, what the most recent changes coming in are, again broken down by organization as well, and which repos and which projects are the most active, just in terms of code-commit activity. There's a lot of data to read through here.

Just like we have commit data, if you're using Gerrit or GitHub we similarly look at all the PRs coming in. Once you have the PRs, you can look at more metrics: how are the PRs trending over time? What is the pull request status over time, how many are closed, how many are open, again broken down by individuals, companies, and whatnot. But the more interesting thing to look at is efficiency. This is where we are now trying to build towards: what kind of metrics would be good to track? Is just the lead time to close a PR important? We also have some backlog BMI (backlog management index) numbers; these refer to a metric set developed at the CHAOSS project, another Linux Foundation project, and they defined a lot of specifications.

Yeah, but I see that Dano has his hand up.

Oh, yeah, okay. I just had an idea for one of the metrics, so you can finish and I'll wait for the open discussion.

Okay, great. So in CHAOSS, they actually took the initiative and started defining these metrics, with focus areas like diversity, evolution, and risk, and each of these has a metric definition: types of contributions, activity dates, contributors, contributor location, organization diversity.
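One concrete measure in that organizational-diversity family is the elephant factor: the smallest number of organizations whose commits add up to half of the total. A minimal sketch, where the 50% cutoff follows the common CHAOSS-style definition and the input is just a list of per-commit organization labels:

```python
from collections import Counter

def elephant_factor(commit_orgs: list[str]) -> int:
    """Smallest number of organizations accounting for at least 50% of commits."""
    counts = Counter(commit_orgs)
    total = sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered * 2 >= total:    # this many orgs cover half of all commits
            return rank
    return len(counts)

# e.g. elephant_factor(["IBM", "IBM", "IBM", "Accenture", "Acme", "Acme"]) == 2
```

A low elephant factor means the project depends heavily on one or two companies, which is one of the risk signals these dashboards try to surface.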
So based on what we are gathering, we have been following this spec so far, but it doesn't really focus a lot on the analytics. That's one area where we are looking to get requirements. And then obviously you have a similar breakdown into Jira and into your CI/CD pipelines. If you're looking at your builds: how effectively are those builds running? What's the pass percentage? How many are passing, how many are failing? You can start looking into the actual jobs: how many got aborted, how many are unstable? These are raw numbers, but it might be good to plot them as derived metrics, like a build success rate. Similarly, we have build duration: if you draw a central mean and say this is the ideal build duration, then if you spike beyond that, it should alert, saying, hey, we had an anomaly here, because right now these are just raw metrics, if you see what I mean.

Similarly, we have things like container downloads, but the interesting one is around the ecosystem. In your case you are using chat, Rocket.Chat, as an example. Who is engaging? What are they talking about? What's trending? We capture those keywords from an ecosystem perspective, and these can be inclusive or you can exclude: some of these are filler words, and those are the ones I would exclude, but we have the configuration. If you give us the requirement, we can exclude words not to look for, or if you want to look for certain words, you can. Then you look at who is most active on Rocket.Chat and which companies they work for. And similarly we have all your email conversations; if you think about Stack Overflow, your email distribution list is more or less your Stack Overflow. These are the top topics, these are your most active mailing lists, and these are the recent messages happening on those channels. So we have all this data; we are sitting on a gold mine of data. I'm not going into every section, you can check it out, but that was a very high-level overview. I really wanted to come to this forum and talk about this: we heard that you are starting an initiative around defining some key metrics and analytics. We are looking to consume those, and maybe there's an overlap; if there is something net new, we are looking to add it into the product as well. Because at the end of the day, we are building these tools at the LF for consumption by all of our 500-plus projects. Okay, let me stop my share there and open it up.

So, one metric set that I'd be interested to see that I don't see on here for GitHub: I'd like to know who's reviewing PRs, who's approving PRs, and who's actually doing the pull. Because on our repo, only the maintainers can initiate the pull, and every single PR must be reviewed by another maintainer, including the maintainers' own.
So I'd be interested to see which maintainers are pulling more of the load in the reviews and the pulls versus which ones aren't. And that's more of a higher-level, maybe maintainer-burnout type of thing. But if it's one person doing it all, that's something that should be surfaceable, versus if it's shared; that's something I'd be interested to see.

Got it, absolutely. Yeah, I noted that down; I'm writing as we go. We were trying to get at that. For Gerrit, we were looking at approval rates; there's a dashboard for approvers, which is essentially the reviewers, in terms of Gerrit changesets. But for GitHub, we have to actually build that. In Gerrit we have approvals; in GitHub we don't. So that's definitely something we'll add.

Yeah, because how long it takes to get stuff reviewed and approved... yeah.

I think Arun was first. Oh, sorry, Arun.

Hey, I'd like to add on to that point. I also pointed this out in the document on the badging proposal. It would help us to know which organization, at least that if not individual-level data: for maintainers, which organizations are getting involved in the project, and to what percentage are they getting involved?

Yeah. And I want to point out that we do have a proposal Dano put up here, Shubra, and there is a lot of commentary at the bottom about how we understand project health. And I know that you rolled out project health the other day.

Yes, yes. Actually, I think that's something important I could probably show; give me one second. I'll just give you the URL. Even if you just go to insights.lfx.dev, you see that button there which says compare project health, and go ahead and add a few projects. A-R-I-E-S, A-D-E-S, yep. What would be another one? Or, yeah, you can add more. And again, say you're looking at 12 months. Here's one thing: when we built this original comparison chart, the set of metrics you're seeing was a set of requirements given by CNCF, particularly for the Kubernetes project. But we don't want one size fits all. So what we are looking for is this: if that Confluence page of yours has the kind of key metrics that you want to look at side by side in terms of trends, we'll definitely add them in, and maybe there's one specific to Hyperledger projects that you really care about.

Okay, Hart, yeah.

Hey, thanks. So I've already found these tools really useful, particularly for showing people who are just getting familiar with the projects sort of what's going on. The one thing I think would be really useful, at least for companies, is something I'm going to call upper-management mode. Ideally, for me, there would be a portal that showed all of our contributions across all projects in Hyperledger. What I want to use this data for is to say: look, if you want to do X, we need more resources, stuff like that. If I can use this kind of data to pitch to people that we need more resources on this project, that would be really fantastic from my perspective.

Got it, got it.
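For the review-load metric Dano asks about above, here is a minimal sketch of pulling review activity from the GitHub REST API. The counting policy (all review states lumped together) and the repo/PR inputs are assumptions for illustration:

```python
import requests

def review_load(owner: str, repo: str, pr_numbers: list[int]) -> dict[str, int]:
    """Count submitted reviews per reviewer across a set of pull requests."""
    load: dict[str, int] = {}
    for number in pr_numbers:
        url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}/reviews"
        resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})
        resp.raise_for_status()
        for review in resp.json():
            user = (review.get("user") or {}).get("login", "unknown")
            load[user] = load.get(user, 0) + 1  # APPROVED, CHANGES_REQUESTED, COMMENTED all count here
    return load
```

Filtering on `review["state"] == "APPROVED"` would isolate approvals, and the `merged_by` field on a merged pull request identifies who actually did the pull, covering the three roles Dano distinguishes.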
So, a couple of things so that I understand it correctly. Could you pull up just one of the normal projects, not the comparison ones? Like if you can pull up any Hyperledger project, doesn't matter which, Aries or... yeah. Maybe. Yeah, there we go. Okay, so if you look at source control as an example, or even here, you're seeing the top 10 companies. But if you look at that source control view, once it loads, we have a filter today which is a company view. So on that list you have commits by... yeah, you could do it there, or you could do it just on the graph and hit apply changes. So this is a filtered view for one company across the areas they are contributing. Now, this is per project, but if you do it at a higher level, by Hyperledger, in the project group summary, that gives you that view. Yeah, click the project group summary tab next to all Hyperledger projects, go to the same view, and you can apply the filter. You can even do the filtering from the dropdown on the commits percentage by organization chart; if you click IBM, or you can just search. It works. Yeah, just apply.

So, see, if you look at Accenture and scroll down, these are again based on the time range; this is just the last 90 days, so go and click apply for the last year. And if you scroll down all the way, yeah, these are essentially the people who have been contributing from Accenture, at least based on the affiliation we have, and these are the repositories and the projects where Accenture has been contributing, just across Hyperledger. We have this, but you have to do the filtering. If I'm understanding your requirement correctly, though: let's say you were company Acme and you log in, and you want to look globally across all of LF, any project, any project group, any area. Is that the view, at the company level?

Yeah, that would be one particular thing, yes.

Okay, okay. So we are building something in the pipeline. I don't want to... okay, if I can share, sorry.

Well, keep in mind that this is a public meeting and the recording will be shared.

Okay, that's fine. My main thing is that there's something we are building that's in our roadmap. I don't think it's secret anymore; we have already put it on our website. But let me show you a snapshot. It's in design mode, but we are calling this the organization dashboard. This will be built next year. Based on a company logging in, we actually plot your memberships, and we look at all of those kinds of metrics for every project. So we have something in the works, but it's super early for us. Absolutely, we got the requirement, and when we are a bit more mature on the company-specific view, we'd definitely like to come back and present it.

Awesome, thank you very much.

All right. My other question was, I think, related to the presentation. On one of the screens, you said that anomaly detection is possible. I would like to know how we can make that an actionable item.
If we have an anomaly, is there a possibility of alerting through it?

Yes. So we don't have alerting today, but that's what we have started working on. Today, what we have is mostly just a graphical indicator that there's a pattern, it's in the red or whatever, but that's not enough; there's no event triggered based on it. And then there's what CHAOSS defined. If you look at some of those efficiency charts, scroll up and click on efficiency, on the GitHub PR efficiency, as an example. Yeah, the third one there. If you look at this, you see the time to merge. This is a threshold they defined, zero to seven days; let me see if I can annotate this a little. Yeah, exactly that section. The CHAOSS group defined what range of days should be considered a healthy period, or a warning, or a danger, and we use those metric specifications. But think about it: these are hard-coded, and they can differ from project to project. For an extremely busy project versus a project that does releases every three months, these numbers could be quite off. You see what I mean? For an extremely busy project with hundreds of pull requests coming in every day, if you wait seven days, that means there is an issue; it's not healthy. Maybe there are too few maintainers, or the PRs are not getting reviewed in time and you have thousands of PRs backed up. But for a project that makes releases every quarter, PRs might come in a bunch, or maybe they're 200,000-line commits, which, again, you can debate whether that's a healthy practice or not; generally we would like smaller iterative commits versus a big-bang 500,000-line code commit. So those numbers can vary.

Here is what we are trying to do. We could work with hard thresholds; that's not an issue. If you tell us these are good thresholds to monitor against, we can set those indexes. But what we really want to do is turn on, and we're looking at some machine-learning algorithms for this, just normalizing the data: what happens day to day, since every project can have different cyclic activity. Based on the predictable cycle, your ups and downs and seasonal changes, it takes some time to normalize the data. But the point is, if the data is outside that normal operating band, we create an anomaly, and the anomaly triggers an alert. If you want that alert to go to a TSC email group, or a TOC one, or maybe even to the individual developer, we could do that. That's what we have started working on. So far the open question is: should we start with hard alerts, meaning hard thresholds, or should we really just look at how the project is doing over time?

Well, I think that kind of gets to the core of what I hoped to cover. And I see Arnaud has his hand up. So, Arnaud?

Yes. Hi guys. I wanted to follow up on Daniel's point on the reviews. I think that's in general something that is not being rewarded enough.
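To make the two alerting approaches Shubra contrasts concrete, here is a minimal sketch: a hard CHAOSS-style threshold classifier, and a crude normalized operating band that flags values far from a project's own recent history. Only the zero-to-seven-day healthy range comes from the discussion above; the 30-day warning boundary is an invented placeholder, and real seasonality handling would need much more than a rolling mean:

```python
import statistics

def classify_time_to_merge(days: float) -> str:
    """Hard-threshold classification; 7 days per the range quoted above,
    30 days is only a placeholder for the warning/danger boundary."""
    if days <= 7:
        return "healthy"
    if days <= 30:
        return "warning"
    return "danger"

def out_of_band(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag `latest` if it falls outside mean +/- k*stdev of recent history,
    a crude stand-in for a learned normal operating band."""
    if len(history) < 2:
        return False                    # not enough data to normalize yet
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    return abs(latest - mu) > k * sigma
```

The second function is what makes the same metric mean different things for a project merging hundreds of PRs a day versus one that releases quarterly: each project is compared only against its own history.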
For one thing, I've been trying to get maintainers in my project, Fabric, to take it very seriously, and I expect maintainers to do almost more reviews than commits. I think it's a flaw of the current system that PR reviews and merges are not rewarded. If I remember correctly, you said, for instance, that for the TSC election, people who just do a merge are not even counted as contributors, which to me is completely wrong. So I'm all for exhibiting more of the contribution that is being made through PR reviews.

PR reviews, okay.

So, sorry, I have a frog in my throat. At a higher level, I guess what I want to do is ask Arnaud and the TSC members: where do you want to go? There was a ton of stuff in the project badging proposal. This meeting is an introduction of the TSC to the product team, so that I'm not the unreliable narrator feeding requirements back and forth. How do you want to work together? How can you work together? And where do you want this to go?

Yeah, for us... okay, David Boswell.

Hey, thanks. Sorry about that; Zoom's not letting me raise my hand because I'm a co-host. Would you be able to pull up the community health dashboard again? I just had an observation to share about that, and it builds on what Shubra said earlier. There really is a gold mine of data here, and I think that's amazing, and all of these data points and metrics are useful. But as far as community health goes, I just wanted to point out that many things we can measure don't actually tell us anything about health. So I want to talk through maybe another way to get a view into the data.

For example, the number of commits or the number of contributors: you can think of many scenarios where those numbers going up is actually a sign that the community is not healthy. Imagine one scenario where you have a really stable community with a bunch of contributors who make a number of commits over time, and then it becomes toxic, and all of a sudden the community starts churning through people: people show up, have a negative experience, and leave. In that scenario the number of contributors might appear to be going up, but that's certainly not a sign of community health. And you can come up with scenarios where the same sort of thing happens for the number of commits, the number of lines of code, whatever.

I think a better way to approach community health is to look at things that map back to different attributes and values of a healthy community. For example, instead of knowing the total number of contributors, I'd rather know the retention rate of those contributors. Do we have a community where people show up, have a good experience, and stay involved, or one where people come, have a bad experience, and leave? On that point, I wouldn't care about the absolute number of contributors, but I would care about the value of having a community where people want to stay involved, so a retention rate would seem more important. I can think of other values of a healthy community; for example, that it's welcoming, and I can think of a metric that maps to that.
For example, what is the average review time? When somebody shows up and makes an offer to contribute, are they getting a timely response? So I would prefer a dashboard that mapped five or six or seven metrics back to values that we say are part of a healthy community. That would be an interesting mapping exercise. We say, for example, that we're a global community: do we see contributors contributing from a truly global perspective, or are there barriers, for example around time zones, that make it hard for somebody in, say, India to commit? If a community is so locked into meetings, that could be very exclusionary to people whose time zones don't map to those meetings. So, just to throw that out there: I think that would be a much more useful lens for a community health dashboard, instead of a bunch of absolute numbers where it's hard to distinguish whether they're showing an indicator of health or not. Could we have a dashboard with, for example, a retention metric and a welcoming metric that reflected those values of a healthy community?

Okay, Arun.

So yeah, I was about to add on to that, plus a request, if there is an option. This data I currently see on the screen is too much to consume, right? And much of it may not be relevant; just because the number of commits has gone down in the last year, that may not mean the project is not healthy. So is there an option where we could build a dashboard and choose the metrics I am interested in tracking across all the projects? That way I can save that dashboard somewhere and say: here is the metric set I'd like to go over once or twice a week, at some regular interval, notice the changes at the dashboard level, and drill down into each of them if required.

I want to slightly expand on that, Arun. I don't know if you saw the bit about the TSC having quarterly reports for the projects, but Arun's requirement there would, I think, capture a lot of the work that goes into that reporting. If you look at our TSC quarterly reports: having a three-month chunk, for the last year, for Project Besu, where you could see a graph of the health metrics, would be very nice. Anyway, I see Arun has his hand up.

Yeah, I wanted to follow up on David's point. I think he raises some very interesting points, but to me it highlights the danger of over-interpreting the meaning of those numbers, because the fact that somebody came, made one contribution, and then disappeared does not necessarily mean they had a bad experience, right? They may not be a regular contributor: they use a tool, they are generally satisfied, they keep using it, and then they find a bug and say, oh, I'm going to fix this, and they come and contribute the fix to that one bug and then just go away. And it has nothing to do with having had a bad experience.
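A minimal sketch of the retention-rate metric David proposes, with the drive-by-contributor caveat above in mind. The quarter boundaries and the sets of contributor IDs are assumed inputs, and one might well exclude one-off fixers before comparing:

```python
def retention_rate(prev_quarter: set[str], this_quarter: set[str]) -> float:
    """Share of last quarter's contributors who were active again this quarter.
    Drive-by contributors who never intended to stay will pull this number down
    without implying anyone had a bad experience, so interpret with care."""
    if not prev_quarter:
        return 0.0
    return len(prev_quarter & this_quarter) / len(prev_quarter)

# e.g. retention_rate({"alice", "bob", "carol"}, {"alice", "dave"})  # -> 0.333...
```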
So I think we have to be very careful, and if anything, to me it suggests we should have a big disclaimer on all those dashboards and analytics tools about the potential flaws in interpreting the numbers and the graphics you get from them.

I think that's absolutely back to Shubra's point about the gold mine. We've gone from, what's the expression, famine to feast or whatever. We had a situation where there weren't enough metrics, and now we almost have too many. The challenge with these sorts of dashboards is that you have so many data points it's easy to draw wrong conclusions: you see a chart that's going up and you think that's great. I think that's my main takeaway from this conversation around a community health dashboard. This community health dashboard as I see it right now has all sorts of data points in it; whatever the number is, it scrolls down even further, 20-plus different data points, and it's very unclear what to infer from them. So that's something for us to look at: we want to make sure that any data point on here actually maps back to some value that genuinely indicates something about community health.

Yeah, absolutely. And that's why I said: the implementation you see is basically a CNCF SIG group's write-up of their specs on what these metrics mean. But that's the CNCF interpretation; it may not necessarily be Hyperledger's interpretation, based on how your community functions.

And that's a good point; all communities are different. Maybe this is what Arun was saying, but my vision would be a much tighter set: as I said, maybe even just five or six things that are very clearly correlated, that we can make some inferences from.

Yeah, no, absolutely. There was one more thing I wanted to touch on. Are you able to go back to one of the dashboards where we have those active-contributor boards, any project, not the health one? Yeah. So if you look at the community leaderboard tab at the top right. Yeah, this one. As we talk with different communities, we get these kinds of unique requirements, and this dashboard was used by the Networking and Edge projects for their voting. If you look at it, this is mostly the more active people; it's not all-time contributors, since the time range is the last 90 days. These are again sets of metrics, but they included things beyond code, like documentation, as part of the voting metrics they wanted to look at, because not everybody is writing code. There are people who are filing issues, people who are reviewing pull requests, people who are just writing docs. So that's what their voting criteria looked like. This is another area, in addition to that top-level project health, where, obviously, when we say active, the whole idea was: what's the retention rate, what's the drop-off, and does that indicate health or not? But another area I want to have you think about is voting; you definitely have voting, right?
What are the key things you want to look at for voting, for example? That would be another set that would be very useful for us.

Okay, I'm not sure who had their hand up next, so I'm going to go with Tracy because she's spoken the least on the meeting.

Thanks, Ry. So I wanted to go back to the health dashboard. Ry, you mentioned the quarterly reports that we do within Hyperledger, and if we look at the health dashboard, it is comparing projects to other projects. I'd like to see us being able to compare the same project across different time frames: so, Besu in the last quarter, the quarter before that, and the quarter before that, sort of thing, to see trends and what direction a project is heading in. For figuring out whether something is healthy or not, I think some of the trends might help us determine what's going on inside that project.

Got it. So generally, do you look at quarter over quarter, or are you also looking at year over year? What's the general practice?

I'm going to say that the TSC has primarily been interested in quarter over quarter, because there is a requirement that projects do quarterly reports. But I don't know if year over year is also interesting. Hart, you have the floor.

Yeah, so I wanted to push back a little against David's point that he only wanted to see a few metrics. I'd like to be able to see as many metrics as possible, and if we can have some condensation of the metrics, that might be nice too, but from my perspective, knowledge is power, and the more I can see, the better. And the examples people have given of metrics being misleading are really indicative of a mixed state of a project, right? People have been saying it's not healthy if there's a lot of contributor churn, if the project is adding contributors but old contributors are leaving as well. While I agree that's not healthy, the project is still doing something right to get more contributors; it's not as if people are just leaving and not being replaced at all. So to me this would be indicative of a project doing something right but also doing something very wrong. The fact that these statistics conflict, some saying the health is good and some saying the health is bad, might itself be a useful tool.

Okay, Daniel?

Sorry, I'm also multitasking at the same time. This probably goes back a couple of topics, maybe a little tangent off the current one, but when I was looking at the number of PRs and the number of lines committed that we are comparing there: within Besu we have people with drastically different workflows. We have some contributors who love to do the big bang, and other contributors who love to trickle it in across a whole week, committing whatever they did during the day. And similarly, we have some commits that are very deep and important but are very few lines, and some that have very, very verbose test cases, and those really jump up the numbers.
So when we look at some of those numbers, we do need to take them with a grain of salt: they might not reflect the value of the contribution, because the metrics are measuring things that are secondary to the purpose. There's that legendary Go fix from a few weeks ago, for security, that was literally an off-by-one error, and it took weeks to figure out and test to make sure it wasn't going to have negative impacts elsewhere. That would get buried under some of these "we added a contract to the genesis block and now we've got to change 65 files" changes.

So, we're down to the last 10 minutes. What I would like to come away from this meeting with is a path forward for the TSC to work more directly with the product team. So I'm going to ask Shubra or Vasu: how can we facilitate getting these requirements to you so that this is a much shorter loop?

Yeah, absolutely. Usually we have a support channel, but this isn't support; we want to do more collaboration here. What would be useful is if your team, the Hyperledger TSC members as well as other people in the community, can collaborate on that requirements doc you are writing on Confluence, and if we can have access to that Confluence page. Particularly Sachin, who is the product manager; he couldn't make it today, which is why Vasu, who runs engineering on my team, is here. If we can get that link, maybe we can use it as the reference, and again, it's not specific just to Besu; if we can turn it into a doc for Hyperledger overall, this would be one area we would like to come into, comment on, and get requirements from. And then, as we release net-new things, we'd like to get feedback as we're closing in on the product. Would it be possible for me, or Sachin, or Vasu to have a regular cadence where we can join? It doesn't need to be hijacking the TSC calls; we could join just the folks who are really interested, maybe a SIG or whatever, to discuss the updates. It could be once a month or once every two weeks, outside of just the document.

That's an interesting question, because I know this has come up multiple times: having a task force on community health. That would be something the TSC has the ability to do, and I don't remember who the last champion for that was. So, Dan, you have your hand up? Okay, Arun.

Hey, I will leave this question to everyone to answer collaboratively, but I agree: we can in fact invite, if not the TSC, then a separate SIG that we create, and I'll leave it to the TSC meeting where we collaborate and then discuss how to proceed. I would like to quickly bring up two other points and understand if they are a possibility through this tool. One requirement that would be helpful is project dependencies, to understand how... I'm not sure about other projects under the Linux Foundation, but within Hyperledger we may end up in a scenario where we have a few projects that are called library projects. Some of them could be used by others for end-application development.
Some of them could be used for, let's say, coming up with new DLTs, doing some enhancements, right? So if that could be captured as a metric as well; for example, along with Docker Hub, you may need to go and pull from, let's say, the Cargo registry the count of dependents from there. That would be one ask. The other ask: I know, Dan, you answered this question on the Confluence page regarding associating release versions with the badges, but I would like to put the question a little differently here. With releases, I know that within Hyperledger there is freedom for each project to define their own release cadence and do the dot releases, but what's lacking is that we really don't know how often a release is going out or what chunk of things is going into it. When we want to understand what's happening in a project, we are not even sure whether it's just a bug fix done after the last release that's going into the next one, or whether the project is really taking releases seriously. I'm not sure I put that well.

No, I think I'm getting it. Ry, is it possible for you to just unshare? I wanted to show something to Arun and the group. Again, I'm taking notes as we speak. Under the LFX toolkit, we have this security service where there are additional analytics. Let me show you, just to validate your use case. If I go to security and look at, let's say, Jenkins as an example: when you talk about dependencies, you can see here this is the Jenkins project, and if you look at the dependency view, again, this is more of a vulnerability view as well as all dependencies; it's essentially the application map of all the upstream that you are using. So, forgetting the vulnerability part, is this what you're referring to: all the upstream packages that the project is using?

Not quite that direction, but who else is using it. For example, if I were the owner of the Java XML stream, I would want to know who else is using it.

I see, got it: across different projects, at least the ones we can monitor, right?

That's right, yeah.

Okay, got it. And another thing, in terms of releases. We have this on the security side, and we can pull it into Insights, that's not a problem, where we have this language breakdown and we have these releases. But when we started looking at the releases, we were looking at every repo that makes up that project. Jenkins is insane, it has like 1,800 repos, but we look at the main repos, and then all those repos have specific branch releases. So when you talk about releases, are you looking at the project as a whole? Again, it depends on the project's practice, right: does the project cut releases holistically, or does every repo, or a group of repos, cut its own releases? Would you like to see that at the repo level or at the entire-project level? Do you see what I mean?

Yeah, I understand your question. I may need some time to go over it, because the reports we receive are at the project level.
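A minimal sketch of the reverse-dependency lookup Arun is asking for: inverting a project-to-dependencies map so a library project can see who depends on it. The project and package names are purely illustrative, and a real version would merge manifests from many registries (Docker Hub, Cargo, and so on):

```python
from collections import defaultdict

def reverse_dependencies(project_deps: dict[str, set[str]]) -> dict[str, set[str]]:
    """Invert project -> {packages it uses} into package -> {projects using it}."""
    rdeps: defaultdict[str, set[str]] = defaultdict(set)
    for project, deps in project_deps.items():
        for dep in deps:
            rdeps[dep].add(project)
    return dict(rdeps)

# e.g. reverse_dependencies({"besu": {"log4j"}, "fabric": {"grpc"}})["log4j"] == {"besu"}
```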
So if I might, I would propose that the TSC form a task force for the stats stuff and for community health, and have a more formal way to feed these requirements in. This isn't actually a TSC meeting, even though I think all of you are here. So, Arnaud?

Yeah, no, absolutely, I agree with you; I was going to follow up on that. I do think taking this to the whole TSC, even every other week or so, would be a bit much. But having a task force where people who are interested can engage more regularly, and then, when an actual milestone is achieved, we bring it back for the whole TSC to look at, I think is a better approach. So I'm all for that approach.

So perhaps that would be the first piece of business next year, or you could do it on the mailing list, or however you want to handle it. What I'm getting at is that I don't think it's a piece of business that needs to be handled in the next 90 seconds.

No, I agree.

Closing thoughts?

Yeah, I've gathered a bunch of requirements today, and on that Confluence page, Ry, if we can get the link, I'll have my engineers use it as the reference point. If the projects can put their requirements there in as much detail as possible, that would be great for us. We'll start treating that Confluence page as the requirements and go from there. And Ry, if you want to set up a dedicated Slack channel for Hyperledger and invite some of us from product and engineering, we can also be part of it.

Okay, I'll follow up on that. You mean like on the LF Slack?

Yeah, either way: we can establish a channel on the LF side or on the Hyperledger side, and you can invite us into it.

Gotcha. I don't want to prescribe a solution in the next couple of seconds.

Just to talk, yeah. Yeah, yeah.

Anyone else? Well, I apologize for the interruption there in the middle, and I will get these recordings posted to the TSC space as soon as they're done converting. Shubra?

Yeah, no, I think this was great. Thank you for your time and your feedback. Really looking forward to collaborating.

Well, likewise. Thank you.

Thanks, all. Goodbye. Bye.