Okay, excuse me. Welcome everybody to the May 11th Hyperledger Technical Oversight Committee call. As you may be aware, as you join the call, there are two things that we have to abide by. The first is the antitrust policy. There are obviously some competing companies on this call, so we need to make sure that we're not participating in any activities that would be prohibited under any of the antitrust and competition laws. The second one is our Code of Conduct. Obviously everybody is welcome on the call, but we do need to abide by our Code of Conduct, which is linked in the agenda. As far as announcements go, we have the standard Dev Weekly developer newsletter that goes out each Friday. If you have anything that you would like to reach the hundreds of Hyperledger developers who subscribe to that newsletter, please do leave a comment on the wiki page that is linked in the agenda. And I'm not sure who added the second one. Is that you, Ry? Yeah. So for meetings that affect the governance of the project, it has been mandated that we switch later this month to using PCC. Further details will be coming, but just be aware that by the end of May we will be switching over to PCC to manage the TOC meeting, as it is one of the meetings that is part of the governance of the Hyperledger Foundation and all of its projects. That's pretty much it. Ry, for those of us who do not know, what is PCC? That is the Project Control Center. Sorry. So Project Control Center is a thing that the Linux Foundation has developed. The benefit that we will get is that attendance will be recorded and recordings will automatically show up where they're supposed to, so I'm told. And you will get a personalized invite so that your attendance, as a member of the TOC, is automatically recorded. Furthermore, we will have a public link that will not have an individual invite but will be public. All right. Thanks, Ry. Any other announcements that anybody has that they would like to make? Okay. No other announcements. So then we have the quarterly reports. There are still the Indy, Aries, and AnonCreds reports out there that came in last week. It looks like a good number of us have reviewed those. But the Iroha report did come in this week, and I didn't see any specific comments coming back on that. I wanted to open it up to see if there are any questions or comments on the Iroha report. Yeah, Victor. So I've resolved the current questions, and I'm able to address any more if there are. It would be nice if the people who commented actually did a review. Thank you. All right. Thanks for that, Victor. And thanks for doing the updates as requested. Any comments or questions for the Iroha report? No. If you haven't had a chance to look at it yet, please do that. And if you have had a chance to look at it and asked for some changes, please take another look and make sure that the changes are as you requested. And also, if you haven't yet had a chance to look at the other reports that are still out there, please do that. All right. So the next one is the Hyperledger Sawtooth report. I did see, Arun, that you had reached out to them on the chat channels and got a response back that they are working on it and should be done by the end of this week. Is that correct, Arun? I haven't looked it up just now. I will follow up. Okay. Yeah. So there are at least some comments and work being done on that report, so we should expect to see it here shortly.
So hopefully we will have that on the agenda for people to review next week. Also next week, we do have the Bevel report coming due, so we will look forward to seeing that one as well. All right. So for discussion items today, we have two. We have a demo of the Insights version 3 project health dashboard, and we also have the task force discussion. Dave has done some great work to hopefully close that one out, so we'll take a look at what he's done and work through that. But I would like at this point to hand it off to the Insights team so that they can give us a demo of the v3 project health dashboard. Yeah. Thank you, Tracy. Hi, everyone. Good morning and good evening. This is Chaitan, and I'm one of the product managers at the Linux Foundation. I am the product manager for LFX Insights. If you're new to LFX Insights, it's basically a dashboard, a product that provides you the health of Linux Foundation open source projects. It has a variety of metrics from various data sources. What we have right now is Insights version 2, and we are building the next version, Insights version 3. So let me quickly share my screen. Yeah. I'm specifically here today to gather your feedback on the project health summary dashboard that we are building. Obviously the purpose of this conversation is to validate, to have your feedback on, the wireframe and what is intended to be displayed on LFX Insights. The problem that we are trying to solve with this wireframe is: how do I find out if my project is healthy? And the personas for whom we are aiming to solve this problem are project maintainers, project technical leads, OSPOs, and executive directors. Please note, these aren't final designs. What you see is a low-fidelity wireframe designed for validation, feedback, and subsequent iterations. I'll quickly go over some of the questions on which we were looking for your perspectives. I would really be grateful for your perspectives on: who on your team looks after project health? How do you monitor project health today? What, according to you, comprises the health of an open source project? What KPIs do you look at first for project health? When do you feel the necessity to find out about the health of your open source project? And what do you do next when you view the health of your open source project? So with these questions in mind, I'll quickly jump over to the wireframe. Again, a disclaimer: it's a very, very early version of a wireframe, specifically intended and designed to get your feedback. So that said, here I am. This is the landing page, or this is what we envision to be the landing page. Obviously, when you land here, you have the option to search for your open source project, and your recent searches would appear here. You have a whole lot of filters where you can filter the open source projects alphabetically, by technology sector, by the contributing companies, or by the foundations to which each of the projects belongs. But the most important things here are these cards. What you see here is, again, just a wireframe. So what you see here is a card that shows a project, in this case Kubernetes. It has a description of your project. It shows the various contributors.
The first high-level metrics that you look at are your contributors, commits, pull requests, and a COCOMO metric, which is essentially a calculation of how much it would relatively cost to build an open source project, based on the COCOMO model. You have these two tabs here, which show your project health. Project health comprises your velocity, productivity, and stability. Velocity is a set of about 16 metrics that we have; maybe that's not in scope for today's conversation. I would love to have a conversation with each of you in a one-on-one setting and show you exactly what we mean by velocity. But let's say, for the sake of this conversation, velocity is how fast code is being written in your project, how fast things are moving in your project. Productivity is nearly similar: it is a set of about 45 different criteria that say, okay, how much is being done in your open source community. And then stability is how stable your open source project is.
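As a point of reference for the COCOMO metric just mentioned: the basic "organic-mode" COCOMO model estimates development effort from source size alone. Here is a minimal sketch of that calculation, assuming the textbook basic-COCOMO coefficients; the monthly cost figure and the function name are illustrative, and the exact parameters Insights uses may differ:

```python
# Basic COCOMO ("organic" mode): effort and schedule estimated from code size.
# The coefficients are the standard textbook constants; monthly_cost is an
# illustrative figure, not the value used by LFX Insights.
def cocomo_estimate(kloc: float, monthly_cost: float = 10_000.0) -> dict:
    effort_pm = 2.4 * kloc ** 1.05        # effort in person-months
    schedule_m = 2.5 * effort_pm ** 0.38  # schedule in calendar months
    return {
        "person_months": round(effort_pm, 1),
        "calendar_months": round(schedule_m, 1),
        "estimated_cost_usd": round(effort_pm * monthly_cost),
    }

# For example, a project of roughly 500 KLOC:
print(cocomo_estimate(500))
```

Since the model is driven purely by lines of code, it gives a relative "cost to rebuild" figure rather than a measure of ongoing activity, which is why it sits alongside the contributor and commit counts on the card.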
So imagine you being the user landing on the LFX Insights page, where you see this card. You see, okay, project health. You also have this best practices score, which is essentially your OpenSSF best practices score, or maybe the CLOMonitor score as well. So my question here is: when you land on Insights, or maybe any other tool that you may be using, what do you look for when you look at the health of an open source project? Yes, maybe I'll start, and then we can see who else would like to raise their hands on this. We've actually done quite a bit of work on project health within the Hyperledger Foundation, specifically in the TOC. We started a while ago, I think when I was still with the Linux Foundation, where we developed a tool within our labs space to look at project reports and project health. And there's some documentation that we have in that lab about the sorts of things that we thought were interesting. And then last year, Jim led a project health task force, where we basically documented a number of different criteria for looking at project health and the sorts of things that we would be interested in taking a look at. So I think we'd be happy to share both of those pieces of information with you. But really, the TOC is definitely interested in making sure that we're on top of where projects are, and whether there's any sort of help that they're going to need in order to improve their project health, or anything that we just need to be aware of as we look at each of the projects. So Peter, you have your hand up. Yes, thank you, Tracy. Just a quick one for me: the dashboard with these overview metrics looks great. I just wanted to add that in the final version, ideally, you would be able to click through these scores, and then you would get a list of the metrics each one is comprised of. And then you would also get the specific formula behind those metrics, so that if you really wanted to, you could recalculate them yourself manually. No one's ever going to do that, but just for the sake of transparency about exactly how the metrics come to be, it's very important in my opinion. Got it. Yes, definitely, Peter. The intent is that this is the landing page, where you would be able to look at the most important metrics related to the health of your open source project, and when you click on one, it would take you to something like this. Again, this is just a wireframe that you're seeing here, which gives you much more depth on all the criteria and the calculations that go behind what you see on that initial page. So again, this may not be in scope for today's conversation, but I would love to have a one-on-one conversation with you, if that's okay, and I'll walk you through the entire set of wireframes that we have. Okay. Yeah. Cool. All right. And if I may ask one more follow-up question: how about the OpenSSF scorecard, right? What is a good score that you would like to look at there? Oh, okay. Is it the OpenSSF score or the CLOMonitor scores that you would be interested in seeing when you land on the Insights landing page? I would definitely love to have an OpenSSF score as well. And another really important one that I immediately look at is a score related to the number of open vulnerabilities. Maybe that's already baked into the OpenSSF score, so maybe we don't need a specific one, but security itself is, I think, a really good one to have right up front. Got it. Okay. And typically, at what intervals do you look at the health of your open source project? Is it monthly, quarterly? What compels you to look at the health of the open source project? Is it a notification? What does that process look like? Well, for me, usually it's the quarterly report that triggers it; that's when I pause and reflect on it and take a look. It's not more frequent than that, because as one of the maintainers of the project, I'm usually just super busy. And I kind of have a feel for how busy we are, and I kind of have a feel for the health based on that, just in terms of how many contributions we get, how busy I am with reviewing pull requests from contributors, how many bugs we have, how many times the CI has been broken, things like that. So outside of the quarterly reports, I usually just sort of have a feel for it, based on my own experience, because I work on maintaining Cacti 100% of the time. Okay. That's wonderful. Thank you. Thanks, Peter. So, Chaitan, I think maybe we will have more questions based on your explanation so far, on the metrics that we are seeing, and also on some of the expectations that we have been putting together. But I do have a couple of quick questions. My first question is about these quarterly reports. In the earlier version, we kind of struggled to archive them, or to get a sense of, here's the quarterly report, and then the view keeps changing underneath it. So one of the things that we want is for all the project teams to report this information in their quarterly report so that it can be collected and then seen by others at a later point in time as well. So it would help if there were an archival mechanism, or, let's say, a screen-capture kind of feature that would allow us to take this and put it in our reports. You mean you would like to have this downloaded, and maybe put it in a PowerPoint slide? Is that what you mean, for a quarterly review? I mean, sure. We follow a GitHub PR process for quarterly reviews, but yes, any means of attaching the score there. Sure, got it. Okay, sure. Yes, that's definitely being considered. We have the option, say, for example, here, to download these charts as a PNG or maybe as a CSV file.
Is that something that you're looking for, actually? Sure, maybe at the collective level, yes. Of course, yes, at the collective level as well. Yes, that's what I meant as well. Yeah. Okay, makes sense. So the other thing, I think, that you were asking about is related to how often we are monitoring these projects. There have been talks on this topic, and on how often we should monitor. We want to move towards an approach where, for the things which can be objectively measured, we have real-time visibility into them. And it's also for us to know if a particular project is continuously in a state where, let's say, some attention is needed, right? So we want to be very active in that space. So, create those alerts, and this could be through any means; we have not explored that yet. But we do have, let's say, the health metrics that we have been capturing, and of course we can share those on Discord after this meeting. Sure. For those objective metrics, we would like to have alerts if possible. Okay. Yeah, sure. Okay. And how have you historically been capturing these health metrics? What tools or alternatives have you been using? So, I know for the last three years, before moving to LFX Insights, the reports used to be written by the project maintainers, and that information was completely composed by the project maintainers themselves. They used to aggregate information from different sources and put it in the report. And after LFX Insights, the previous version that we have currently, that information was captured from the tool. So there are views that the project teams create, and then that particular view link goes in the quarterly report. Okay. Cool. And one follow-up question to that, sorry: what action do you take? When you, say, get an alert, what action would you like to take? Or when you see this report, typically, what are the common actions that you take? Right. I'm just thinking how diplomatically I can answer that, but I'll try my best. Or I'll just say it, right? For instance, we had one of the projects which we recently voted to End of Life. And the project team suggested that we could have done a better job by first moving the project to dormant. And of course, I agree; I see why the project team says that. And we could have done that better if we had real-time visibility, and if there was an indication or an alert constantly telling us, hey, there is something that needs your attention. That would have allowed us to ask the project maintainers, hey, probably dormancy is the best approach for now. And then, once the project was in the dormant state, we could have voted for it to be End of Life. So this prompt, this particular thing, could have been purely objective, in the sense that if we knew what kind of contributions were going on, how often PRs were raised, and how long it took for PRs to be merged, those could have alerted us to this. Okay. Sure. This is wonderful feedback. Thank you. Yeah. Okay. Stephen? For feedback on this landing page, the thing that I'm struggling with a bit is trying to deal with the total statistics that you've got, like contributors and commits, versus something like rolling numbers or trends.
So something I would consider would be a toggle that lets you see, on that first view, both the total for the project over its entire lifetime plus, say, a rolling last-quarter view, which would let someone coming in, or even the project itself, see at least a little bit of a trend. So that's one thing. Okay. And then the other piece: a bunch of us are up in Vancouver, Canada right now at the Open Source Summit. There have been a ton of sessions, or a few anyway that I've attended, on actions that maintainers can take, and they were pretty good. First, just guidance for maintainers, but also actions that can be taken in response to changes in metrics and things like that. So I think linking this to actions would be a valuable thing: templates for doing things, ideas for building up contributors, or whatever. That's static content, but making that content accessible from where the metrics are would be, I think, an interesting approach. Okay. And help me understand: you said that you would like to have a rolling view. One is the total view, where, let's say, for example, in this you see 35K contributors over the lifetime of the project, but you said you would love to see a quarterly view as well, right? And how would you use that quarterly data versus... So my thought is not, oh, that's cool, that project's really big. If I'm looking at, you know, the COBOL project, it's going to have numbers like this, because it's 50 years old. What I want to know is how much COBOL is being used in the last year. So that's what I mean. Being able to just quickly toggle, having a button near the contributors, commits, and PRs to say "past year", would be, I think, a useful one. Think of it the same way that, when you go look at an open source project, the first thing you do is look at how many forks there are; that's the absolute measure. And then you look at when the last commit was made. Oh, it was made two years ago? Okay, I don't want to look at that. So it's kind of those first two things you do when you look at a project. And then, for contributors themselves, links off to say, okay, how can you build up these numbers? That's a summary of what my comments were. Got it. Perfect. Sure. Thank you. And Dave. Hi. Could you click over to that best practices tab that you were on before? Yes. Yeah. So we currently have a best practices task force going on, so this is quite relevant today. Maybe we can do this offline, but I wanted to understand further the criteria for each of these four areas. Is this actually part of the OpenSSF scorecard? Or is this something else, like the documentation, legal, best practices, and security scores here? Yeah, sure. So the idea is we've mocked up two different variations here. One is the best practices score that we have from CLOMonitor, and the other, the criticality score that you see here, that's the OpenSSF score. CLOMonitor has a set of criteria, for example: documentation, licenses, best practices, security, and legal. And within that, they have specified what exactly is covered under documentation, what criteria you need to have satisfied for documentation. Likewise for licenses: whether it's an approved license, license scanning. Then you have certain best practices.
So these are the criteria that we are taking from CLOMonitor. But again, we have just mocked it up, which means that we want to know which of these the projects prefer, whether it's the CLOMonitor score or the OpenSSF score. Okay, thanks. Yeah, I can look at both of those offline and maybe give you some feedback later. Yes, sure. And I would love to connect with you as well. Yeah, thank you. And I see one chat comment: is contributor diversity a major factor in the stability category? Yes, definitely. It is one of the major factors in the stability category. For contributor diversity, basically, we are trying to calculate how many contributors make up 50% or more of your total contributions. Does that definition sound right? Is that what you were looking for? Yes, absolutely. Thanks for explaining. I do sort of like a min-entropy-based definition of contributor diversity and stability, and that sounds like what you're going for. Yes, let me find it quickly. Yeah, I think I don't have it here right now. But what I'll do is I'll set up a conversation with you, and I'll walk you through the exact designs for stability. Actually, I think I have it here. Let me take a quick look. Oh, cool. I have it here. Yeah, your contributor dependency. So it says, okay, six contributors are responsible for over 50% of your contributions, and the remaining 264 contributors are doing the other half of the contributions. And can you tie these contributors to organizations, like we can in the existing Insights? That was exactly what I was going to ask. Okay. Yes, I think we can, but I would perhaps not like to confirm it on this call. That's something on which I'll have to take feedback; I'll have to get back to you during the conversation, actually. Okay, thanks. As Dave suggests here, if we have six contributors that are all from the same company responsible for 50% of code contributions, that's potentially more dangerous than six contributors that work for six separate companies. Okay. Sounds good. Yep. Sure. Yeah. Thanks for the feedback. That's very nice. And on that particular one, I think with our project reports, what we typically saw was basically that 20% of contributors contribute 80% of the code, which is that 80-20 rule. So I don't know if 50% is the right number there; just keep that in mind as you're working on that. Got it. Okay. Sure, Tracy. Actually, maybe not in the first version, but eventually we have a plan where we would allow users to adjust the weights, or we've been discussing it, I would say. You could see that in future versions, where you could adjust the weights that you want to apply to any metric, or how you calculate the metric. So maybe that could be a possibility in the future. Okay. Great.
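To make the contributor-dependency calculation discussed above concrete, here is a small sketch, assuming per-contributor contribution counts are available; the function names and the sample data are illustrative, not the Insights implementation. It returns the smallest number of top contributors whose combined share reaches the 50% threshold, alongside a min-entropy diversity figure of the kind mentioned in the chat:

```python
import math

def contributor_dependency(contributions: dict[str, int], threshold: float = 0.5) -> int:
    """Smallest number of top contributors whose combined share >= threshold."""
    total = sum(contributions.values())
    running = 0
    for i, count in enumerate(sorted(contributions.values(), reverse=True), start=1):
        running += count
        if running / total >= threshold:
            return i
    return len(contributions)

def min_entropy(contributions: dict[str, int]) -> float:
    """Min-entropy of the contribution distribution: -log2(largest share).
    0 bits means one contributor does everything; higher means more diverse."""
    total = sum(contributions.values())
    return -math.log2(max(contributions.values()) / total)

# Hypothetical data: five contributors and their commit counts.
sample = {"alice": 400, "bob": 250, "carol": 150, "dan": 120, "erin": 80}
print(contributor_dependency(sample))  # -> 2 (alice + bob reach 65% >= 50%)
print(round(min_entropy(sample), 2))   # -> 1.32 bits
```

Grouping the same counts by employer rather than by individual would give the organization-level view raised in the discussion, where six contributors from one company are riskier than six from six companies.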
Jim? Thanks, Tracy. Sorry, I apologize if this has been asked before. Do you take into account what we called, when we were running the health data task force, the usefulness of the code? A project can be really active, but if it's not being adopted, then that should be an important indicator. For example, downloads of the release binaries, or pulls from the Docker registry: are those accounted for anywhere in here? Not on this landing page as such. Thank you, Jim, for the feedback. Not on this landing page, but that is something that we were considering on the stability page, which has your downloads here, based on the pulls or the downloads. In other words, it's not factored into the overall project health at this moment. Yes. But yes, again, I was keen to gather the feedback, and if that's an important metric, then definitely that's something we would love to consider. Yes, yeah. Okay, great. So that is what we're trying to look at. Right. Yeah, as Tracy mentioned, we built a matrix of different categories and different data points that can be collected for project health as a result of the project health task force, and we're really happy to share that with you. In fact, last year we shared it with the team; it might have been someone else, but yeah, we can share that with you. Yeah, that would be wonderful, Jim. Yeah, if you could share that report with me. Thank you. Thanks. Yeah, I was just wondering, and maybe the other folks on the TOC can comment: for contributor diversity, should we also take into account the people who are reviewing and approving PRs? I mean, do we need more diversity there as well? We want to know if the same people are reviewing most of the PRs, right? Okay. Yes, I think the intent there was to have the ability to select which metrics you're looking at for contributor dependency. So definitely, yes: you would have it based on pull requests, or your commits, or issues, or contributions, which is an aggregation of commits, issues, and pull requests. Yeah. No, I agree that code contributions are the main thing to watch out for, but I just wondered if PR approvals should also be in there somewhere. Okay. Sure. I'll be a little mindful of your time. I would thank you for your feedback, first of all, and thank you for having me on this call and giving me this opportunity. If I could set up a one-on-one call with you, if you could just put a plus one in the chat, I will make sure that I reach out to you and set up a conversation at a date and time that's most convenient for you. Yeah, I think that's all that I have. Chaitan, I think the chat is disabled on Zoom, so maybe we all give a thumbs up as an emoji, or raise our hands. Yes. One of those. Sure. That would be wonderful. If we could do that, that would be wonderful. Thumbs up on this one, and I'll reach out to you. Yeah. Sure. All right, folks. Thank you very much. Thank you, Tracy, Peter, Arun, Stephen, Jim, Andy, Arama, and everyone for your feedback. Really appreciate it. Thank you for having me here. All right. Thank you so much for bringing this to us. As you can tell, we're obviously very passionate about project health in Hyperledger, so I think you've got a good set of people who can definitely help you, with the work that we've done, in moving this forward. Perfect. Thank you. Yep. All right, Dave, I think it's off to you for project best practices. Excellent. Okay. Let me share. Okay. So I think we're close to closing this one out, as far as what I was hoping to achieve for this task force. Like I said at the beginning, this was never meant to be a boil-the-ocean thing. It was meant to be more of a survey of existing practices, and then identifying gaps and trying to plug some of those gaps where we could, and deferring to other task forces where we couldn't.
So I do have two pull requests out there now: one around the project best practices, and one around the GitHub contribution guide. The latter basically came from Fabric; it's what we pointed people to, and we realized that it was more universally applicable to all the projects. I think Peter has kind of risen to our subject-matter-expert level for Git and GitHub, so I would definitely like Peter's review of this one, and anybody else's, of course. So like I said, there are two pull requests out there. I don't think we need to actually go into the content here in the meeting, because we've already done that for the most part. It's mostly just minor edits on what we've gone over in the meetings. I did have a few specific questions on some of the open items. Let me go to, actually, I can do it from the conversation page. So I went ahead and tagged a few of you who I thought could help. One was about the Community Specification License. I think, Hart, you were going to talk to legal in terms of what we want to say about using this. I don't know if you've heard anything back from legal yet, or if you still need to get with them. Sorry, I still need to talk to legal about this, but I believe that should be fine. Okay, so I've got this out there as a reminder to you. Ry, I tried to capture the branch protection rules, the things that projects should configure in those settings in GitHub. So you might want to double check and make sure I captured all the ones that you thought were important. Like I said, we don't have to do that right now; you can go off and check that afterwards.
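For reference, the kind of branch protection settings being discussed can also be applied programmatically through GitHub's REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). Here is a minimal sketch, assuming a token with repository admin rights in the GITHUB_TOKEN environment variable; the specific rule values below are illustrative placeholders, not the task force's recommended set, which lives in the pull request:

```python
# Sketch: apply common branch protection rules via the GitHub REST API.
# The payload values are illustrative; see the task force PR for the
# actual recommended configuration.
import os
import requests

def protect_branch(owner: str, repo: str, branch: str = "main") -> None:
    url = f"https://api.github.com/repos/{owner}/{repo}/branches/{branch}/protection"
    payload = {
        # Require at least one approving PR review before merging.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        # Require status checks (CI) to pass; branch must be up to date.
        "required_status_checks": {"strict": True, "contexts": []},
        # Apply the rules to administrators as well.
        "enforce_admins": True,
        # No additional push restrictions.
        "restrictions": None,
    }
    resp = requests.put(
        url,
        json=payload,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()

# Usage (hypothetical repository):
# protect_branch("hyperledger", "some-project")
```

Scripting these settings makes it easy to audit that every repository in an organization has the same protections, rather than checking each one by hand in the GitHub UI.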
Let's see, Peter. So I think it was you who was commenting about preserving the commit hash. Was that correct? Was that you, Peter? Yeah. I think this one will probably fit better in the GitHub contribution guide. I didn't know exactly how to work it into the process. So if you could take a look at this and think about where it makes the most sense, then we can switch it over to the contribution guide. Okay, I can go through it and find a place, a section to put it, and then I can add the paragraph to explain it. Yeah, I didn't know if it applied when you're initially making the commit or when you're updating a commit, so you can think about the best placement for that. Okay. The only other question that was out there was around the use of GitHub Packages for Docker image storage. We were a little bit concerned about the billing for that. I think what we had said last time was that GitHub was not enforcing the limits yet, but if they do, we'd be in trouble. So I put this one to Ry. I don't know if we have any more insight into whether this might become an issue if we start hosting multiple images there, especially if we start hosting pre-release images like nightly builds. So I talked today, or yesterday, with a JFrog competitor, and they had very nice things to say about their ability to do this. So this may end up being something that gets published later; I'm going to talk to them. But let's just not worry about that right now. Okay, we can put that on the back burner. I mean, in Fabric, we do use JFrog for our nightly images, and that's worked out pretty well. We were hoping to have something more integrated into GitHub. Yeah, me too. The people that I talked to promised all kinds of great integration. That's a research project for me. I would just keep using GitHub to store the Docker images that you're using for CI and such, because of locality, and the JFrog question will be TBD. Okay. All right, so that's it. I think, since it's now in a pull request, you can all go off on your own time and see if you've got any final words of wisdom to add to the best practices or to the contribution guide. But are there any other comments right now? You might want to pay particular attention to the introductions of both, to make sure I've positioned them correctly. Like I say, the first is a survey of existing guidelines and practices, with deeper dives into a few other areas, so the first few sections are pointers to existing content. And same with the contribution guide. I want to make sure I couch it the right way, saying there are plenty of guides out there, but this one includes the full end-to-end set of contribution steps, including the generally accepted practices from Hyperledger projects. So it would be good to make sure we're in alignment; I don't want to say anything that's not a good practice on another project. It would be good to see some insights from other projects as well. Peter? As specific feedback, just that overall I think it's great, and thank you for doing it. Sure thing. This was one of the things, as a maintainer, that I wanted to get out of the TOC, and when I became a TOC member, it became an obvious thing that I could contribute back. So I'm looking forward to all of your final reviews on these. All right, thanks for that, Dave. I appreciate all the hard work that you've put into this and into getting us to this stage. I think it's looking great. Any other topics that anybody would like to bring up or discuss on the meeting today? Victor? Thank you. I actually have a couple of topics. First of all, we have a request for comments regarding the Iroha project. It's called "Promote Iroha runtime validator into Iroha runtime executor." I can post the link in Zoom if somebody could give me the rights to do that. It would be nice, because I'd like to do that now, and I think discussing it at some point would be useful; any comments will be welcome on that part. Yeah, Victor, on that particular topic we do use Discord. We have a TOC channel, and specifically a thread for this particular meeting. So if you wouldn't mind putting it in Discord, then people can take a look there. One moment. So let me find it. I will do that in a moment. One second. I did just turn on your ability to chat with hosts and co-hosts, so you can post in... But which category is the TOC, if you don't mind? It's at the very top. It says Technical Oversight Committee. Thank you very much. I will post it right now. This is the first one, and any comments on this topic are welcome, because I think it's going to influence a lot of how Iroha behaves. There is yet another topic. I find it quite important, and it is related to the mentorship program. I have sent a letter to Min Yu, and I would like to actually discuss this here, if possible, because I think the process could be improved significantly in terms of UX. First of all, most of the mentees contact us directly on our ticket instead of going through the platform. So I think we could somehow reorganize the rules and the interface so they know how to proceed correctly. Besides that, the LFX platform has some limits, because we don't really see the information about our mentees.
For example, there is one person who will be working on the DSL project, and I made sure that this person applied correctly. He told me everything went well, and right after, when I checked the LFX platform, I didn't see him as an applicant. This does not allow our team to know how to proceed, because we don't see who applied and when. I feel like it's quite important to see these details before everything goes ahead. I know these aren't the accepted applicants yet, but I feel like it's going to help a lot. So far, does this sound reasonable? So, I think both of those sound reasonable. However, I don't think this is the group that can make any changes to the LFX platform for mentorship, so I think we need to find the right contact for you to suggest these changes. So, Ry, I don't know, is there somebody at the Linux Foundation, or, maybe, Chaitan, do you know who we should contact about the mentorship piece? Yes, certainly, I can contact the right people, because I am in regular touch with the team. In fact, I used to work on Mentorship before and have just recently transitioned to this team, so I can speak to them. So we need to change the descriptions of the mentorship program. Is that right? Not only the description; we actually need to change the LFX code a bit in terms of UX, because right now we don't see who actually applied and who did not, and this makes it unclear whom we need to guide, and whom we don't need to guide, to even apply. We don't know how many people applied for each project in the interval between when applications open and when Hyperledger actually picks someone, so this is a bit concerning. Got it. I'm not able to make a decision on the roadmap for LFX Mentorship, but Victor, what I could do is set up a call with you, and the team could definitely give you some kind of a demo of what is planned, or what is currently in the feature set for LFX Mentorship. I will copy the contact both on Discord and here, for your convenience. Please arrange a meeting. Yes, I will reach out to you, Victor. I'll reach out to you via email. Alexander, do you have any other comments regarding this situation, or not really? Other than the mishap with the email routing, which I've mentioned to Ry personally, and Sean, nothing else. We do have some concerns related to maintainer diversity, but we realistically don't see any way of improving it right now, and it shouldn't be a concern given that the project is actively developed. We have no plans of abandoning it, and yeah, not much else from my side. Anything else? Any other topics that anybody would like to bring up today? If not, then I will let you go, and we will talk again next week. Next week, I think we're back to the security task force; I think we're trying to close that one out. Is that correct, Arun? You wanted one more session to talk through the security task force. Yes, Tracy. That won't take long, but I request everyone to review the doc, and then we could make a motion on the doc by next week. Okay, sounds great. And then we'll go back to Bobby the following week, and then I think we've got two new task forces that you talked about last week for us to start, so we'll make sure to get those on the agenda as we continue looking forward. So we will see you again next week. Have a great week. Thank you, Tracy. Thank you. Bye.