So welcome to the April 21st Hyperledger Technical Steering Committee call. As you are all aware, there are a few things we must abide by: the first one is the antitrust policy notice that is currently displayed on the screen, and the second is our code of conduct. So, for our agenda today we have our standard announcements. The Dev Weekly developer newsletter goes out each Friday. If you'd like to have something added to that newsletter, please leave a comment on the wiki page that is linked in our agenda. The second announcement is that the Hyperledger Global Forum is still accepting CFPs for just over a week more. And I did see yesterday, I think it was yesterday, an announcement come out that there is also an extra day of workshops. So if you'd like to submit a CFP for a workshop, I believe those are now available on the CFP site as well. And then, Hart, is the call for program committee volunteers still open? Sorry, clicking the unmute button. Yes, we are looking for a couple more volunteers. So particularly if anyone on the TSC is interested in joining, several people already have, please feel free to reach out. All right, thanks, Hart. Any other announcements that anybody has to make today? All right, seeing no hands, seeing nobody coming off mute, I will take that as a no. So we've got the quarterly reports. I did see the Hyperledger Fabric report come in either last night or this morning, I don't know which, but it was in my inbox this morning. So Dave, thank you for that. I've seen a few people have already had a chance to look at it, but we'll make sure that's kept on the agenda for next week so everybody has the opportunity to review the Fabric report. Next week the Hyperledger Sawtooth report is due, and of course we always have the calendar out there for people to take a look at. So with that, I do have one item that I'd like to discuss from the TSC business this week, around the TSC responsibilities issue that we reviewed last week in our TSC meeting. I believe we've got about six or seven checkmarks already in the issue itself as far as approvals. I did see one change, Dave, that I think you suggested, just adding an extra S to one of the words, but I think that's the only specific change that I've seen requested. We're thinking maybe we could have a vote on this to see if we would like to formally approve it in the TSC meeting. But if anybody has any sort of discussion that we should have before that, or if we don't think we're ready yet, now is the time to speak up. I guess there is one more item to be added, point number nine, and that's not added yet; in one of the comments I saw that. So do we want to explicitly add subscribing to the TSC list? I need to put that in, along with the missing S that Dave spotted. As I pointed out, it has to be in the second list, so it wouldn't be number nine, because that's the charter's list, but that's a detail. Yeah, makes sense. Okay, any other comments or concerns? I guess I just thought of one, along the same lines: what about joining the GitHub org, is that a requirement? I guess if they want to be informed of the issues that are coming up on the hyperledger/TSC repo, then that would probably be a requirement as well. There are a couple who aren't, but we can address that later, I think. Okay. Yeah, I mean, as I said before, we can always expand on it and iterate.
It's not the end of the world. That's right. Yeah, if we come up with other things that we want to add to this after the fact, I suspect these kinds of small things are not controversial, so we can just casually agree through pull requests on small additions. Completely agree. Okay, any other questions or comments? Any objections to us voting, I guess, at this point? I mean, we're talking about approving this pull request with the two additional changes, which are adding the requirement about being on the TSC mailing list and the GitHub org, and adding the missing S. It's funny, after so many people looking at it, we still missed it, and it's like, oh wait. But it's good. Yeah. So I'm happy to approve this, of course. All right, so shall we take that as a motion? Do we have a second? Second. All right, thanks, Nathan. If you want to lead us through the vote, it doesn't have to be a roll call vote, but just the usual. Yeah. All those who wish to abstain, say abstain. All those who wish for the motion not to pass, please say nay. And all in favor say aye. Aye aye, captain. Motion passes. Thank you. Thank you. Thanks, Ry, and thanks for putting that together for us and for all of the input that we had from the TSC on that particular issue. So as far as I know we don't have any additional TSC business this week, but before we get to the task force discussion, is there any TSC business that we should be talking about that I did not include in our agenda? I did put this issue on, and then Dano filed a pull request. I would just like the TSC to, you know, read this over and add comments on Dano's pull request. We don't have to discuss it in this meeting. Okay, thanks for that. I will. I think we'll have a bit of a review. Is that pull request 33? I think it was. Yeah, 33. All right. So yeah, pull request 33 on the TSC repo. We'll make sure we include that in the discussion items for next week, but feel free to have a look at it before we get to next week. Any other TSC business that anybody would like to bring up at this point? All right, seeing nobody coming off mute or raising their hands, I will take that as a no. Jim, I guess it is up to you to walk us through the task force discussion for project health dashboards. Sounds good. Thanks, Tracy. Let me start sharing. Okay. So I created the task force page, and there will be a page for today's minutes. But I guess just as a refresher for everyone, for the background of this thread: we were talking about project reports early in January, and there was a follow-up action to think about what kind of data we can pull together to generate some sort of report for every project, whenever the TSC needs to make decisions on the health status of a project. This could be a status change on the project from incubation to active or vice versa, or for labs to go into incubation. So I think that if we have the right types of data, then a lot of decisions can be made much easier and faster. There are already pretty good sources of such data, such as Insights by the Linux Foundation, and there are also previous efforts, in particular made by Tracy and others, to generate reports based on GitHub data. So this effort is to kind of pull together all the thinking and take it to the finish line, basically, so we can have these be hosted somewhere.
And this is something that can be embedded in every report by default, so every report will have this automatically generated from all the data sources. That's the thinking behind this effort. And there's a running issue in GitHub that has got contributions from many people; thanks to everybody who's participated. The result of the discussion we've got so far is captured here on this page, but I want to start with the high-level goals, so we can talk about them and agree, and then we can get into the details. The goals are to present a rich set of data about a project, which can be both top-level projects and labs, and to enable data-driven decision making at the TSC. For the specific tasks to be completed: number one, we need to define the types of data that make sense. This is what the participants on this effort have been focusing on: the types of data that would help with the decision making. Is it on code? Is it on people? Is it on activity? That sort of thing. And then we should decide how to collect that data: given the data we believe are useful, we need to figure out how to collect them. Based on the discussions we've got so far, some of them will be available programmatically, but for others we may have to resort to manual collection. For the list of deliverables for this task force, I think we should document the decisions in the GitHub issue, or maybe there's another, better format to document what we decided on. And we should discuss with the Linux Foundation Insights team about enhancing the dashboard, the gathering of the data, and generating reports from the data. Such capabilities are missing today; there's a lot in there, but some key capabilities are still missing. And if we have data that has to be collected manually, then we need to publish some sort of recommended practices for how we can collect them. I'd like to think we could accomplish everything in three months. If that turns out to be too optimistic, probably six months at most, and we should time-box it. So, let's look at what we've got so far. This is just capturing the input from the previous discussions in the issue. I can go through these items, but everybody, please feel free to jump in on any of these points, and we can discuss more. We'll put this in edit mode. Okay. So the first category is community. I think it's a no-brainer that a key to the success of any project is a diverse, active community that cares about the code base. So we've got five subcategories underneath it. Growth: is the community growing? Diversity: is it looked after by a single organization, or does it have a diverse source of contributions, which tends to make the project more viable in the long run? Retention is for contributors: do they feel their input is being respected and valued, and are their contributions getting to the main branch and hence getting to customers? So they're delivering value. I'll skip maturity to the end, because I was having trouble articulating this one, so maybe someone else can help with it. Responsiveness is kind of tied to all the others: how responsive is the current contributor group to potential contributors? Maybe they reached out on Discord or through a PR. Are they given attention? Is due diligence being given to their contributions?
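As a rough illustration of the "available programmatically" point, here is a minimal sketch, not the task force's agreed metric, of how one responsiveness number could be pulled: the median hours until the first comment on a sample of recent issues, via the public GitHub REST API. The owner/repo names are placeholders, and a real collector would add authentication, pagination, and pull-request review handling.

```python
# Minimal sketch: median time-to-first-response on recent issues.
# OWNER/REPO are hypothetical; this is an illustration, not an agreed metric.
import statistics
from datetime import datetime

import requests

OWNER, REPO = "hyperledger", "example-project"  # placeholder repo name
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"


def parse_ts(ts):
    # GitHub timestamps look like 2021-04-21T15:30:00Z
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")


def median_first_response_hours(sample=30):
    issues = requests.get(
        f"{BASE}/issues", params={"state": "all", "per_page": sample}
    ).json()
    delays = []
    for issue in issues:
        # The issues endpoint also returns pull requests; skip them here,
        # along with issues that never got a comment.
        if "pull_request" in issue or issue["comments"] == 0:
            continue
        first = requests.get(issue["comments_url"], params={"per_page": 1}).json()
        if first:
            delta = parse_ts(first[0]["created_at"]) - parse_ts(issue["created_at"])
            delays.append(delta.total_seconds() / 3600)
    return statistics.median(delays) if delays else None


if __name__ == "__main__":
    print("Median hours to first response:", median_first_response_hours())
```

The same shape of query would work for pull requests; whether to look at first comment, first review, or time to merge is exactly the kind of detail the task force would still need to decide.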
So, maturity. I feel like this came from David; I don't know if David is on. Yeah, I'm here, I can speak to this. Yeah, go ahead. So basically, my thinking on maturity was that it gives context to the other stats there. For example, if you're a brand-new project that has only been around for a month or two, your stats are very likely going to look different from a project that's been around for several years, and the community is going to look different, right? For example, that new project probably won't have the same sort of organizational diversity. So it helps give context, right? Something might be a red flag for a more mature project; for example, if you didn't have organizational diversity after three years, that might be a red flag, whereas if you don't have organizational diversity and you just started, that's not a flag, right? It's natural for a new project that just came from one organization, for example, to not have that diversity yet. So I think it's just a way to help set context for those other stats. And I meant maturity here in the sense that you could measure it simply by age, for example, or by the stage; you know, an incubated project is going to be less mature in this sense than a graduated project. So that's all I meant by that. Okay, gotcha. That makes sense. So I guess this means maturity is more of a qualifier rather than a category by itself. Well, Jim, I would add specifically maybe a couple of bullets here, one around when the first code commit was, and then maybe another around frequency of releases. Yeah. Okay. Because I think a more mature project is, well, if you look at Fabric, right, I think they have quarterly releases. That's a pretty mature project. They've gotten to a place where they can plan for releases every three months and successfully do that, versus, you know, maybe a newer project hasn't quite figured out when they should do releases, or how often, or how they manage that whole process. So that's the sort of thing that I think about when I say frequency of releases. Okay, gotcha. So whether they are able to stick to the published release schedule reflects the maturity. Yeah. Yeah, that definitely makes sense. When you are still working on the core foundation of the architecture, you could easily step into unforeseen complexities, and the previously published schedule will be blown out of the water. If it's a mature code base and it's just adding enhancements, then it's much easier to stick to the schedule. Yeah, that makes sense. Any other comments on these aspects of community itself? Are we missing any important aspects, as far as the health indicators of the community itself? Okay. Yeah, we can always revisit this later, so I'll continue. The code aspect. That's obviously another important dimension for a project: the code itself. So far we've got two categories. One we call usefulness, which is how the general community is reacting to this project's functionality. It can be really cool and have a lot of code in there, but if people are not using it, that indicates it's either way ahead of its time or maybe there's a better solution somewhere else. So usefulness is: is this the right solution for the right problem? Is the problem that's being addressed a real problem?
How many people are struggling with it? So that's the usefulness aspect. Notice that we didn't really capture some sort of popularity aspect for code: how many lines of code are in the project, how actively code is being pushed. I don't think that's necessarily a good indicator of the health status of a project, because you can be working on something, you know, just among a few people who push a ton of code, and there are 10 PRs every day; but if you are the only ones who care about this code base for a long time, it's not very useful to the general community. Okay. Jim, on that point, I do think the commit rate is important, and how it changes over time for a project. I don't think that's captured anywhere here. Commit rate: how many lines of code are being pushed in, or number of commits per month, or something like that. And then you can look at that over time. Like in the Fabric reports, I always put a commit count for the quarter and then compare it to the previous year, so we can kind of get a feel for whether it is less active or more active. Gotcha. It feels like a different dimension, because for Fabric the usefulness is beyond question. With that established first, I think things like commit rates are a useful indicator, right? Yeah, I agree. It's probably not under the usefulness category, but I think somewhere we should capture it. Yeah, we need some placeholder; we'll think of something to call this. It could be like base metrics or something, because it's kind of a fundamental metric. Yeah. Well, maybe to say something more about that: production readiness and usefulness are really trying to measure the growth of the project, whereas this metric seems to be monitoring for when a project might cool down, or when a project might be starting to become less interesting for whatever reason. So there are probably a few other statistics we could put in here that might be a little bit different. But I think that's a good point. Yeah, I think that's a good point. And Jim, Hart has his hand up; I don't know if you're able to see that. Sorry, yeah, I wasn't able to. Okay, I'll make sure to keep track of that. Yeah, thank you, I appreciate it. Yeah. Hart, go ahead. Hey, Jim. So I like this. One thing that I don't see mentioned here that I think is important, and I don't know where we want to put it, is documentation. I don't know whether that goes under code, community, or what, but I think it's an important thing, and it should be included in this document. I agree. How easy or difficult it is for people to get their hands on it, that's pretty critical. I guess this would be more of a manual or subjective measurement, right, the quality of the docs? I guess we're mainly referring to the quality of the docs, right? Well, sure, but just the existence of documentation too, on various different things, right? Okay, so yeah, both. Yeah. Okay, that makes sense. And Bobby has her hand up as well. I'd like to help you out with that piece, because we have a task force in the learning materials development working group right now to create some kind of badging that will fit into the existing structure. So we'll talk later, but I would like to be a part of that piece, because our task force will be reporting on that shortly. I appreciate that. So we can talk in more detail about how to wrap these aspects into badging criteria and have them captured in a more concise manner. That's great.
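Going back to the commit-rate point above, here is a rough sketch of how a commits-per-quarter number like the one described for the Fabric reports could be pulled from the GitHub REST API. The repo name, dates, and token handling are illustrative assumptions; a real report generator would need error handling and authentication to avoid rate limits.

```python
# Rough sketch: count commits in a quarter and compare to the same quarter a
# year earlier. Repo and dates are examples only.
import requests

OWNER, REPO = "hyperledger", "fabric"  # example repo; any project would work
COMMITS_URL = f"https://api.github.com/repos/{OWNER}/{REPO}/commits"


def commit_count(since, until, token=None):
    """Count default-branch commits between two ISO-8601 timestamps."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    count, page = 0, 1
    while True:
        resp = requests.get(
            COMMITS_URL,
            params={"since": since, "until": until, "per_page": 100, "page": page},
            headers=headers,
        )
        resp.raise_for_status()
        batch = resp.json()
        count += len(batch)
        if len(batch) < 100:  # last page reached
            return count
        page += 1


this_q = commit_count("2021-01-01T00:00:00Z", "2021-04-01T00:00:00Z")
last_q = commit_count("2020-01-01T00:00:00Z", "2020-04-01T00:00:00Z")
print(f"Commits this quarter: {this_q} (same quarter last year: {last_q})")
```

Lines-of-code deltas would need a different endpoint or a local clone, so commit counts are the simpler number to automate first.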
So we haven't talked about this part. I believe this was brought up by Hart during the discussion: production readiness. It's kind of related to usefulness, which would naturally lead to this being used in production and the team working on making it production-ready. So obvious indicators include: has it had a 1.0 release? What's the test coverage? Are there performance and reliability tests? And I guess documentation is somewhat captured here. Yeah. This would apply more to the top-level projects, but even for labs, good documentation would be a pretty big factor to consider when a lab is brought up for a proposal to become incubation. Okay. So overall, do we feel pretty good about the code aspect? Anything else we should consider? So, Jim, I don't know what the best category to put this in is, but you might consider something that reflects the innovation around the project. For example, the amount of research citations and research publications around the area and the technology that the project has been developing and producing. So it's not directly the code, but, you know, we are working in the blockchain area, where everything is distributed systems, cryptography, consensus, and there's supposed to be quite a lot of research work around this effort. So I think that this indicator might reflect the innovation around the project and the direction the project is heading towards. So do you think it makes sense that it falls under usefulness? Maybe this is more of a research effort, so even though it's not being picked up by regular users, it generates research publications and it's being cited in other people's work. It feels like this belongs under usefulness. It might reflect usefulness, but I think it might be worth devoting a completely separate section that stands for innovation, because, you know, a healthy project, in my opinion, has to innovate once in a while. And if we are able to capture the level of innovation brought by the project, that might be good. Yeah, I think this is a good topic. We can measure it reasonably well by looking at ePrint or arXiv or the other research publication websites. This measures academic interest in something, and it will let us know what the people who are interested in the most cutting-edge technology that we have are following. So this is definitely something; it's something I have sort of done myself, actually, for a bunch of these projects. And I think it's a good metric to measure, as Artem put it, sort of how cutting-edge something is. Yeah, okay, that makes sense. I'm referring to arXiv. Yeah, that's what I was trying to spell. No worries, I only managed three letters of that. Nathan has his hand up as well. Yeah, go ahead. So the classic quote is: you manage what you measure. And so I like how we've kind of separated some categories here of the different numbers and what we think they're useful for, because one of the dangers here is, if we put the wrong numbers in front of the wrong audiences, they'll try to optimize for things that might not be healthy.
For example, it would probably be a bad thing if they tried to maximize their ePrint or arXiv numbers by themselves, or if they tried to increase a number at the expense of the stability of their community. So I wonder if, once we finish this list of all the things we might want to look at, it would be good for us to split this kind of dashboard of statistics into more than one category, because some of these things are things we would want to put in front of maintainers and say, you should optimize for these, but some of them are not, because they could lead to optimizing the wrong things for sustainability. That makes sense. You're saying if a project is not really driven by innovation, they shouldn't be staring at a dashboard where this category is always zero. Well, or likewise, an early-stage project might really need to be focused on how people are contributing and how often they are contributing, but a stable project might need to say, we need to get people to bundle their commits better, because the thing we need to optimize for is maintainer burden as opposed to onboarding more people. So it feels like we need to give some thought to the life cycle of the project and which statistics are most important when, because if we can help the maintainers focus on the right things, it will make the job a lot easier; if we focus on everything, it will make the job a lot harder. Yeah. Do you think this captured your concern here? Sure. Okay, cool. Appreciate that. I will point out that this document is in edit mode, so other TSC members should feel free to jump in. Yep, definitely. Thanks for that reminder. Okay, let me go ahead and update. Cool. I just meant that multiple people can edit at the same time, so if other people want to jump in here and help edit, they can. Yeah. So I guess for today's call, I'd like to focus on making sure these dimensions are as complete as we can manage, and then we can use follow-up calls to talk about detailed measures and how to collect them. So I guess, on both categories, are there any other things we should consider, or are there complete new categories that we're missing here? There are a couple of early-pipeline interest things that are sometimes worth looking at, and sometimes not: like how many stars or forks there are on their repositories, how many people have signed up for the mailing list, or how many users are inside of the chat channels. It may not indicate actual activity, but it might indicate that there are folks we have the opportunity to invite to create that activity. Trying to capture that. So you're saying there could be a different dashboard that is presented to early-stage projects? Well, when I said that, I was thinking a lot about the statistics page that Ry showed us before, which talked about kind of new contributors versus mature contributors versus inactive contributors. There's this kind of people who are poking around expressing interest, kind of part of the contributor pipeline: people who are just signing up for the mailing list and lurking, or who've starred the repository because they want to watch to see what might be happening next. If we think of the maintainers as kind of managing a pipeline of new contributors that they have the potential to onboard, seeing a lot of activity at this stage of the pipeline might mean it's time to invite people to do something, or to go into your ticket repository and label a bunch of things as good first bugs. If you see a bunch of people kind of wandering by and starting to stop and stare, it's a chance to invite them to join. Yeah.
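For the early-pipeline numbers just mentioned, a small sketch of what pulling them could look like: stars, forks, and watchers are available from the GitHub repo endpoint, while mailing-list and chat-channel signups would need their own data sources. The repo name below is a placeholder.

```python
# Small sketch: "early pipeline interest" counts from the GitHub repo endpoint.
# The repo name is a hypothetical example.
import requests


def pipeline_interest(owner, repo):
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}")
    resp.raise_for_status()
    info = resp.json()
    return {
        "stars": info["stargazers_count"],
        "forks": info["forks_count"],
        "watchers": info["subscribers_count"],  # people watching the repo
    }


print(pipeline_interest("hyperledger", "fabric"))  # example repo only
```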
So that's not so much measuring the health as providing data to the maintainers so they can better grow the community. So this is not so much helping the TSC to judge; it's more to help the maintainers grow the project. Yeah. And for me, ultimately that's what the TSC is trying to do and measure as well, isn't it? I guess, yeah, I guess that's correct. Okay, so. And Jim, I added something under the maturity section. I don't know if this is the right place for it, or if it's a good measure of maturity, but: the number of good first issues. Nathan, when you said that, that popped into my head as something we would maybe want to be able to look at as well. Yeah. So are we saying a mature project should have more of these, or the other way around? I think that a mature project should have them, because it indicates that they're willing to work with others. And I don't know, maybe it goes under something different. Maybe it's not maturity, maybe it's something else, but I just feel like it says something about the community if they're willing to bring on new contributors and mentor those contributors through those good first issues. Gotcha. So this is kind of: is the project friendly to new contributors? Yeah, that's probably a good way of looking at that. So maybe it's a completely different sort of metric under community. And Hart has his hand up as well. Yeah, I was just going to suggest exactly your last point there, Tracy. I think this is really important, and I think we could have a whole category on ease of new contributions, or sort of new-person onboarding. Great. We would like projects to have smooth ways for new people to get involved. So, you know, good first issues is just one of these things, right? There are a lot of other different things that some projects do to help new contributors. So yeah, I think this is worth a whole category. Or do you think this captures that category? Sure. Yeah. Okay, cool. Thanks, that's great. I was just going to challenge whether that's a different category than responsiveness. I think how you respond to new contributors is an important metric of responsiveness. And I think one of the things that we often see, and one of the whys of how we measure time to resolve PRs and time to respond to questions, is to make sure that there's not kind of a hidden decision-making cabal inside of the project, where some folks get their issues responded to quickly and other folks get their issues perpetually ignored, because that's a good sign that the project is having internal conflict or is likely to fork or have other troubles. And I think these friendliness numbers all fit into that kind of category of: is everyone committed to working together and responding to one another? Hart, Tracy, what do you think? Well, you know, the way we've defined responsiveness right now is more about, like, time to communication. If we want to make a more general definition, like community responsiveness, that's fine. You know, I thought responsiveness had meant something more specific, but if we want it to be something broader, that's fine too. However we classify these things or incorporate them into categories is not as important as it all being mentioned. So I'm really fine with however we want to present this.
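On the good-first-issue count raised a moment ago, a rough sketch of how it could be collected, assuming the GitHub search API and a hypothetical org name. Projects spell this label differently, so a real collector would probably need a small list of label variants per project.

```python
# Rough sketch: count open issues labelled "good first issue" across an org.
# Org name and label spelling are assumptions for illustration.
import requests


def good_first_issue_count(org, label="good first issue"):
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'org:{org} is:issue is:open label:"{label}"', "per_page": 1},
    )
    resp.raise_for_status()
    return resp.json()["total_count"]


print(good_first_issue_count("hyperledger"))  # org name is just an example
```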
How about this: I think maybe the aspect you two are trying to capture, Tracy and Hart, is how easy it is for someone to get self-started, to sort of know this is something they can get their hands on before even talking to people. This is sort of the first-impression kind of thing. Whereas responsiveness is more, once a contributor has done something, after having evaluated this, how are they being treated by the maintainer group? Because if you didn't have a good list of these things, it would be pretty difficult for a contributor to get started, right? And then responsiveness wouldn't be reflected at all, because there's no communication. I think they go hand in hand. I think that if somebody's starting on a first issue, they're probably going to be asking a lot of questions, and then their time to get a response on, say, Discord is going to impact whether or not they continue working on that issue, right? Was that first impression they got of the community a good first impression, or did it really just turn them off completely, so they decided they'd go spend their time where they felt more welcomed into the community? And sorry, Aruna, I started speaking before I noticed you had your hand up. No worries, Tracy. So I assume I can speak now. I think, since we are on this topic, one of the hard things right now is for us to measure, or even know, when something like this is happening: when we know there are contributors, and they understand the code base, and they would like to proceed with their contributions; however, they're not aligned with the project's roadmap or what the project maintainers currently think, right? So there is definitely a conflict of interest among those contributors, and how would they like to address that, or where would they like to take it up from there? I have heard such concerns earlier in a couple of projects, but I don't know. I think that's a bigger can of worms if we were to discuss that topic, and I don't have answers as to how to measure it, or how we can even understand that something like this exists internally to those projects. So just to make sure I understand, Aruna, you are saying: do we want to detect the situation where there are conflicting priorities within the maintainer group of a project? Right, this is to prevent either of them from stopping contributing, or even to prevent them from creating a new fork of the project altogether. Okay. Splitting up the community. So have we seen evidence of this happening, either now or before? I mean, this is definitely a theoretical possibility, but do we believe this is happening and that it should be a concern? Like how important, how high a priority do you think this is? So I can answer whether something like this has happened, and to my understanding, I could see something like this happening in the past. Is it still happening? I don't have an answer for that. In terms of priorities, since we are not seeing it anymore, I mean, we should take it up sometime, but maybe not as a priority at present. Yeah, I feel like this is more of a case-by-case basis thing.
I can think of, you know, I don't mind giving this example because I think it's public knowledge: the split between the Ethereum mainnet and Ethereum Classic, right? That's probably a good example of the community splitting into quite independent code bases because of their priorities and philosophies. Sometimes it's not necessarily a good thing; sometimes maybe it is a good thing to allow the different approaches to become different communities. So I feel like this should be evaluated on a case-by-case basis. Dano? On the mainnet-Classic split, I think it's a bit more nuanced than that, but basically Classic has become a downstream distribution of mainnet. So that's another way that communities might split: one might stop being, you know, the main one, and one might become a downstream and just do what they want on top of it, because, you know, the code base has split. All the code, for the most part, flows from mainnet work into Classic work. Yeah, I agree; the majority of the enhancements still apply to both code bases. Yeah, definitely agree. I mean, I think communities could split for totally benign reasons too, that are actually quite good, right? I mean, like Indy, essentially, right? It sort of forked off both Aries and Ursa, right? And I think those were both positive forks. It was, hey, here's this component, we're using it, but other people want to use it too, people that don't want to use sort of our main thing, so why don't we fork it off so it's easier for them to use, right? I think those were certainly positive examples of projects splitting off. Yeah. Well, in some sense, that gets at the thing that's hard to measure here, which is: a project needs to be friendly enough to new ideas so that those new ideas can incubate within the project, as opposed to having to create their own new space outside of Hyperledger to explore that new or interesting idea, right? Because that's the paradox of open source: the easier it is to fork within your project, the less likely someone will have to make a new venue or a new community in order to explore their idea. And what we're trying to get at here is: is there that kind of conflict or shear that makes people want to leave, or is there that kind of collaboration and friendliness that makes people want to say, I disagree with you, but I want you to watch and see if I'm doing something dangerous at this point? Yeah, definitely agree there. So I'll put that under friendliness to new contributors or new ideas. Any other big ideas? I guess we have one more minute, and then we should wrap up and give the floor back to Tracy. So we probably should talk about the cadence of the call. I guess I'll model the other task forces and have one off-cycle call; we should probably self-organize to decide on the timing of that through Discord. Cool. So I guess, Ry, you would have to create the channel for us, right? The task force channel. That ability has been delegated to all members with the TSC role. Oh, great. Feel free. Will do. Thanks, Ry. Awesome. So I'll publish this, and then I'll coordinate with those who are interested to continue the discussion on an off-cycle call schedule, and then we can go from there. I'll share everybody's input from today. And I'll give it back to you, Tracy. All right, thanks, Jim, and thanks for taking us through this. So next week, the things that I know are going to be on the agenda are — they've just all slipped my mind.
They were right there, and now they're gone. So: the pull request from Dano, and then I think we're back to the beginning of the task forces, so we'll be talking about project families next week. If you have any additional input on the project families that you haven't added to any of the wiki pages or the Word document, please do that, and we'll make sure we can continue that discussion next week. So with that, I'm going to close the call. Thanks for attending today.