All right, welcome everybody to the March 30th Hyperledger Technical Oversight Committee call. As you are all aware, there are two things that we must abide by on the call. The first is the antitrust policy notice that is currently displayed on the screen. The second is our code of conduct, which is linked in the agenda.

For announcements, we have the standard announcement: the Hyperledger Dev Weekly developer newsletter goes out each Friday. If you have something that you want to include in that newsletter, please leave a comment on the wiki page for the upcoming newsletter that is linked in the agenda. Any other announcements that anybody would like to make? Okay. This is really David Boswell doing the yeoman's work on this, and we really need help getting content. So if there's anything cool going on in your project or your community, please let us know. Thanks. Right, any other announcements? No? Okay.

On the quarterly reports: I did see the final approval come in on FireFly this morning, and I merged that into the repo, so that one is completed. Thank you, everyone, for getting that approved so that we could merge it, and thanks to Jim, even though he's not on the call, for adding the information that helped us get that one merged. The Caliper report came in this week, and I did see a few people review it this morning. If you haven't reviewed it yet, please do go ahead and review it. Any questions on Caliper at this point? No? Okay.

From the past set of reports, we still have the Transact report outstanding. Peter volunteered last week to open an issue and a pull request explaining that the work there is being moved to libsawtooth. I did not see either of those things, but Peter's not on the call yet, so we'll follow up with him and see what the status is for the issue and the pull request. The Hyperledger Ursa report is also due; a reminder was sent to the Ursa folks on the 20th. Arun, it's probably worthwhile to send a second reminder and see if we can get them to provide their Q1 report. And then for Besu, it was due on the 24th, last Thursday. The maintainers seemed unaware that they had to do reporting, so there was some chat in the Besu contributors channel asking what the requirements were, and then asking for the last report that was filed. When I went to look for that last report, it seems we didn't get the Q4 report from Besu either, most likely because whoever the maintainers are now weren't aware that they needed to do that, and somehow at the end of last year we missed the fact that they didn't send that report in. So we're still waiting to see if we can get the Besu report; hopefully that one will come in. Arun, for that one too it's probably worth sending a second reminder to get a status update on when we might expect it. Any comments or questions on the reports?

Okay, so the next reports will be the Q2 reports, starting on April 13. Two weeks from now we have Cacti and Fabric due as the next reports that should come in.

On to today's discussion items. We have a discussion item related to GitHub Actions. Dave, you brought this up in the chat as something for us to follow up on. Ry, I wanted to see if you have an update for us and what the status is there.
So, the CA team here at Hyperledger coincidentally has a meeting later today with a Linux Foundation board member and GitHub SVP, Stormy, over at GitHub, and I wanted to ask about this. Just as a reminder, I did go out and do some pull requests for Cacti to auto-cancel redundant runs, to try to make things work more seamlessly. I also added some provisional runners from BuildJet to a couple of projects to see if that would uncork things, and to see how much it would cost. Every PR on Cactus — sorry, Cacti — costs about $3 to run their CI chain, so that's pretty expensive. Peter talked about trying to get some support elsewhere within Accenture — yeah, I see the wow faces down there — so I don't know if that's going to happen or not. But pending the outcome of the meeting today, I'll definitely have more to report back.

I did some testing yesterday — actually, I've been doing testing for weeks — and I found that one of the limitations is that even though jobs are done and have reported their status, they still show up as active and running, and they hold on to runners for well over an hour. On one of the Cacti test pulls that I did, the jobs were complete within about four minutes, and I just let one of the runners go until it timed out. It was well over an hour after the job was done at BuildJet that GitHub released the runner back into the pool. That may have been part of the queuing problem; I don't know. I welcome all tips and pointers, but that's where we are right now. Stephen?

So anything from yesterday would be no good, because GitHub was down, right? You know about that, I assume. Oh yeah. Yeah. So it would be super interesting if that same thing is observed on other days; hopefully they have better days, because yesterday was a mess. Yeah, okay, I figured you knew that, just wanted to call it out. Yeah, but I have it on another org where I've been playing with BuildJet from before yesterday, and I was seeing similar behavior from GitHub. I don't know if GitHub was degraded for a couple of days — it could be; I can't rule it out. I've noticed a lot of weirdness around GitHub since, I don't know, a week ago. Okay. But definitely yesterday was terrible.

And what exactly is BuildJet? They are CI as a service, and the whole reason I started looking into them was that they provide ARM builders, which was really interesting to me. So this is what I see for the projects doing ARM over there. It's on three repos right now: Cacti has spent $11 in less than 24 hours, and AnonCreds spent almost a dollar. Their whole thing is fast, fast, fast, and my testing shows that that's true. They have a free tier where you can play with it, which is what I did last week for what I was trying to do — building ARM and x86 Docker images for old Fabric releases, for 2.4, so that we could have a single container for the Fabric bits. That took forever and I didn't get anywhere. But that's the whole reason I was interested in BuildJet: they have fast ARM builders. Timo?

Um, yeah, I'm thinking, should we look more at, for example, AnonCreds — you said it cost a dollar or something — and I know for the project, for each PR, we are building the Rust binaries
for each of the different platforms and architectures, so that's a lot; they all run in parallel, and then we run tests on all of them. I mean, we could also look at changing that. The tests we run per PR could be simple — we could just run the standard end-to-end test on, for example, Linux — and the whole process of all binaries, all tests, all platforms is something we start doing on a nightly basis, or more when the runners are not in demand. We could move more of the heavy load to when runner usage on GitHub is low, and still have, for each PR, the coverage of what we most want to test. Is that something we want to look at? Or do we still want to find a solution where we just don't worry at all about the runners — I really liked that we could always just run what we want. But yeah, if there's a price tag attached to it, we may want to revamp how we test and run CI a bit.

This is part of a longer discussion. Right now I would say don't worry about it for the next couple of weeks. Like I said, we're going to talk to Stormy this afternoon. We've had conversations about how the spend for projects that have graduated and projects that are incubating might differ — I know, I'm here, I'm just trying to find the mute button. All right. Yeah, eventually — historically we've never had a spend policy for CI/CD stuff. It's been very ad hoc, and when things have gotten out of hand, we've had staff reach out and basically say, hey, you need to cut this down because this is an untenable amount of spending. But eventually, we know that projects need to spend money on this, and we'd like to have some kind of formal policy on CI spending. So yeah, we're open to suggestions on this, if people think they have good ways of making it work. A lot of people have raised hands, so I should let them talk. All right, Jen.

Yeah, just to validate one thing — I may have heard this incorrectly, but did projects have nightly builds running, like CI builds for regression purposes? That seems to be overkill. We don't host any services that are deployed on clouds; it's just code. I don't think a nightly or regularly scheduled CI is warranted. Or maybe I heard it wrong. Stephen, I'm going to skip you for a moment because I think Timo has an answer to that.

Yeah, no, no, I wasn't suggesting we do nightly builds on top of what we have. It's more that currently there are multiple pull requests in the repo each day, and each pull request runs the whole suite of tests, which is a gigantic matrix, because we have Rust binaries for different platforms,
and then we have tests for all those different platforms, and each platform also has different architectures, and then we also have wrappers in different languages. So the matrix of what's being tested is huge, and my thinking was, we could probably reduce the load of a single pull request by 90% if we just test a single platform with a single architecture for each pull request, and then have a less regular, once-in-a-while run that still tests the whole matrix. It was more of a suggestion to reduce the load by doing that on, for example, a nightly basis. All right, thank you, Timo. Stephen?

Is there any visibility into the metrics of what's being consumed? I've never seen a report like the one you showed, Ry, of that BuildJet screen. I don't know if GitHub has the same thing, but that would seem to be super useful — making visible what the projects are using. And therefore, as Timo said, we've never worried about it, so we just did stuff. Getting a message out would be helpful, to know what's being consumed.

I agree. So if I look here, I see we have six active jobs, and I can see that these are building, but there's no historical data. And I have searched through — I looked at the audit log to see if there's anything in there about builds in terms of GitHub Actions, and there's nothing. There's nothing that can tell me what project did what at what time without basically crawling the entire GitHub API for each repo constantly. Beyond loading this screen and collecting this data on my own, there's just no historical usage per repo, or per anything. I wish there was. Yeah. Okay. Ultimately, that's what we need, because then we could self-police and figure out how to make sure we're being responsible in our use of it, since suddenly it's become a precious resource. Sure, and that's what we're going to talk about with Stormy later today. Marcus?

I think this is a super interesting topic. As mentioned here before, previously nobody actually saw the bill, so they just did whatever they wanted to do in the GitHub Actions job in order to test whatever needed to be tested. From my own experience, I see that it's sometimes super easy to just test the entire thing for a PR instead of doing something smart — I mean, just double-check whether you really need to run the entire test pipeline if you, I don't know, only did an update to the documentation. There's absolutely no need to rebuild the code and all that stuff, right? Actually, I was wondering — all the discussions and recommendations we can come up with could ideally be piped directly into the project best practices. And I know that you can limit CI based on the branch or whatever, so you can skip stuff. I don't have the answers right now, I promise; next week I will try to. And I see that Dave wants to share a screen.

So if someone forks a repository and pushes to their fork as they develop, then that is not charged to the Hyperledger account, right? So only the PR which then goes into the main repository will fire the CI. Right. There's something else there: I don't have a way to limit it per project, and I've noticed that the way GitHub does it is very greedy.
So, like, whenever a project gets in — when Cacti gets in, and they have 400 jobs in one PR — once one of them runs, GitHub is going to drain that queue for that specific PR. It's not round-robin. That's why we see that when a PR starts running, it runs to completion, but your whole PR might wait forever. Dave?

Yeah, so I was taking some notes while we were talking here, adding some things to the project best practices. While we're on this topic I could show this now instead of in the next half hour. I do have some ideas in here. Of course, Ry mentioned the BuildJet runners he's trying. In last week's meeting we talked about these two: cancel-in-progress, and unchecking this "require branches to be up to date before merging" setting. Both of these will suppress redundant builds. Like Marcus was saying, you can use filters to make sure you're not doing unnecessary runs — you don't need to do a code build and test if it's a docs PR, that type of thing. And of course, as we talked about with Timo, you can do some things nightly instead of on every run, like the various platform tests; we do some of our scans nightly also. So, like others have said, people have just gotten into the habit of doing whatever they wanted because it was free to them — perceived to be free to them. Now that we're having these considerations, I think there's a lot we could do; at least some of the most egregious uses could be cut down fairly considerably. All right, any other — yeah, go ahead, Stephen.

A very quick comment, one thought on that last point about running on a schedule rather than on each pull request. I assume, but I don't know — I'm not on this on a daily basis — are there ways to run some GitHub Actions locally before the pull request? And can we make that a best practice that people have guidance on? I guess it would be on a per-project basis, but they could say, make sure you run these tests before submitting, and then we'll run the full set on a nightly basis, but you get feedback before you do your commit. You can definitely have them run when you push your change to your branch on your own fork on GitHub; you can run them there before opening the PR. Yeah, definitely, that's a good idea. And then people could at least say, okay, I ran these things as part of the pre-merge checks anyway. Good.

Any other thoughts on GitHub Actions or best practices that we could add here while we're at this point? Peter? I heard it mentioned, but I don't see it — the one where, if it's a documentation-only change, you don't need to run the build. That's this one. Yeah. And then the other one that I've been thinking about for Cacti is also filters, but I would specifically clarify that if you have a monorepo like we do, then you could also have dependency analysis within the packages. If you change one package out of the 40 packages that we have, and that package doesn't depend on any of the other packages, then you don't need to run the tests for any of the other packages — just the one. But that's for a monorepo. Yeah, I think that's a good idea; each project can do their own assessment there. Okay, any other thoughts on GitHub Actions or recommendations for best practices that could go into this document? Okay.
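A rough sketch of two of the ideas raised here, as a hypothetical workflow — the workflow name, paths, and test command are placeholders, not taken from any actual Hyperledger repo. A concurrency group cancels a superseded run when a newer push arrives on the same ref, and a paths-ignore filter skips the build entirely for docs-only changes:

```yaml
# Hypothetical PR workflow sketching cancel-in-progress and docs filtering.
name: pr-checks
on:
  pull_request:
    # Skip the whole workflow for docs-only changes.
    paths-ignore:
      - 'docs/**'
      - '**.md'
  # Triggering on push as well lets contributors exercise the checks on
  # their own fork before ever opening a PR.
  push:

# When a newer push arrives on the same ref, cancel the run already in
# progress instead of letting both drain the runner pool.
concurrency:
  group: pr-checks-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Placeholder for the project's real build/test entry point. A
      # monorepo could add per-package path filters on top of this,
      # as Peter describes.
      - run: make test
```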
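And a minimal sketch of Timo's suggestion, with an illustrative platform list: the per-PR workflow tests a single platform and architecture, while the full matrix moves to a scheduled run that can also be triggered on demand:

```yaml
# Hypothetical nightly workflow: the full platform/architecture matrix
# moves here, off the per-PR path, to run when runner demand is low.
name: nightly-full-matrix
on:
  schedule:
    - cron: '0 3 * * *'   # once a night
  workflow_dispatch:       # and on demand, e.g. before cutting a release

jobs:
  test:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        arch: [x86_64, aarch64]   # illustrative, not a real project's list
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      # Placeholder for the full cross-platform suite.
      - run: make test-all ARCH=${{ matrix.arch }}
```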
So, then, before we do get to the best practices: one thing that did come up, that's not on the agenda but is something the TOC will be responsible for approving, is a new request for a lab steward that came in this week — it came in yesterday. I wanted to bring that up to the TOC today to see if we wanted to talk about it today or wait. I know Nidhi is on the call here, who has been suggested as a lab steward, so I didn't know if we wanted to have that conversation today or not.

Yeah, hi everyone. I'll just briefly talk about myself and why I wish to be a lab steward. I have been involved with the Hyperledger Foundation and associated projects for around four years now. Currently, I mostly contribute to the HLF connector project and the Internet project, which are under Hyperledger Labs, and in addition to that, if time permits, I try to contribute to Bevel as well. I'm closely following the developments in the interop space and identity space, and the lab steward role is a great responsibility and opportunity for me to keep up with any new developments in this area. I also participate as a Hyperledger India chapter developer advocate lead. So this role will help me in assisting new projects that are proposed to Labs and guiding them as needed within the Hyperledger Foundation. I also understand that Labs has, like, a very low barrier for new Hyperledger Labs projects, and I'll try to ensure that the projects have sufficient resources so that we can make them stronger. So, yeah, that's what I wanted to say; I look forward to your thoughts regarding my nomination.

All right, thank you for that. Any comments, concerns, or thoughts from the TOC members? I think we should welcome the help. Right. Absolutely. Peter? Plus one, agreed. No concerns. So can we do a quick vote then — can we have a motion for the vote? Motion. Second. All right, thank you. Can we do, not a roll-call vote, but can we go through and do the approval? Yeah. All those who wish to abstain, please say abstain. All those in favor, say aye. Aye. All those against, say nay. The motion passes by voice vote. All right, thank you. Nidhi, welcome to the lab steward role. If you have any questions, please feel free to reach out to the lab stewards that are already there, and we're happy to have you help us out in approving and commenting on the different lab proposals that come in. So thank you so much. Thank you so much, I look forward to it. Yeah. All right, great. Thank you.

Any other topics before we get to the best practices that we should talk about? No? Peter? Just a quick one: I am still working on the issue for the Transact documentation. I just didn't get around to it, but I will get it done today; sorry about it being slower. All right, thanks Peter for the update on that, appreciate it. Any other topics before we hand it off to Dave? No? I think the floor is yours, Dave.

Okay. So in the last meeting where we talked about this, we at least started to look at the security section, and I know there's a task force or two in progress, so I don't think we need to spend time on that — let the task forces deal with it. Same with documentation. I just wanted to give you an update on the documentation task force from last week, and it looks like there are a lot of good things in there, like a common styling guide, a recommended common publishing platform, and best practices for creating documentation.
So these are all tasks within that task force. Again, I think we can delegate that discussion to the documentation task force, but I'll pause a minute here if anybody has any burning things they want to add at this time to either security or documentation — holler now. I had a wonderful time creating a new documentation website, and I'll have much to add to the documentation template that Tracy created — lots of fun. We'll hear from you Monday at noon, so you can catch us up. Alas, I can't make that meeting, but I'll certainly get the documents in. Thank you. I'll come back to security and documentation after a couple more rounds in those task forces, but let's move on; maybe we'll get through the rest of these today, or at least close to it.

A lot of these best practices relate to project management, but there were a few areas that were purely project management, so I thought we'd call them out separately. In fact, when — I think it was Stephen — did the PR about the maintainers file, it had some roles and responsibilities for maintainers, which triggered my thought that we should highlight these in the best practices doc as well. In terms of the items that are purely project management, I thought we should have a bullet about the roadmap and also one about how to handle issues in GitHub. For the roadmap, I suggest a written project roadmap for every project, discussed regularly in project meetings — maybe not every meeting, but regularly. And for GitHub issues, I think of these in two ways: outbound and inbound. Outbound issues are the ones the project maintainers and contributors create — the things they know need to be done in the project. Those should be clear, and they should have good labels, like "good first issue" for things that a new contributor could potentially pick up. Then there are the GitHub issues I consider more like inbound issues — people having a problem with the project open an issue — and those should be reviewed regularly, triaged, commented on, and then closed once resolution is achieved. I know that's just a light touch on project management — I don't want to boil the ocean here — but I'll pause for any other suggestions on project management.

David, I actually have a question, maybe for Ry: do we have a list of standard labels that are used for all of the Hyperledger projects as a default set, or is that possible? We do, and it is possible. We are using basically the GitHub defaults, and I can change those — so let me know what you want. I have taken a look at the GitHub defaults; they seemed reasonable to me. Thank you. Peter?

We took the "good first issue" label apart into four different pieces on the Cacti side, and that was popular with first-time contributors, for Hacktoberfest or other events where we had to do that. We gave it the same structure that you get at conferences for tech talks, where they have levels 100, 200, 300, 400 — 100 being the absolute beginner, 400 the expert. The thinking behind this is that there are two types of good first issues. One is a good first issue because you don't need to know the actual code of the project too well to contribute or fix that issue — sorry, I didn't say that right. One is when it's a good first issue because you don't need any experience.
The other type is a good first issue where you still need to understand something really well — you need to be an expert in it — but it's not related to the project itself. For example, when there's something deeply embedded in one of our containers — it's about the subnets that Docker creates, specifically on Mac — then someone who's an expert at networking needs to go in there and fix it. We usually make that a good first issue with a level of 400, which means that you do need to be an expert in networking, but you don't actually need to know much about Cacti itself to be able to fix that issue. So it still is a good first issue, but not necessarily in the way that someone fresh out of college would think of a good first issue. Yes, that short sentence sums up very nicely what I was trying to say, thank you. So what are the actual labels — is it like "good first issue" dash 100? That's exactly what it is, yes. The labels match the name of the original label, but then dash 100, 200, 300, 400. Any other thoughts?

A quick comment on the same thing, on the first-time contributor's experience. Something that might help, at least for first-time contributors: I know all the projects mostly focus on the user base who try to access the project and use it in their use cases, but very few of them have detailed technical explanations put up in the documentation. It could be helpful if we had that information available as well, in any form — it could be in the form of, let's say, a design document with diagrams within the documentation section, or it could be video presentations on the thought process behind the current design that the project came up with, and why they chose a certain design versus another. Instead of a user guide, it would be mostly on the design side of the project. So I do have a bullet saying exactly what you're talking about: in addition to user guides, projects should have developer guides. We already had bullets around coding guidelines, build instructions, and test instructions, but I think you're right that design docs and things of that nature would be important as well. Yeah.

You're talking about RFCs, right? I mean, I think many of the projects have RFCs with a lot of design rationale and diagrams and things like that. I agree — I know a few of the projects have come up with RFCs, but that started after all the initial design was finalized; the RFCs are being followed for the new features. It would help if we had that standardized across projects. Yeah. And I think the recommendation was to have RFCs in a separate repo. Yes. Okay. Bobby?

One of the onboarding task force's jobs is to organize these user guides and instructions, templated or however, to make onboarding much easier. That also meets on Monday, so I encourage everybody to come out for that call. Does anybody else want to make other suggestions around project management before we move on?

Okay, let's go to releases. That kind of also goes under project management, but for releases we do have a release taxonomy that was created a long time ago. I think it's still valid. It suggests following either a SemVer or CalVer scheme for releases; I think most projects are probably using SemVer. Along with that goes an overall release strategy, release process, and branching strategy. It would be good to get that written down in the documentation somewhere.
For example, a branching strategy can go hand in hand with SemVer, where you've got one branch per major.minor release. That lets you maintain a minor release in isolation while delivering patch releases against it. That's what we do in Fabric, and I think some other projects do that as well. It would be good to document a long-term release strategy. Yeah, go ahead — Arun has his hand up, and I'm guessing it's related to the release strategy. It is. Yeah, thanks Tracy, I didn't necessarily want to interrupt, but I think it would be useful if we explicitly call out that projects should document what approval process is required to cut a release, if you all agree with that; there's been some ambiguity in certain projects in the past. Okay, I noted that here. Thanks, Dave.

Okay, so some projects have long-term support releases, so it's important to document what that means if you have one — what the criteria are and what the impacts are to users. In terms of the actual release process, it's good to use GitHub Actions to automate it so you don't miss a step or have some inconsistency. For example, you can have GitHub Actions publish the artifacts, publish the release notes, and so on. And in terms of the artifacts, it's good practice to publish any binaries that you build along with the GitHub release.

Then I have an open question here about Docker images. Some projects, including Fabric, have images that we maintain on Docker Hub, and I know there's an evolution of things towards GitHub — GitHub does have GitHub Packages, which makes a nice place to put images. We'll have to figure out in Fabric how we evolve that over time, and then we'll have to go back to users as we do that. But I also had a question about GitHub Packages: does anyone know about any size limitations? If every project starts doing this, are we going to hit a similar thing as with the runners, where there's only so much space the images can take up for an organization? I'm not aware of any; I will ask that question if it comes up later today. I did go through and disable LFS — we had two projects that were using LFS for historic reasons, so I went ahead and disabled that. That was limited, but I haven't heard of any limits. Peter?

It might be outdated, because it's been at least a couple of years since I looked this up, but we had ideas to publish canary releases for every commit for our npm packages, and the limit on versions for an npm package is 1,000. There's no documentation on what happens when you reach 1,000. So the recommendation for now, unless this has changed, is to not publish npm packages on every commit, because at commit 1,000 you will likely hit some trouble. And that's what they're called in the npm world — packages. I assume there's something for Docker too, but I have not seen it documented either; it's probably one of those questions that rarely comes up, so they just don't have it out front. I just put a link in the Discord chat for this meeting: there is a limit that resets every month for data transfer, but not for storage. The data transfer allowance resets every month, while storage usage does not, and it gives a table that provides the limits for storage and data transfer. So there are definitely some limits that exist for GitHub Packages. I could add that link to the page, Dave. Thank you.
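A sketch of the kind of release automation being described here, assuming a hypothetical tag pattern, build command, and artifact directory; `gh` is preinstalled on GitHub-hosted runners:

```yaml
# Hypothetical release workflow: pushing a v* tag builds the binaries
# and attaches them, with generated notes, to a GitHub Release.
name: release
on:
  push:
    tags:
      - 'v*'

permissions:
  contents: write   # required to create the release

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make dist   # placeholder for the project's real build
      - name: Publish release notes and binaries
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh release create "$GITHUB_REF_NAME" dist/* \
            --title "$GITHUB_REF_NAME" \
            --generate-notes
```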
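And on the Docker Hub versus GitHub Packages question, a minimal sketch of publishing an image to ghcr.io from a workflow, assuming a Dockerfile at the repo root; the tag is a placeholder:

```yaml
# Hypothetical job publishing a container image to GitHub Packages (ghcr.io).
name: publish-image
on:
  push:
    tags:
      - 'v*'

permissions:
  contents: read
  packages: write   # allow the workflow to push to ghcr.io

jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```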
Okay, other thoughts on release processes and so on? Hey, Dave, for the very first bullet: would it be worth going into more detail on when you'd consider either beta releases or release candidates? It seems like every project does things a bit differently. I think it's called out fairly well in this document we linked to around the release taxonomy. I know this is old and a lot of people haven't seen it, but there's a pretty good description of what should be called a preview versus an alpha versus a beta, and there's consistent naming between them. Oh, that's great — I haven't seen that before. Looks like Peter has his hand up.

Okay, I wanted two more quick items as recommendations, because they're not easy. One is to try to have reproducible builds: if, a year from now, I check out the specific commit you tagged your release with, I should be able to reproduce binaries that match bit for bit. And the second is that it is recommended to sign the commits, and then also sign the binaries. But I know this is a lot of effort, and I know some of it is underway in the security task force, or maybe the OpenSSF stuff, so for now I just put it down as a recommendation. So how would you summarize that in a bullet? Okay, two bullets. One: have reproducible builds — and there's a website for that, called reproducible-builds.org or something; once I find it in a search — yes, I'm putting it in the chat — I would put that link there as well, just so that if someone wants to read a lot, they can. And then the second bullet is: sign the commits pertaining to releases, and sign the artifacts, if possible.

So I think the task force that was proposed talks about artifacts — does it also consider commits? How do you sign a commit? There's a --gpg-sign flag on git, and if you have set up your keys, then it will just automatically sign the commit itself. Say that again — dash dash what? GPG dash sign. Yeah, you have to do an initial setup, basically, and then once it's set up, it's automatic. It actually shows on GitHub, where it shows up as the green "Verified" label on commits. Okay, so we'll get a link to those instructions and put it here. Okay, and then the artifacts part is kind of the broader task force proposal, right? Signing the artifacts, yeah; I don't think that task force has kicked off yet, but it's in the queue. And honestly, the trouble with GPG signatures is that you have to deal with keys, and people are generally not good at handling keys for a long time — me being the first. When I was trying to get that set up, I screwed up and lost my key in a matter of very little time. It was embarrassing; I'm on my third key. There you go. So I said optional, recommended: do it if you can, but, you know, good luck.
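A minimal sketch of the setup being described here, assuming a GPG key pair already exists; the key ID, commit message, and tag are placeholders:

```sh
# One-time setup: tell git which key to sign with (placeholder key ID).
git config --global user.signingkey 3AA5C34371567BD2
# Optional: sign every commit by default, so the flag isn't needed each time.
git config --global commit.gpgsign true

# Sign a single commit explicitly (--gpg-sign, or -S for short).
git commit -S -m "Prepare v1.2.3 release"
# Release tags can be signed too.
git tag -s v1.2.3 -m "v1.2.3"
```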
Okay, good suggestions here. Anything else for releases? That's the end of releases. So, we talked about CI a little bit already in terms of GitHub Actions and some things you can do to limit the number of runners. There are also just basic recommendations here for the types of checks you might want to do in CI: of course, I think we require the DCO check, and unit tests, integration tests, and various security scans are also a good idea. Again, you could consider moving some of those to nightly if they are either lower value or expensive to run. There are some ways to get reports on test coverage — I think we found you don't have to do that on every PR; you can do it either on demand or nightly, when you want to check it. Of course, it's always a good idea to keep your CI pipelines clean and green, as I say. A lot of times there will be failures or flakes, and if you don't address them quickly, there's a snowball effect — it's harder to untangle them if you don't keep up with them as they come. So it's always best to keep it clean and green at all times. We don't always do that, but we try. And then there's also a task force around automated pipelines — I forget who proposed that one, but I think it's in the queue, right? I don't think it's started yet. That's correct, and I think Stephen was the one who suggested it. Okay, so it's good we'll have some people going deeper on this topic, but we have a few minutes left if anybody else has ideas around continuous integration best practices. Okay, we only have one more section; we could try to do this in three minutes... not. Say that again?

I was just thinking through — it's not an idea, but rather something that we should think about, because I don't have a solution handy. I remember there was an incident where we had to block people from spamming across different repositories. This may be directly related to the maintainers' experience as well. Instead of just thinking about new users and the experience for the new people who come in, I think we should also discuss the current maintainers' responsibilities and what current maintainers go through. This happened very recently, as I can remember. I don't have a solution handy for this — I don't know a solution that we can propose — but there were incidents where somebody tried to add malicious code across different repositories, and in a few of the repositories it was also merged. There is nothing to blame the maintainers for here, because they rely on the pipelines, and those tested fine; if the build and the tools are saying it's right, the maintainers have very little else to go on. Those changes were reverted after some period. This is something we need a solution for, and I don't have one handy — I just wanted to bring up the topic.

So are you bringing it up in terms of how we propose to block spammers like this, or is the proposal more around CI to catch bad commits that are coming in? Which area did you want to take that? As I said, I don't have a solution handy; I don't know how to solve this problem. But maybe people can think through how they would like to resolve it. I guess my initial thought is: how was that malicious code eventually discovered, and why wasn't it caught first? It seems like it falls on the committers who reviewed the code, right? Right. As far as I recall the situation, some of the first PRs weren't necessarily obviously malicious, and then as it went on they got more obvious. Then we went back and looked at all the PRs that person had done, and we realized, yes, they were all trying to do nefarious things. I think it was a pretty elaborate, progressive, quote-unquote attack. I don't know what to do other than rely on humans doing the right intelligent investigation; I'm not sure this can be automated, at least for the type of incident it sounds like happened here. So we can have best-practice guidelines for maintainers, to make them ready for such incidents.
So I'll put a bullet here around pull request checks: be wary — I mean, this is obvious, but we'll work on the wording here. Okay, we're out of time. Yeah. I think you're up next with the security task force for next week, so we'll look forward to that discussion, and everybody have a great week. Okay, thanks everybody.