Okay, are we good to go now? Yes, looks great. Okay.

So, just a quick note first: a big thanks to the rest of the team, David Chang and Dave B. They were great with the whole process, and with being volunteered for it — with a special note for David Lopez, who actually volunteered, even though he was on the team last time and could have gotten a free pass. He stepped up to the plate for us.

The process we went through for developing our plan: we looked back at the 20.09 and the 21.01 release testing and what they had accomplished — the number of tutorials they had done in 20.09, the amount of time they had taken, and the number of release-note items and issues they had worked through. One thing to point out is that in 20.09 they actually went in and tried to fix the bugs they identified, and you'll notice they only made it through three release-note items in two weeks doing that. For 21.01 that process was streamlined quite a bit: the testers just identified the bugs and opened issues, and didn't actually fix things. So even though they only spent approximately two or three days on it, as opposed to approximately two weeks, they made it through more tutorials and many more release items. That's the process we followed for 21.05.

We also looked back at the recommendations we had made at the 21.01 release meeting: that we have a more structured testing plan, and that we open issues even more aggressively — do it right away, as opposed to waiting until afterwards.

There was one complaint — or not complaint, but issue — raised with the 21.01 process: there were simply too many items in the release notes to cover. In 21.01 there were 283; in 20.09, 350. So you had to slog away at this list, and the list just never got any smaller; we didn't really have a good sense of getting anything done. We tried to address that. It was not easy to identify, out of those 300-some items, which ones we should actually be paying attention to — which were the high-priority items and which were the low-priority ones. It also wasn't always clear how to test an issue, so we spent a lot of time in 21.01 just figuring out what to do, and some of that may have duplicated what could have been done by automated testing. And not everything is, or should be, testable on Main: Main might not have the correct tools installed, or a test might require admin access, et cetera.

So for our 21.05 release testing plan, the dev team went through the release notes and curated a small list of high-priority items that we could focus on. I also went through all the pull requests and pulled out the ones where somebody had checked the little box saying the PR required manual testing, or where there were instructions for manual testing — out of the 369 pull requests there when I generated my release notes, 43 actually had instructions included on how to do the tests. (A sketch of how that can be scripted follows below.) And of course we used the Galaxy Training Network tutorials for scenario-based testing.

We followed basically the same testing protocol as before: when to open an issue, when not to open an issue, what we can ignore, and how to verify whether a problem is relevant by testing against previous releases. Again, we're not fixing bugs — we're just trying to identify as many as we can in the limited timeframe we had set aside for this.
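As a side note, that pull-request scrape is straightforward to script. A minimal sketch with the GitHub CLI and jq — the milestone name and the exact checkbox text from the PR template are stand-ins, not the real strings:

```sh
# Merged PRs on the release milestone whose body has the manual-testing
# box checked (milestone and checkbox text are placeholders).
gh pr list --repo galaxyproject/galaxy --state merged --limit 500 \
    --search "milestone:21.05" --json number,title,body |
  jq -r '.[]
         | select(.body // "" | contains("[x] Instructions for manual testing"))
         | "\(.number)\t\(.title)"'
```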
So, a quick summary of our results. We made it through 22 of the tutorials, but only nine of the release-note items. While that doesn't sound like a lot, only ten items had been curated as highly important by the dev team, and we made it through all but two of those. So at least in the sense of feeling that we got something accomplished, we could look at that. And the one spreadsheet we had for tracking was mostly filled out, which makes us feel good. Of course, there were two other spreadsheets that were mostly not filled out — I don't know, maybe empty — but that's beside the point.

We managed to open 19 issues. And speaking back to "open the issues right away": I still have two left to open. One was from the Singularity tutorial — one of the upstream repos now has a bug in it, so they need an issue opened. We actually checked the pull requests to see whether a fix had already been merged; I know Marius had said it was going to be addressed before we got there.

Of the 22 tutorials we did, 12 were from the admin section, eight from the Galaxy interface section, and two from the introduction — so we did more of the admin, background stuff.

Of the release-note items we covered, four were bug fixes, two were new features, and three were enhancements. That should be taken with a grain of salt, because it just goes by the "kind" label that was assigned, and those are not mutually exclusive — this should probably be a Venn diagram, not a pie chart. Again, we only covered nine out of the 317 — you'll notice that's a different number; I was probably generating slides and looking at pull requests at different times, which is why the numbers differ slightly. And again, we opened 19 issues, all of them in the main galaxy repo.

I guess the big takeaway from this release testing: either the release process is getting much better, or the testers are not doing as good a job, because we found no real major issues — all little niggling paper cuts, mostly. No major issues found in the tutorials either; we did add a few PRs with typos fixed.

Some of the problems we identified were, as I said, mostly paper-cut issues in the user interface, around data libraries and workflow errors. I think this server error here, #11986, was the blocker: job handlers were acting as workflow schedulers when they shouldn't be doing that.

The tutorials, again, no major problems — some outdated screenshots; as things evolve, those slowly suffer from bit rot. Related to that: a lot of the changes you're supposed to work through are expressed as patches, diff files that you're supposed to apply, and those almost never apply cleanly. Depending on what order you go through the tutorials in, the line numbers might be wrong or the surrounding context might be different; I found very few of those patches that actually applied unless I went in and edited them myself. As I said, the minor typos were fixed on the spot. And there was one irregularity I found with the way the noodles in the workflow editor maintained state, but it's a pretty arcane error — you had to disconnect and reconnect things in a certain order to expose it.
So I think this is — well, I shouldn't say all of them — but that's 14 open and 20 closed for the issues we had found. And the blocking issue was #11986, with the job handlers acting as workflow schedulers.

So, just to recap: no major issues found. The curated release-note list was really handy in making sure we laser-focused on the items we needed to. So was the automated classification of items based on the PR template that was used — trying to skip the ones that could be, or already were, tested automatically, and focusing on items that did have instructions for manual testing. But even when they did have instructions for manual testing, it wasn't always clear — particularly for me — what they wanted us to test. The PR template helped a lot, but people may not have filled out all of the steps, and it seems to me it was written by domain experts for domain experts. Coming from a non-biology background, a lot of the testing steps could have been a little more specific. Something like: step one, map the sequence to the human genome; step two, identify interesting variations; step three, verify that they're interesting variations. It would be nice if the instructions were: click this button, go to this dialog, click that button, make sure this thing turns green — much more detailed, fine-grained steps on what needs to be done, almost: with this tool, do this step.

So this is, again, a recap of the previous slide: we found the testing much easier to do with a curated list of issues, but we didn't get through as many of them. Part of that is maybe due to point number three there: we were also testing on AnVIL and GVL instances. I know for myself, I would usually start a test on AnVIL or GVL; if I noticed a problem, I would test on the other platform, then test it on Main, then try to find a 21.01 instance — which I'd have to spin up myself — to see whether the problem existed in 21.01 or was a regression. Just having that many platforms to test on tended to slow things down a little bit.

Similarly with the availability of the virtual machines: AnVIL and GVL are of course running on virtual machines, but we could also spin up specific Galaxy instances when people wanted to do admin testing. If you're running through any of the admin Ansible playbooks, you can't really be doing that on Main — I'm sure Nate doesn't want us installing tools, et cetera. So it was nice to have these standalone VMs to work from, which let us focus 100% on testing the release.

Now, some burn rates, and how we compared to past rounds. For the 20.09 release there were only three people doing testing — maybe four; I'm not too sure whether Marius was just listed as coordinator rather than tester, in which case it should be bumped up to four — but they were at it for almost two weeks, 14 days. They covered three release-note items and 21 tutorials. NPP here is release-note items per person per day, and likewise for tutorials per person per day: on average, they were only making it through half a tutorial per person per day. For 21.01 we can see those numbers increase quite a bit: they managed about three release-note items per person per day, and two and a half tutorials.
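For reference, the arithmetic behind those per-person-per-day rates is just the item count divided by the person-days, and it can be turned around for planning:

```
rate = items / (people × days)
e.g. 20.09 tutorials: 21 / (3 × 14) = 0.5 tutorials per person per day
people needed = items / (rate × days);  days needed = items / (rate × people)
```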
For 21.05, we didn't have as many people, we didn't go at it as long, and we didn't cover as many release-note items — as I said, that's possibly due to having more instances to test against: GVL, AnVIL, Main, and the others. So we only made it through about one release-note item per person per day. But our tutorial numbers were up a little: almost three tutorials per person per day. And hopefully we can keep tracking these numbers, so when it comes to 21.09 we can look and say: we've got this many issues and this many tutorials, we want this many people, so this is how many days we're going to need — or: we have this many issues and this is how many days we want to spend, so we're going to need this many people. Just some numbers for making informed, data-based decisions.

Then, improvements we could make next time — some of which we already made this time. We didn't pay any attention to the tool tests; they were listed in the plan, but we decided early on that's a waste of time, since it's already automated. We could maybe curate the list of tutorials to reduce a lot of the overlap: if you go through the rule-based uploader tutorial, there are a number of other uploading tutorials that cover similar ground, and the same with R — there are a number of tutorials on R and similar things. Those could perhaps be curated down to focus on the big ones.

Then there's the use of labels and tags to simplify classifying how things should be tested. As it is right now, to see whether something included instructions for manual testing, we checked whether they had put a check mark in the little box beside "includes instructions for manual testing", or whatever it was — so it was basically looking for a string match in the PR body. That becomes much easier if we just stick that in a label. I know there's been lots of discussion about how issues should be labeled, but if when somebody creates a PR they could just tag it in some way — it doesn't really matter what the tags or the labels are, as long as it's tagged — either when the person creates the pull request, or when it's merged, so the person approving the merge can say: hey, we need to spend some time on this during release testing. And of course, stress in the template how to test: click this button, do this, do that — and screenshots are always handy. Again, reiterating: tags for PRs that need special attention.

And it would be nice — this is an ongoing discussion, always — to see how much of this we can automate with Selenium. There's ongoing work to try to use Galaxy Tours for Selenium tests; we need to update the tours, because a lot of them are currently broken, and they're probably going to break even more once the new history comes out, just based on where you're supposed to click on things.

And then, something that would have been handy this time: a side-by-side environment for testing new releases. When we did find a bug, we wanted to see whether it was a regression — which brings me to the disaster recovery plan.
You know, if something goes wrong at TACC — major disaster, civil unrest — and Galaxy Main goes down and can't come back: what is the disaster recovery plan, and can we get Main going again on a new instance? Well, it seems like release testing is a good time to test that. We can test our disaster backup, and that potentially gives us a previous-release instance that we can use for regression testing. Or — I forget if it's test.galaxyproject.org — maybe pin that during release testing to the 21.05 release as well, so we have something in addition to Main that we can test against.

And back to the ideas for improvement: not only can we automate a lot of the tests, we can automate a lot of the release-testing process itself, like generating these lists. This time Marius went through manually and generated a lot of the stuff, but it can be generated automatically. It's literally a one-liner — granted, a long, ugly one-liner — with the GitHub CLI tool. If we have some sort of label — "test-manual", "manual-qa", it doesn't matter what the label is — we can pull out just the things with those labels. We can filter on other things as well: the state of the PR (is it open, is it closed, has it been merged), what milestone it targets, what date it was merged or created. In addition, the GitHub CLI can output JSON, and format that JSON with a Go template — that's what I actually used to generate the spreadsheets we worked from. I talked to Marius about adding this to the Makefile so it could become part of the release process, and I was about to do that when I realized it requires having the GitHub CLI tool installed locally, so I don't know if it's a good candidate for inclusion in the Makefile. I know there's a GitHub API client already in the code that could be used for this instead, but this approach let me generate the spreadsheets in a few moments.

Maybe we can clarify here: this is not the release-notes generation, because the release notes are already generated — this is getting a list of issues you may or may not want to look at. The way the release-notes automation works, it outputs HTML, or, well, Markdown or RST, I don't remember which — but yeah, that's the idea.

Yeah, I should clarify that. I keep saying "release notes", but that's not really what I mean — what I mean is the list of items that we selected for testing and that I put in the spreadsheet. Marius makes a good point: that's not the actual release notes, which are human-curated. The release-notes automation does RST, reStructuredText, and also HTML; for my purposes I wanted something machine-readable that can be processed. So they're not really release notes — they're a list of items.
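For the curious, a minimal sketch of that kind of one-liner — here assuming a hypothetical manual-qa label; the real command filtered on more fields and used a different template:

```sh
# Merged 21.05-milestone PRs carrying the (hypothetical) manual-qa label,
# emitted as tab-separated rows for pasting into a tracking spreadsheet.
gh pr list --repo galaxyproject/galaxy --state merged --limit 500 \
    --label "manual-qa" --search "milestone:21.05" \
    --json number,title,url \
    --template '{{range .}}{{.number}}{{"\t"}}{{.title}}{{"\t"}}{{.url}}{{"\n"}}{{end}}'
```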
The other suggestion for improvement is that we start earlier — much earlier. Not so much the testing itself, but at least the process of setting up the tests. We know pretty much when the 21.09 testing is going to happen, so picking the team should happen soon — at least the coordinator, so they can get up to speed and get things in place, so that hopefully everything is a button-click away when it's time to go. The rest of the team should be picked before the code freeze happens. And here I'd really like to thank my teammates, because I missed Marius's announcement when the release notes were published, and didn't notice for a couple of days afterwards. So I picked the team in a hurry, emailed everybody, and got everybody going on short notice — I'd like to thank the team for picking up the balls I kept dropping. It would be nice to let the team know well in advance so they can clear their calendars. I should also stress that I did send out the list of people to be curated by the PIs — so if you got the email saying "thank you for volunteering", you know that your PI thought it was okay for you to do it.

And again, this is another thing that can be automated, and that's what I did: I scraped the participant pool from the working-group spreadsheet on Google Docs, eliminated people who were either PIs or had already participated in release testing — and Nate got excused — and once I had the list of people, I just selected names at random. The n can be based on the amount of testing to be done: as I said, we've now got some burn rates, so if we identify how many issues and how many tutorials we want to look at, we can do some math to figure out what n should be, and then select the names.

And that is all from me. I'll stop sharing my screen, and we can have some discussion.

Thanks, Keith. One of the suggestions you mentioned for the future is requiring tests for bug fixes and features. We discussed this during the 20.09 release testing, and the consensus was to put it off until better times, because writing tests was — is — hard, and there was no documentation. Now we have great guidance on how to write a test and what tests to choose, mainly thanks to John. So I'm wondering: should we gently start trying to require tests in these cases, like other major repos do? For example, if you fix a bug, it's only natural to have a test identifying, exposing that bug. What do you think?

I think that's a great idea — sort of test-based development. Any bug fix should come with a test that exposes the bug, so you know what you're fixing, and you know when you're done fixing it, because your test passes.

But can we reasonably require the community to write those? Or do we have the time and resources to help in each case where there's a PR which is good but lacks a test, and the person who submitted the fix doesn't know how to write one?
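To make the idea concrete: the kind of test being asked for here is a plain regression test. A minimal sketch in pytest style, with an entirely made-up function and bug, just to show the shape:

```python
# Hypothetical example: suppose the reported bug was that join_url()
# silently dropped the query string. The fix ships together with a test
# that failed before the fix and now guards against regressions.
def join_url(base: str, path: str, query: str = "") -> str:
    url = f"{base.rstrip('/')}/{path.lstrip('/')}"
    # The buggy version returned `url` here, ignoring `query`.
    return f"{url}?{query}" if query else url

def test_join_url_keeps_query_string():
    # This assertion exposed the bug: it failed until the fix above.
    assert (join_url("https://example.org/", "/datasets", "limit=10")
            == "https://example.org/datasets?limit=10")
```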
I want to push back on that. First — I mean, we added the template as a sort of small nudge towards writing tests. I hate the template. We've had it now for three months, and I think it's mostly a waste of time. We're getting very few PRs from the community at this point, and it was meant mostly for the community. Everybody should be writing good commit messages already, and if you have a good commit message, the template messes with it, right? You need to edit it to make it fit. And not every test is a good test: you can have tests that lock in a certain behavior that we may actually want to change later. We have some integrations that are difficult to set up. So I would say: as a reviewer, you can always request tests, and you can always write the test yourself. For a lot of PRs, I think it's good to be able to say thank you when somebody comes with a bug fix, merge it, and move on — because there's also a ton of code we don't care that much about. Like, we had a pull request from somebody who was the only person using that feature. So that's my two cents.

I mean, I wrote the pull request template, and I can see it: the template is nice for getting contributions from the outside, where it's maybe not exactly obvious how to test them, but it's meant for the general public more than for us.

So would it be safe to say that it would be great — really, the preferred way — for each PR to come in with a test? Ideally, you would, but you'd be willing to accept things without a test when it's prudent to do so. Making testing a soft requirement — a "really preferred" requirement — making sure people are aware that we want good testing, that we'd love 100% coverage, even if from a practical standpoint maybe that's not possible. For certain types of PRs it would be good if it were at least a soft requirement — not for all types.

And speaking of the template: first of all, yeah, I'm with Marius, it's annoying. I know how to write a good PR message which includes all those points in my own formatting, which is much more readable, I think. But I see the point, and what we could do, if the community considers that we do need a template, is have different templates for different types of PRs. I believe SQLAlchemy has that: if you're submitting a bug fix, you get one template; if you're submitting a feature, or a minor change, that's a different template. That's one thing. But in terms of requiring tests as a soft requirement: if you fixed a bug, try to write a test. If it's some kind of configuration change which doesn't lend itself to an obvious test, well, then no. A functional or unit test we should require, because anybody who can write code should be able to write one; but an integration test, where you have a database, you have a server, et cetera — those are more difficult to set up and automate.

I think the middle ground would be: if you're changing one function, write a test for it; if you're making a change that affects multiple systems, then it's okay if you test it manually for now.

You mean unit tests, right? The simple ones.

Yeah — unit and functional tests should be mandatory; integration tests, it depends, I would say.

I want to clarify: these need to be good tests, right? They should not prevent us from working in the future — we first have to understand whether the unit test or functional test or whatever is relevant or not, and that's a challenge.

Well, that would be part of the review: if somebody submits a unit test which is meaningless, it shouldn't get merged — they'd have to fix it or remove it.

I mean, one problem I've faced — and in support of your point — is I've seen tests that have dependencies between them.
Ideally, you should be able to run unit tests in any order — random order — but there are dependencies, so if one runs before the other you see an issue, and if they run in the other direction you don't. If somebody submits a test that depends on some other test, you obviously have to say: break these up; there should be no dependencies between tests.

Absolutely. Our code base is not perfect, and this particular example you bring up has been on the to-do list — and is now on the testing working group's to-do list — for at least two years, I think.

Hopefully the time spent up front writing these tests is saved by someone not having to debug something later on — that's the whole goal, right? It's not time wasted; in my experience it pays back by orders of magnitude in the long run.

I mean, if you start with it, sure; if you come in afterwards, I'm not so sure.

Yeah — and we don't want to scare someone away from fixing a quick bug on some large, complicated thing, because now they'd feel obligated to write a test for something that should have had tests to begin with but for whatever reason does not.

I guess that was my question. Let's say there's the most trivial bug — a one-character fix. What would be the minimum test that could get integrated? Can you have an equally simple test? Usually the problem with testing is that you have to have this whole harness and a lot of infrastructure behind it — are we in a position where a really simple test could be almost a one-liner itself?

It depends where the character is. Galaxy's code base is not all easily testable. If it's a wrong character in a simple, isolated function which can be run without mocking up half the code base, then sure — it's a tiny little unit test. But if that wrong character is part of a 300-line or 1000-line method which cannot be easily isolated, then it's a very non-trivial testing case, which requires not just writing a clever test but refactoring that part of Galaxy — breaking a whole lot of dependencies, factoring that 300-line method out into something more modular, et cetera. It depends.

And I would go out on a limb and suggest that if there's a huge block of code, 300 to 1000 lines long, that's difficult to unit test, we open an issue saying this section of code needs to be refactored so we can unit test it. That's always easier said than done, but at least identify it and tag it with something. Then if you want to do something, here's a section of code: refactor it, and your acceptance test is that, hey, now we can test this section of code reliably with unit tests.

Yeah, but we might exceed GitHub's limit on the number of open issues.

That's everything in life, right? I did not realize GitHub had a limit on open issues.

We might push them into creating one. But yes, in general, absolutely — that's what we should be doing, what we should be aiming for.

Yeah, these are sort of pie-in-the-sky goals.

I wouldn't say that. I mean, I'm okay with it — if we want to require tests and be more strict with the reviews, that's fine.
But then I want more people reviewing. We don't have enough people reviewing, because oftentimes it falls on the reviewer to write the tests, or to ask for them multiple times — especially when finishing up the release, it's a big burden. And you know, even if you're not a committer, you can review stuff.

What about — so, committers... yeah, it requires a committer to merge something, but what if the testing group could be available to add tests to things when requested? Say you look at something and think: this would be better with a test, and I don't know if this person's ever going to get to it. You could just ping the testing group.

The testing group is all committers anyway, so we're already spread pretty thin. — Is it all? I mean, I don't think it is — but it could be. — I would say everyone in the testing group has another group where they're probably a little more vital; I think it's Marius and myself. — Everyone should be in the testing group, for that matter. — I mean, that's fair.

I don't want to derail the conversation too much, but I'm a little saddened to hear that the template isn't going over well. I'm a person who used to write very intricate PRs — I've been slacking lately — but I've liked the section where you specify how to test the code. I feel like that's had the desired effect of nudging people in a certain direction, and I would think it would help the release-testing people. Is it the top two sections we don't like, or is it any sort of template?

We don't know for whom it's meant. I write it in the shortest way that lets you try it or reproduce the bug, and oftentimes it's redundant with the tests included anyway. Isn't there just a checkbox for "tests included"?

Right — "existing test coverage" or "new tests added". I guess there is. Yeah — the original template didn't have it, but then I was like: I just added a test, I don't want to write all this stuff out, so I think we added that. So what I do is I check "there's a test here", or "there's an existing test", and I just delete the other lines.

I would think the manual piece was there just in case the other ones didn't apply — it's meant as an if / else-if / else sort of thing.

Yeah, that's how I do it: I just delete those lines if there's a test. So that way, if you don't have a test, you do have to write out the steps. And that's why it encourages people to do the automated testing: writing out how to manually test is a small thing compared to writing the test case, but it's a little thing that nudges people in the right direction, I think.

I don't think I have a problem with that particular section of the template. It's at the bottom, it doesn't get in the way, and it provides valuable, structured information regarding testing. My annoyance was, as you said, the first two parts: "what did you do" and "why did you do it". Sometimes the justification or explanation does not fit well into two paragraphs with two headings. If we eliminate that structure, maybe it will be more flexible — just one paragraph: what did you do and why did you do it.
Yeah — justification, overview, call it whatever. But the template already doesn't jive with our recommendation for how to write a commit message, right? You open a PR from a commit, you've carefully written your message, and now you have to pick it apart and rewrite it, and it doesn't match your message anymore.

But the PR usually contains more than one commit, so it doesn't really have to match the commit message.

It's my academic past — I feel like I'm plagiarizing myself when it's the same.

Yeah. So based on this conversation, I guess my preference would be that we merge those first two sections and put them in as a placeholder, like: "replace this with a description of what you did and why — be sure to link to relevant issues", et cetera. And obviously I'd keep the licensing part I added, which I think is nice — right now we're making everyone MIT-license their contributions.

I get it, though, because when I was writing PRs before, sometimes I would put the justification first and sometimes what I'm doing first — it depended on the story I wanted to tell. Maybe we should iterate on it rather than throwing it out, I guess is what I'm saying. And somehow clarify that section about testing — like: just delete the subsections that are irrelevant to you.

I think there could be good time savings for us, actually, if we also had an issue template. It's a bit more applicable there — it helps with the issues that don't belong on GitHub at all and should go to Help instead.

I like that idea a lot. And the common things people forget to mention — like which release you're on.

Yeah, that's a good idea. Actually, I had the number — I forgot to write it down — I did do a comparison, and I think out of the 369 PRs we had, 150-some of them used that PR template. Marius was by far the biggest committer of PRs that used the template, so he has the most experience with it. He was also the biggest offender for merged PRs that didn't use the template — but those were all one-liners where the template just gets in the way: "merged release into dev", version bumps, things like that don't really need it. So about half of them used the template.

Or did the ones that didn't just predate the template?

No — well, when did the template come out? Because I only looked at PRs since the last release testing.

I think we finished it at the last release-process meeting, so there were going to be some PRs in there from before it.

So — sorry — what John said: maybe our template should be not a fill-in-the-blanks template but a template as guidelines for how to write it: include this, this, and this, and then you're free to use your own format as long as you stick to the guidelines.

I still want the footer in every PR.

Yes, the footer would be fine. And I liked what you suggested: rather than "here is the heading, include this", a "replace this with ..." placeholder for the top part. I'll do a PR for that, I guess. Does someone want to take on the issue template? Any volunteers to get that going? ... Oh no — all me, then. Okay. So: issue template. Got it — mention the Galaxy version; point to Help if the report belongs somewhere else. What else, Marius, were you saying it should have?

I think those are the most common things: which version — ideally which commit, if you know it — and which Galaxy server you were using.

Should we mention who it impacts?

It should be lightweight, right? It shouldn't get in the way — really, just as small as possible: what went wrong, how to reproduce it, and the circumstances of it.
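A minimal sketch of what such a bug-report template could look like — wording hypothetical; GitHub picks these up from .github/ISSUE_TEMPLATE/:

```markdown
---
name: Bug report
about: Report a Galaxy bug (usage questions belong on https://help.galaxyproject.org)
---
**Galaxy version (or commit) and which server you were using:**

**What went wrong:**

**How to reproduce it:**

**Expected behavior:**
```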
So: when I make issue tickets against tools, I'll make an issue at the tool repository, and then I'll have sort of what caused the problem and whether it impacted users — because we're making software, we want people to be able to use it and to like using it. And I think sometimes you might want to prioritize things based on whether something breaks core Galaxy, or whether it makes the experience better or doesn't. And then I make a separate issue in the usegalaxy-playbook repo that's user-focused: I put in the workaround, but then I also put in the development details, so it has how to reproduce it. Those could definitely become test cases, because I'll have exactly what the inputs are, what the environment was, where the problem was, and what it should look like. Then you can just rerun those, and if that works, I know it's done and finished. I wonder if we can adopt that in the development area in some way — you've got sort of a recipe, a place that shows where it's broken and then how you can tell whether it's fixed or not, in an abstracted way. I just tend to do my little scientific version, but you'd want to abstract that, to know that it's finished. Am I making sense, or is that silence?

It's like: what's the expected behavior, right? What the issue was, and then what it should look like when it's done.

Right, and I tend to list all that out, especially for small tool issues that are complex to define. There's an example, it's got small data, it's representative, it shows where the problem is. I do comparisons between different servers and releases, including a local one that I spin up. And then it's got some context, and you can just rerun those really quickly, and if it works, the expected result matches — and all of that is written out. That could be a test, right? If you're trying to fix something.
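What's being described maps fairly naturally onto Galaxy's tool-test format, where the inputs and expected outputs live in the tool XML and planemo test can rerun them automatically. A minimal, hypothetical sketch:

```xml
<!-- Hypothetical tool test: small, representative input plus the
     expected output; parameter and file names are made up. -->
<tests>
    <test>
        <param name="input" value="small_representative.bed"/>
        <param name="mode" value="strict"/>
        <output name="out_file1" file="expected_output.bed"/>
    </test>
</tests>
```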
Yeah — I mean, again, we want to keep it simple. If people just want to write something, that's fine; it's just a small reminder: please include the version — that's usually the most common omission — and everything else as needed. The only thing we really need to react to is bugs, and if the bug report doesn't include the version, or which server it was on, it just takes so much more time. I want it to stay easy for everyone to write them, but I also want to remind people to be concise, because too long is also a problem.

Okay, so it's context again — the context of the bug, I think, is important to know about, and it might help rank what we want to test.

Yeah, but if that doesn't come naturally, I don't think we should force it. And sometimes the context isn't helpful in solving the bug: if it's some tool-form issue — some widget is wrong — a bunch of context about the scientific application or the particular tools is sometimes helpful, but sometimes it sort of shrouds the underlying issue. A developer might just go: oh yeah, these radio boxes need to be a checkbox, right? And there's also a tendency to put many things in one issue, and that's not helpful either, because then you don't even know where to start, or what the actual problem is. If you ask for too much context, you're going to get too much context.

Okay. I'm just wondering: the person who found the original bug knows where the problem is, how it was encountered, the context of the usage — whether that would help us prioritize what to test. Because there are certain things that are going to impact our users — that are going to make them like using Galaxy, or not like using Galaxy.

I mean, users certainly do post issues.

Right — expert power users, yes. Otherwise, it should go to Help, I think. I mean, I don't have a monopoly on that opinion.

I guess what I'm wondering is: what I was doing for tools that are broken and have fixes — can we automate that part, where I'm describing what it is, how to reproduce it, and how to tell if it's fixed? Is there a way we can make that into a tool test, rather than somebody having to go click around? Because I had stopped doing that, and then it comes up again: when the pre-release first shows up on Main, I go through a bunch of things that I knew were problems before, and I go back and check them. I'm wondering if I could — should — write those up differently. Is there something I could add at that point, or that others who report issues they notice could include? I guess that goes back to the unit tests, but I'm also thinking about priority: how do we know what the testing team should be testing? If it's infrastructure, that's one pile; if it's user experience, that's another pile. And I'm not sure the GTN tutorials are going to cover everything, because they're pretty simple. Anyway, I'll stop now, because everyone has a confused look on their face.

I think it's valuable insight — it's just figuring out how to do it well; it requires a lot of coordination and such.

I'd like to provide something that isn't as manual when I report something I've noticed.

Yeah — maybe we should stop here, but one thought I've had is some simple tags, like, say, "issue detected on Main", and what that tag could really mean is: Jen or someone — the release testing team, or, maybe because it's linked to an infrastructure ticket — should test this after the release, or something. We'd need to figure out exactly how to do that, and a lot of the issues tend to be complex beasts with many different layers, so it's hard to say exactly when an issue is done and deployed and the user experience is fixed or not.

Right. Yeah — I've dropped out of that; I don't know if it's helping or hurting. I did keep track of things — well, I was, until about three months ago when I got overloaded —
— and of things to retest in the new release, just to make sure they were actually fixed. That might be a good use of an issue, for something that's part of Galaxy itself, not necessarily a tool. But, you know, I'm like a year behind on closing them out. I'm wondering if there's a way I could submit that so it would be easier to have it just be done and known, rather than a big long list with screenshots — click here, click that, do that, here's a history, here's a comparison history, here's a...

We should table this discussion for another day, because it would be good to have a way to delegate that to the release team, or to automation, or whatever — but we've probably only got a couple of minutes here.

Okay, so I guess we'll wrap it up here. Thanks to Keith and the team, and for the good discussion. I'm stopping recording.