All right, so the two community updates for the day. First, last week there were some really cool progress updates from the working groups. The recording is available and linked in this document, and after the meeting it will also be linked from today's page on the Hub; I'll put it in chat for anybody who wants the link now.

The other thing, and this is open for some discussion of course, is that about six months ago we started the monthly PaperCuts as a way to address small, primarily user-facing issues. At the Montpellier meeting in May, the decision was reached to pause the PaperCuts and revisit the idea later, maybe in a slightly different format. The idea that has floated to the top is to hold a two-day virtual event two or maybe three times a year, where basically everybody focuses on a handful of issues and works on them collaboratively. The plan for organizing this is to take the next month and a half or so to nominate potential issues in a Google Doc, which is frankly blank right now and awaiting contributions, and then in the last few days before the event curate that list down to the ones to focus on. To give this some legs, the idea is to try to hold it on December 12 and 13, a Monday and Tuesday, and have everybody focus on it for those two days. Unlike the PaperCuts, which were meant as a community-building exercise, this is really a team-wide hackathon, and we'll see what the takeaways are. Any comments to begin with?

I'd be curious how we're going to curate that. Largely based on the two-year roadmap and that sort of thing? In last week's presentation meeting, for example, there was a lot of dependency on the backend working group, so it would be good to make sure some of those critical pieces get focused on.

The PIs would, I think, take a look at the nominations at the end, maybe in coordination with the working group leads, and come up with a prioritized list. If there are no other comments or questions, I guess we'll pencil this in as December 12 and 13 and give it a shot; we'll see where we come out the other end. Any other updates from anybody else?

Did the screen capture start sharing correctly? Yes. Okay, cool; I silenced myself before I hit share and then couldn't find the button. Okay, so at GCC this past year we had a Birds of a Feather meeting about the release process: changes and enhancements we could make to it, and general discussion about how we think it's going and how it can improve. A lot of people participated, and that's where most of this came from. My name is on this, but John is going to contribute here to talk about the testing, and I wanted to make sure to acknowledge that.

Starting off with a general overview: the release is organized in the checklist you see right here, this issue on the right, which contains everything that has to happen to get the release done. This is the new one we just created for 23.1. It's public, a document everyone can look at to see where we are in the process, so hopefully this is completely open and people can follow along.
That's the goal: it's open, and people can follow the release progress and see what's going on pretty easily.

Okay, so basically we have some prep where we set up these issues and things like that, and then the first thing that really kicks off the release is the pre-freeze meeting; I'll get to that in a moment. Then we officially branch the release, which is the freeze, so to speak. We deploy it to the various Galaxy servers that the core group controls, and we test it, fixing new and outstanding bugs along the way. We create the release notes, and finally we create the release and announce it. This issue is live right now: two of 60 tasks have been done, so there's a lot left to do for the 23.1 release.

So, before we even freeze, there's what I called week negative one, since week zero being the freeze made sense for everything else. We have a meeting that has historically been committers, but it's really open to anyone who wants to attend, especially if you're working on something that you'd like to get into the release. This is the point where we look through (there's a screenshot here) basically all open PRs that are milestoned for the upcoming release. Right now we have 72 open PRs; these all need to be either bug fixes, merged, or reassigned to 23.2 before we branch. The idea behind the freeze is that it's a feature freeze: no new features go into that branch after that point, so you have a sufficient amount of time to test what's in the branch and get it to people in a well-tested form, without rushing in new features at the last minute. Everyone's encouraged to attend; basically we pull up this list, walk through every single item, and ask: is this critical to the release, are we going to get it in, who's going to do it? And move forward.

For the actual freeze, assume we've gone through that list and all blocking milestoned PRs have been either merged, delayed, or closed. They're rarely closed; we usually just bump them at this point. What the freeze actually entails is making sure all the branches are merged forward, then creating a new release branch, release_23.1 for the next one, and pushing that up (a rough git sketch is below). At that point it's a stable branch and only bug fixes go into it. Dev continues on as normal, but the hope, and the whole point of the freeze, is that with no new features going into the forthcoming release, developers are free to focus on bug fixes and the like for the next n weeks.

After the freeze we immediately update the test instances, both test.galaxyproject.org and the test ToolShed. Ideally these have been updated along the way and there's not a whole lot of gap; we'll circle back to this later. The "issue review" bullet point means that the release managers at this point should look at any security issues, if there are any (we don't always have them, which is, I guess, a good thing), make sure that someone owns those issues, and that they're getting prioritized for testing early in the cycle. And it means making sure that every single bug issue milestoned for the release still has an owner.
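A minimal sketch of what that branching step might look like, assuming the release_YY.N branch naming described above; the exact commands and remote names are illustrative, not the project's actual release tooling:

```bash
# Merge the previous release branch forward into dev first
# (Galaxy maintains release_YY.N branches; names are illustrative).
git checkout dev
git merge release_23.0

# Cut the new stable branch from dev at the freeze point.
git checkout -b release_23.1 dev

# Publish it. From here on, only bug fixes land on this branch,
# while new features keep going to dev as usual.
git push upstream release_23.1
```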
Back to ownership: the point is that someone is responsible for fixing each issue, because otherwise we go a couple of weeks and the bug is still there, since nobody looked at it. And we make sure that the release testing team has been selected. So at this point, everything milestoned to 23.1 is either an outstanding PR that's a bug fix or an issue that's a bug, and all of these things have owners.

Okay, so after the freeze, this timeline is still a little bit flexible; we've been talking about how best to do it, most recently at the technical board meeting, or whatever that was called. But basically, within about a week, we want to deploy the release to usegalaxy.org and to the main ToolShed as well. And then the release testing team starts within that first week. John, do you want to take over?

Yep. Can you hear me? Sorry, my headphones are acting up. So, the team is selected based on a rotating schedule that we've had in place for roughly three years; the team is chosen semi-randomly from that schedule. It's usually five to six members from several labs; we try to have a selection from the main Galaxy labs, and in the ideal case it's a mix of developers, admins, and users. Usually we get mostly developers, which we would like to change in the future. By design, at least several members of the previous teams are supposed to be present at the meetings in an advisory capacity, to ensure knowledge transfer. Sometimes it works, sometimes it doesn't, but at least we try to ensure that knowledge is not lost from one release period to the next. The testing takes about a week, and we try to stick within a week; in terms of time it's three to four full days, depending on the release size. We have two team meetings, one before the testing period and one on completion, and the results are typically presented at the following Galaxy community call. Sometimes we miss it, and in that case we provide a formal report. Next slide, please.

The overall goals: really, we have two. One goal is to uncover any bugs, problems, or issues that have not been discovered by Galaxy's automated tests. The second goal is to open corresponding issues on GitHub and communicate with the appropriate working groups, to ensure that all the critical items that have been discovered receive proper attention and are fixed, or somehow addressed, before the formal release is completed.

The testing plan has been evolving over the past three years. This is roughly what it includes, and these items may be modified for the next release. First, we use GTN tutorials for scenario-based testing: these tutorials encode well-tested and tried paths that a typical user will follow during a typical session, so we use them as guidelines while trying to discover issues. Second, we use the list of primary updates: this is what's highlighted in the release notes, the major features of the release, and we give these special attention and try to test them thoroughly to make sure they really work as expected. We also have the full list of PRs included in the release notes. These are more for loosely defined, requirements-based testing, because this list is usually very long; again, wherever possible, we try to ensure that what's described in each PR actually happens and has not been broken by subsequent commits.
The process and collaboration are straightforward: we have a long list of items in one or more shared Google Sheets and we self-assign them. We share one or more Google Docs for collaboration, usually for item-specific discussions and various notes. And of course we have a release-testing channel on Matrix for general discussion. Next slide, please.

For the tests themselves, we use the deployed release, usually on usegalaxy.org, for the GTN tutorials. We try to note any issues that result from the changes made in the release being tested, and we open appropriate issues on the GTN repo. Most importantly, we try to stay away from the happy path, the path expected of a user who follows the tutorial precisely; we experiment and deviate from the expected scenarios to find hidden issues. The release notes, again, represent the list of new things that should work. We use PRs which come with instructions on how to test them manually; this is done via a new PR template that includes a checkbox, and the PR description provides the specific steps the release team is expected to follow to test the feature. Another candidate for manual release testing is PRs which contain a well-defined UI change or some fix: we verify the item, ensure it is present, and ensure it functions as described. Again we try to stay away from the happy path and come up with edge cases, things a user might do accidentally or by mistake. When the system does not perform as expected, then in the ideal case, when it's possible, we try to verify that the problem did not occur in the previous release. If it is a problem, we open an issue on GitHub and tag it appropriately. So I suppose that is the process as of now. Back to you.

Can I ask a question on this slide? What does the testing of the tutorials look like? Do you just open the tutorial in one window and follow it manually on main, or is there some sort of automation in place, like Selenium?

We discuss automation probably every release cycle, and it needs to be implemented. For now it's manual, and it needs to be addressed.

Automation is going to follow the happy path, right? A big part of this is actually walking through the tutorial and going, oh, that wasn't where I expected it. The automation is going to know where to find the right button; if the user can't find it on the fly, that's the kind of thing we're hoping to catch. You have to have a person in the loop actually trying some of this stuff, at least.

It should be in conjunction with automated testing, but I think the hands-on testing has been super beneficial so far. I agree. And maybe, if we get a developer working on this, they could actually build the automated testing, since there isn't any yet. Yeah, that's the hope and the intention.

Okay. So then, the next week, release testing has completed. Most of the bugs that come out of this sort of thing are destined for the UI/UX and backend working groups; that's just where they end up. Fortunately, both of those groups have weekly meetings at this point.
So for the next couple of weeks, each of those weekly meetings should be focused pretty much exclusively on the state of the bugs for the release: making sure they're assigned, making sure the people working on them have the tools they need to get them resolved, that kind of thing. Developers focus on known and incoming bugs, and at this point we shelve non-release work to the extent that we have to, because there's no point; whatever new feature you're working on now isn't going to come out for months anyway. Also at this point, the Goats group will begin working on release notes; I'll follow up on this again later in terms of the working-group ownership. About a week is allotted for that.

In week three, that should all be wrapping up: bug fixes finalized for both the backend and UI/UX groups. Again, we'll focus on the weekly meetings and use them as the clearinghouse for making sure all the assigned issues get resolved. I think this was pretty successful in the run after GCC; then we kind of dragged it out, but that's something else. And the release notes should be finalized in week three. We have two sets of release notes: the user-facing release notes, which are fully curated, the nice highlights you actually want to see in your email, the really important stuff; and the auto-generated release notes, which we just have to keep regenerating as bug fixes and PRs come in.

And then in week four, we ship it. When all the blocking issues and PRs are resolved and the release notes are fully updated, we're ready to go. I didn't want to dig too far into this, but it's basically all automated: we run the release-creation make target, it makes a couple of PRs, and then the release is official. That's not new, to be clear; it's older tooling that we have to update a little, but the release process has a lot of automation to it, which is really nice, and it's all linked off that issue I talked about earlier. And then we announce it: we make sure the release is listed on docs.galaxyproject.org, that getgalaxy.org links to the right new release, and we make a Hub highlight post, a news item, so everyone can see it. And then we tweet it, post it, and email it to everyone we know. Okay, so that's the overview of the release process; that's how it works.

Now I have kind of a mixed bag of topics: changes we've talked about, changes we've made, stuff like that. So, about 22.09: there is no 22.09. 22.05 was huge and amazing, and it came out so close to when we would have frozen for 22.09 that we decided not to do it, just because of the overhead for both developers and deployers, and because the narrow gap between what would actually have been in 22.09 versus 22.05 wasn't worth it. But it gave us a good point for breaking off and saying: okay, based on our conversations at GCC, we're going to overhaul our naming schema. This came up because every time we had a release that was behind by a month or two, which was not uncommon, as folks know, people would ask why the January release is coming out in March. So part of what we decided was to just break away from the month-based naming scheme; we're just going to use numbers. We still want three releases a year, because that feels like about the right amount of content to ship to deployers and everyone else.
Timing-wise, we have to have a release right before GCC, because everyone's working on features and things like that which they want to get out before GCC. So that point is kind of fixed: May, early June, or whatever, depending on when GCC is, we've got to have a release about six weeks or so before it. Based on that, we'll probably still have something like a January release, a May release, and a September or October release. But now we're not fixed to those particular dates, and we can orient the releases around features. We can have a stability release if we want, without having to predict whether it's going to be four months, six months, or whatever in between. And we can shift them based on events, if there's an event we want to get something out for. It's just a little bit of flexibility.

I was really hoping to have this next part done for this call, but two kids and a wife with the flu, and I think I've got it now, so that's been my week, and it's not done yet. But basically, the big release issue is generated from plain text that gets posted to a GitHub issue, and Mermaid can automatically generate Gantt charts from just that text. From the text you see on the left, it generates this cool chart where you can see visually which blocks of time belong to which things, how it all flows into the release, which tasks depend on other tasks, and that sort of stuff. And we can generate all of this automatically; I was experimenting with it, and it works, and you can embed it straight in GitHub. So for the 23.1 release we'll have this chart at the very top of the issue as a visual picture of the release (a rough sketch of the Mermaid source is below). I hope to have this finished by next week, but we'll see how that goes. It embeds right on GitHub, and it looks really cool.

On release management: we decided at the BoF that we want two committers assigned per release, with the plan to occasionally rotate. Marius has been doing a basically amazing, heroic job for the past several releases, and that's an untenable situation. So we're going to try rotating; this time around, John and I will wrangle things. And this came out of the testing team doing such a great job: if we can find large components of the release, like the testing, to delegate to particular working groups, I think relying on the existing working-group infrastructure will help us a lot. We'll have a couple of point people basically attending all the meetings and making sure the dots are connected; for this next one it'll be John and myself in that role, and we'll see in January how this one went. All the release testing will be owned by the testing and hardening working group, so they will own that whole part of the process. All the release notes and announcements, the tweeting, all that sort of stuff, will be completely owned by the Goats group. Whoever's doing the release management will attend those meetings, talk to them, and make sure they're on top of it.
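As a rough illustration of the kind of Mermaid source described above; the task names and dates here are hypothetical, not the actual 23.1 schedule (the December 5 freeze date mentioned later is used as an example anchor):

```mermaid
gantt
    title 23.1 release (illustrative)
    dateFormat YYYY-MM-DD
    section Freeze
    Pre-freeze meeting       :a1, 2022-11-28, 1d
    Branch release_23.1      :a2, 2022-12-05, 1d
    section Testing
    Deploy to test servers   :b1, after a2, 3d
    Release testing team     :b2, after b1, 7d
    section Stabilize and ship
    Bug fixing               :c1, after b1, 14d
    Finalize release notes   :c2, after b1, 14d
    Ship 23.1                :milestone, after c1, 0d
```

GitHub renders fenced `mermaid` blocks directly in issue bodies, so the chart shows up at the top of the release issue with no extra tooling.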
The point is that the release manager doesn't have to actually do all of the release-note editing themselves, and that's not saying that's what's been happening; the Goats group has been super helpful. This is just to formalize that they're the owners of that part of the process, that the testing is owned by the testing and hardening working group, and so on. And then there's the other component: all of the bug fixing. Marius, you've done a heroic job of just fixing all the bugs yourself in the past; hopefully we can push those out to the working groups and have them distributed through the weekly meetings and that kind of thing. That's the plan. We'll see how it goes.

Another big thing we discussed was using the usegalaxy branch to basically have a rolling release on main. We've been waiting right up until release time to update main to 23.1 or whatever it is; with this, main would more closely track whatever we think is stable in dev. It's obviously going to take more administrative effort, but it should prevent things like pushing something the size of the new history to main and then realizing everything's falling over because of performance issues. We don't have a formal plan in place for this, but we talked about it at the BoF and everyone seemed to think it was a good idea, so it's something we're looking to do sooner rather than later.

This comes at a cost, though, right? Yeah: more administrative effort, and we may deploy bugs.

There's a support burden as well, potentially. If we deploy some new feature that's not well discussed or whatever, people might say: hey, what is this thing, I don't know what's going on.

And it could paint us in a less stable light, right? usegalaxy.org would be more bug-prone and less reliable, because we'd be using it as more of a test bed at this point.

I don't imagine it would run dev. At least, what we talked about was sort of dev minus a couple of weeks, at least: stuff that has already run on test.galaxyproject.org and so on. The main point is, we don't want to deploy four months of changes all at once to main. That was the hope, anyway; it's what we talked about, and we can always change it. The idea was creating the release branch early, using the release branch as what runs on main, and moving things that are deemed ready from dev to the release branch; that's also sort of the week one of post-release.

Cherry-picking particular PRs across? You can do that with smaller stuff, but at any significant scale you're going to want to just merge the whole branch; it's going to be a nightmare to try to pick a handful of branches or PRs that you want to deploy and a handful that you don't.

Another possibility might be an additional, I don't know, unstable main: another DNS name, another entry point that normal users wouldn't know about, but at least we'd be running against main's config. The downside is that we don't really know how it would hold up with traffic.

You could do that, but you couldn't do it with database differences; anytime there was a migration... I mean, at this point you're basically building another test instance. Right. Well, or we make test more useful.
Could we? There's probably a lot of automation that would catch some things, like performance issues and such, right? Does that make sense?

Yeah, well, there is a plan: Simon had a cool poster at GCC about how they use Selenium for testing performance, and there is a plan in the testing and hardening group to try to do something like that. So yes, possibly as part of the release too.

I guess the question is: do we think it's going to be super unstable to try to do this? When we talked at GCC it seemed like we were mostly on board, based on my notes. But could we, for example, make a copy of main's database and run that someplace, but have any new files created in a different place?

That's difficult. It's enormous; it would probably take a day to make a full copy, and it changes every split second. You can do it from a point in time, and we actually have nightly backups, so the easiest thing would be to restore from there, but it's very large.

Yeah, and test doesn't get very much use, certainly not from people with large histories and things like that; that doesn't happen too often there. We need to test like a real production instance for these things.

Could we compare test to main and incentivize people to use test more by giving them a larger quota?

We could potentially do that. Right now it's almost unusable; I think it's 10 gigs. We could certainly up that, although the amount people have on main is already not very generous. I mean, if you're just a regular user of Galaxy, what incentive would you have for using the less stable service, unless it had more quota and more resources than main?

Yeah, why would you, unless it happens to have some feature that's not released yet that you desperately want, but that's going to be rare.

I want to point out that for the most part, when we do database migrations, the current code needs the migration, but it's quite rare that these are incompatible: usually we can upgrade the database and the old code can still run against it, just not the other way around. Yep. So I think having another entry point to the same database and the same object store, just running a different version on the web handler, would probably be fine. It doesn't have the same traffic, though, and I'm mostly thinking about the new-history problems we've had. And it's also easier when we deploy fewer changes at once; I think that's where this came from, question mark.

I think we need the ability to roll out changes for a brief moment to see how they hold up with traffic.

Right, and keep a test instance and production separate. I just get scared sharing data; that's the old model. But I think a lot of places just roll out a few changes, check how they hold up, and then take them offline again.

Do we do this on a Friday afternoon?

Yeah, I mean, the process needs to be there, right? My main concern is upgrading and downgrading the database, and migrations that we add during the dev cycle and then remove because we decide not to deploy them. Even if you do upgrade the database and then run the old code and the new code simultaneously, you have to guarantee that the upgrade works with the old code before you do it, and there has to be a way back from that.
I think there are a lot of problems here too, like tool state, or the way the job directory is laid out. We have so many "TODO: remove this code at some point" and "oh crap, we did this wrong, but it was in Galaxy for a year" sort of things in the code base, and the more often we deploy less-tested code to main, the more I think that sort of thing will crop up.

Okay, but then we need to invest. I mean, I agree: ideally we'd have a good load-testing suite, but it's really complex to build something that would actually catch real bugs, and then we'd need to put some engineering effort into this. I think that's the alternative.

I think it's worth the investment of time. I have no idea how complex this is; it sounds crazy complicated, but I think that's okay if we're doing three deployments a year. So much trust is built into our user base already that this is just really high risk. Maybe I didn't understand this presentation correctly, but usually there would be a staging environment, and all the testing would be done on that; it would mimic an anonymized version of the production database somehow. Again, not a small lift at all; it could take a year to plan for something like this, but it might be worth it.

So, we have test. The problem is that it doesn't see a lot of users. Test we can update at will; there used to be, maybe still is, a big sign on the front saying test is for breaking, and you can update it anytime you want with the newest code. The problem is that nobody really uses it; we even run PR branches and stuff on it. Nobody uses it, and it's very small. So it doesn't surface the issues with gigantic joins and sequential scans and things like that, which work fine on a small database but not on a big one. But yeah, that's making the case for having an instance that runs a copy of main's database, I suppose.

Yeah, that's what's called a QA node: it sits between test and prod, so you have a node more similar to main, another stop point for the code. You would still need to have users actually testing it, though, which is part of the issue here as well. And that's kind of the problem, in the sense that we need reasonable load testing, but in order to understand which critical paths we need to hit to know there are no regressions, you need very intimate knowledge of what the moving parts are and what was changed.

Instead of a rolling release with that minus-two-weeks kind of window, what if we had a mid-release sit-down of all the technical leads, or whoever is invested, and just deployed at that point if we think it's good? Basically, we want to reduce the delta. Any kind of QA node or whatever sounds like a ton of engineering effort that we don't have time for, right?

I guess that depends. The more often you do it, the more stable the process becomes. And if you say we're also going to do a 23.2, 23.3, 23.4, and so on, you can say deployers shouldn't deploy these, but we're going to, and we run through the major things. I don't think our users have a problem if a job fails; I don't think our users care, as long as we don't lose their data.
You know, as long as you have reasonable uptime. And if we run through the tool tests and the workflow tests, we know nothing major is broken. Some user-interface things can break, but yeah, I think that's the alternative. And I think it's safer in the long run. We were saying we don't want to deploy untested code to main, but that's effectively what we do at the freeze, and there we deploy a whole lot of untested code at once. The only difference is that maybe we'd be doing two weeks at a time, or three weeks at a time, whatever. And my hunch is that we can actually do a better job managing expectations, instead of having just two horrible weeks once we deploy to main, by doing something like what we talked about.

I feel like we used to deploy production-level things every two weeks, and I feel like every time we've lengthened the amount of time between releases...

I mean, that was when dev was effectively master, right? It was bad times in Galaxy; I think the stability was terrible. We didn't have the test suite that we have now, so presumably it wouldn't be as bad. But to push back here: I feel like every time we increase the length of time between releases, we actually end up with a more stable product.

From the time of the freeze until we deploy, or until we say it's ready, I agree. But then we get six-month release cycles; that was the...

...which have been really stable. I continue to not see the problem.

Well, I guess what we were trying to untangle in the conversation at GCC was more that, since people can obviously run whatever Galaxy version they want, the first question is: should main wait on the formal release or not?

No. I mean, we've never done that anyway. So we don't do that.

Then: what should the granularity of deployments to main be? Should we do these massive change sets, or should we have a more incremental rolling-release approach? That seemed to be what we were leaning towards at the time. There are definitely concerns with that too.

Yeah, and I wasn't at that meeting, and I'm happy to experiment; it's just that my experience and my gut say it's a step backwards. But I generally also agree that there's a huge risk in this. Because there's also a cycle within this: some people merge, or want to merge, difficult PRs at the beginning of the dev cycle and then spread the follow-up over that time, trying to estimate when it ends and pace themselves; they freeze their own projects earlier, and so on. If we break through and deploy in between those actual development cycles, there's a risk for sure.

Yeah, but we don't want those kinds of developments at all, right? What we merge should work.

Hopefully we can split all these projects into small chunks in the future, but still, some new projects start with bigger PRs, or not bigger PRs exactly, but changes which cannot yet entirely work by themselves.

But maybe those shouldn't be merged. Yeah.

I mean, realistically, pragmatically, we do do what Sam suggests; even people who are saying we shouldn't merge things like that do click merge on them, myself included.

I mean, no, we don't do that.
Well, a little bit. If it's early in the release cycle and we know we have time to follow up and polish and fix, I'm definitely more likely to click merge on something than if it's later in the cycle. And this happens with PRs that grow and grow and grow until you go: okay, well, now we can't pull this thing apart; hopefully we can get it in early in the cycle and then fix it. Right, I've clicked merge on things and thought: well, I need to take an afternoon at some point to go back, write some tests for this, and polish it off. I feel like people who review PRs often have a list of things like, hey, can you follow this up after the merge, and that happens.

Self-merging should be bad, though; someone else should be merging the code.

Well, no, you can't self-merge without an approval.

If someone else approves it and merges it, only then do we have a way to stop these scenarios.

No, but these were other people's PRs that I merged and wanted to follow up on; I'm not merging my own PRs early.

Okay, we could probably talk about this for hours. So maybe, for now: the two-week rolling release cycle we talked about at GCC is way overly optimistic; that's the feeling I'm getting based on the conversation we're having. And actually, based on this release cycle, I don't think we can do it for 23.1. For 23.2 we could try to plan ahead and have some sort of technical-leads meeting, or whatever, look at the state of things and figure out a good point to try to pre-publish on main, if that makes sense. So instead of a two-week cycle, just try to bisect the release at first, roughly, and see if that helps with what we've deployed to main. Obviously we have to pick a point, and if there are important fixes we would continue to merge those into the usegalaxy branch.

And I would add one more thing: I personally think main should be as stable and error-free as possible, because it can be really annoying for people to use otherwise. Of course we do our best, and we've gotten much better, the testing is much better and so on; just to add that it's not a light thing to break main.

It's not. And just to clarify again, the rolling release cycle was intended to reduce the bugs on main; pushing four months of stuff all at once is exactly what we want to avoid.

Yeah, so a longer rolling-release window could be something very much worth exploring. I mean, for me this became a necessity with the new history work: it was impossible to disentangle it from making the production switch to FastAPI, and we had to look at live stack traces to see what was consuming CPU time. We want to avoid that, right? The smaller the changes are, the more focused we can be in looking at what's going wrong.

And the faster you can fix the bugs that came in over the last two weeks, instead of having four months of bugs to fix all at once.

Yeah, and otherwise you've already forgotten what you did. Ideally you have tests and everything so well documented that that's not a problem, but you still need to get back into the headspace, and that takes time. And you have less compounding stuff, so it's easier to find the root cause. That's the hope, anyway.
I don't know; you can totally argue this from both ways, and I don't remember it being quite as bad as John says. But deployers definitely don't want more releases. If you talk to people that run small Galaxies or whatever, they want one release a year, two releases a year, something like that; every time they update something, they have to deal with an issue of some sort.

But that's not us running main, right? That's different. And I think we can reduce the issues that deployers see by doing something more like this. But again, this sounds like a conversation that should continue down the road, and we shouldn't try to deploy to main today or anything. Unless you want to do it, Martin.

Okay, thanks everyone for all the conversation on that; this is clearly something we should think more on. So, wrapping up in the last couple of minutes: we definitely want to have point releases. This is important for a whole bunch of reasons. Basically, when we publish 23.1, we'll have 23.1.0 as the first one, then 23.1.1, 23.1.2, and so on (and we're not doing a 23.0). With each of these point releases, instead of telling a deployer "just pull the latest code from the release branch", we'll be able to point to the specific set of bugs that were fixed between .1 and .2 and so on. So we can tell a deployer: look, to fix the bug you're talking about, you need to update to 23.1.3, or whatever (there's a sketch of what this looks like for a deployer below). There's not a whole lot that has to be built out to do this; it's basically ready to go, and we're going to start doing these with 23.1. Oh cool, I didn't have that much more.

So, open questions on the release scaffold. Currently it includes the Docker release, and it doesn't seem like we've done that in a couple of years now. So I wanted to ask what the state of it is: should we just drop it from the release scaffold? Is the docker-galaxy-stable stuff not happening?

I mean, we've effectively already dropped it. Yeah, that's right. Okay, exactly.

Okay, the other thing, kind of related: should we add the AnVIL Kubernetes deployment cycle to the official checklist, and add owners for that set of artifacts? I would think yes, but okay, I'm reading the chat. Backing up to Lucille: yes, point releases are the same branch, just a new tag on GitHub.

Regarding the version numbers: if you're actually trying to adhere to the semantic versioning standard, it would have to be 23.0, not 23.1.

Yeah, but we're not. I think we decided that we intentionally didn't want to stick to semver, because that's not how our versioning is going to work, so starting with .1 made sense; that's my recollection of the discussion, at least. I know none of it quite makes sense from the outside; it's the Galaxy versioning schema, GVS, that we're going to use. I think it makes sense for us, though. It's not semver and it's not month-based, but it gives us a way to say there are n Galaxy releases a year, probably three to start with, and then there are point releases between those.

Yeah, okay. I mean, you could also just do year and month and point, but the month is not fixed until it's actually released.
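A quick sketch of what point releases buy a deployer, assuming they are tagged on the release branch as v23.1.0, v23.1.1, and so on; the tag names are an assumption based on the numbering discussed above, not a confirmed convention:

```bash
# Update a deployment to a specific point release instead of
# "whatever is currently on the branch" (tag names are illustrative).
git fetch origin --tags
git checkout v23.1.3

# List exactly the bug-fix commits that landed between two point
# releases, i.e. "what do I get by updating from 23.1.2 to 23.1.3?"
git log --oneline v23.1.2..v23.1.3
```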
Yeah; confusing months with release names is bad, that's the one thing we've learned. We'll have something that's the year and then an arbitrary number afterwards that gets reset every year; a year seems to be a much better timescale for us. We can at least hit the right year, except maybe for the last release in a year, but okay.

Let's just use version UUIDs; really innovative: just a big hash.

Should we just vote one more time on whether it's 23.0 or 23.1? Because with this argument... I don't know, for better alignment I might be for 23.0. But anyway, it was not an argument, it was a question.

I mean, until we ship it we can still change it. And yeah, it would be the more standard versioning. Yes, we can vote. I opened a PR this morning to update the existing release from 22.09 to 23.1; if that's not what you want to do, comment on the PR and we can change it. Is that fair? Yeah, of course. Okay.

Okay, so this was a resounding yes: we should integrate the AnVIL Kubernetes stuff into the official checklist and make them the owners of that part. Them... I don't even know who that is; that's a good question for right now: who should be the point person for that, which working group? Okay, all right, I'll follow up with you on that then.

And the last thing I had was the current timeline for 23.1, or .0 as it turns out, based on the PR feedback. We want to freeze in early December to still comfortably release in January, knowing that there are probably going to be two weeks in the middle of that when people are going to be gone. Does anyone have feelings on that? For the freeze, I think I said 12/05, so December 5 would be the anticipated freeze. Does that seem realistic?

That's everything I had. Thanks for all the discussion on this one; I was really hoping this would be more discussion and less just walking through stuff, and it was. I'm going to stop the recording here; we've got a few seconds left in the hour.