Okay, we are live. Hello, Justin, in case you're watching this recording: this is a retrospective for the recent Jenkins LTS releases. You may have seen that between Jenkins 2.204.3 and .5 we had a number of regressions. After that, we had a thread in the developers mailing list and agreed that we want to review our processes, discuss what went wrong, and see how we could improve the quality of the next releases. If you're interested, you can find this conversation in the Jenkins developers mailing list; it's here, and you can find all the links from there. And here's our Google Doc. The main objective today is to discuss this particular section. Is that correct? So, Oliver, Rick, Daniel, is that your agenda for today, or do we need to discuss anything else?

Yeah, I guess the retrospective is all we've got to discuss.

So, yeah, we do retrospectives quite differently, but this is open source, and I think we could just discuss the feedback we received. If any feedback is missing, let's just put it at the bottom so that we can reach it during the meeting.

Okay, so the first item we got: improve weekly release quality so that regressions don't get fixed in LTS at the last minute. This comes from the history, because the main regressions which led to the instability were actually integrated early in the weekly cycle, but we didn't notice them until very late in the LTS phase. So for example, yes.

We did notice them as regressions, and we sort of acknowledged them; we just didn't address them.

Okay, yeah, fair point. Let me rephrase — basically what you said: we didn't act on them. Two regressions were introduced around version 2.205, and whatever happened with the backporting process, we didn't really work on them until February. So it was quite late, and if we had fixed them earlier, we wouldn't have had this problem at all.
And for that, I think we should just double down on weekly release quality. One of the items which is definitely missing is a Jenkins core triage team, because right now we do not do regular triage of incoming JIRA issues. All the things get noticed eventually, but no regular triage looks at the issues. So I think that would be one main follow-up, and I have an item to recover this team, but I'm not sure whether this team alone would be enough. Are there any other suggestions or comments?

To me it feels challenging to actually motivate people to be allocated to this, right? Because, let's face it, everybody's got a more interesting project to work on, and we're all busy. So it's very hard to get a share of somebody's time on this, let alone get people allocated solely to it. It's very challenging to keep such a team motivated and doing its job in the long term, as I see it.

I totally agree with you. What I was thinking is that we could have just one contributor, but on a rotation basis. Historically, one year ago we didn't have so many people contributing to Jenkins core, but that changed. We expanded the Jenkins core team, and now we could probably agree within this team that somebody monitors and triages incoming issues for one week, then somebody else, and this is how we could balance the load. Okay. I was monitoring all incoming issues during the time frame of the JEP-200 changes we had a couple of years ago; I don't think it takes too much time, but having one person who takes a look, let's say every day, would be really helpful. The next step would be to actually follow up and define an action plan, because an initial response is one thing, but we also need to plan a fix. But even an initial response, and maybe some highlighting for LTS, would be nice. So for example, what if we discovered this issue?
Right now we apply the lts-candidate label only when we have a fix. What if we applied it at the very beginning of the process, or maybe used another label to highlight that an issue might impact the previous LTS and might generally be a big regression we want to fix? There are several currently open issues that are labeled lts-candidate. Do we need anything else, or are we just relying on these regression or lts-candidate labels?

So the expectation is that we would prioritize fixes on the things that are labeled lts-candidate, correct? Yes. I mean, what would that accomplish? Ideally we want to fix all regressions. Well, that would be a little steep. The problem, especially in this case, is that lts-candidate is for things that are expected to be backported, and the problems we encountered were reported in the weekly releases just after the last LTS baseline. So for its first three months, this would not be a backporting consideration at all, because there were something like 18 weekly releases during which a fix would be integrated into the weekly and would go into the next LTS baseline anyway. So that is not really helpful. I think we should look more at the regression label, because there aren't many of those. I have a JIRA dashboard that shows something like five regressions and ten really popular bugs, based on voters and watchers, and it's super trivial to look at that once a week.

So basically you're saying that the current labeling is fine, and we just need to look at the dashboard, maybe take priorities into account, and that's it, right? Essentially, yeah. I can share my dashboard, or a new dashboard with the same filters, with you. It's really not a lot of issues that are candidates for increased attention. Ideally we want to fix bugs in general, but there's a difference between a bug where the only person who cares is the reporter and a bug that has fifteen watchers and ten votes — and there aren't all that many of those. Yeah, agreed. Okay, so that would definitely help.
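A dashboard like that boils down to a couple of JQL filters. As a sketch — the label name, project key, and vote threshold here are assumptions based on this conversation, not the actual saved filters — two small helpers that build the queries:

```python
def regression_jql(project="JENKINS"):
    """JQL for open issues labeled as regressions, newest first."""
    return (
        f"project = {project} AND resolution = Unresolved "
        f"AND labels = regression ORDER BY created DESC"
    )

def popular_bugs_jql(project="JENKINS", min_votes=10):
    """JQL for open bugs with significant community interest,
    ranked by votes and watchers as discussed."""
    return (
        f"project = {project} AND type = Bug AND resolution = Unresolved "
        f"AND votes >= {min_votes} ORDER BY votes DESC, watchers DESC"
    )
```

Either query can be pasted into JIRA's advanced search or saved as a filter backing a dashboard gadget, which makes the weekly once-over trivial.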
And yeah, the open question for these issues is who would fix the defects. But maybe with the expanded Jenkins core team we are in a better situation than before. At least in the case of these issues, when I highlighted them, JC fixed this one almost immediately. This one, yes, was basically fixed by the maintainers, but we also did some analysis and testing quickly, and that happened only once these issues were highlighted in the discussions. So if we highlight them early, maybe that will be enough. Okay. Anything else about that, or should we move to the LTS process part? That would be the most important thing in this discussion.

Is there an action item to create the dashboard, for the core team or whoever has free hands to help? I can take that. Okay, let's do that. So it's basically to publish the dashboard; you already have it. I have some more stuff in mind; I will just create a dedicated one. Okay. I also have some queries I used to run on a regular basis before, so we can expand this information. And right now we have documentation in progress in the Jenkins repository, so we could just add all the links there. There is a pull request for the maintainer guide — you need to go to the second page to discover pull requests which are two weeks old. So I think we can just put this information in the maintainer guide without spending too much time on additional documentation right now, and then we can expand it. Thanks, Oliver. I put action items there. Maybe I will even embed the dashboard on the wiki page; it's possible with Markdown. Is it possible with AsciiDoc? I'm not sure how GitHub would render that. Yeah, do some experiments.

Okay, so the next item is about discovering issues. One of the reasons why we had these issues was backporting — it was related to the backporting of 5738. It was a relatively minor regression; well, I don't consider it minor, but it was reported long ago and didn't have so many votes.
But we backported it, and in parallel with that we also backported the entire Winstone upgrade. There was a comment saying, yeah, I will mark it as LTS candidate, but it needs special backporting because of the regression in 5.6. So basically, instead of a custom Winstone version, the entire Winstone upgrade was backported. Again, there is no blame at all, because there were multiple reviewers who were supposed to take a look and everyone missed it. It's something we just need to review. So, what would help us discover such comments and act on them?

Yeah, it was absolutely a misunderstanding on my side. When I reread it after you pointed me to it, it was absolutely clear what you meant. I misunderstood and thought that 5.6 was sufficient — I guess that's exactly the version where the issue was reported — so we hadn't bumped as far as we needed to. So yeah, that's a misunderstanding on my side. It's a bit challenging: I do due diligence, actually reading all the comments and screening all the JIRA issues before something gets backported, but obviously this one slipped through. So I'm definitely open to a discussion about what to do to make this easier to spot and harder to miss.

My suggestion was to just add another label — a quick-win solution. Definitely, your proposal about pull requests is also something we need to discuss, because it could help get more reviews. But what if we just added labels there? Would that help you, Oliver, or would it also be a problem? Yeah, it would definitely serve the purpose of signaling that extra care needs to be taken, but the thing is that there is still the human element that, as we know, can fail us, right? So I'm wondering how to do that. I'm trying to look at it from a different perspective, because in the vast majority of cases, the fixes I backport are ones I haven't authored.
And a lot of the fixes I backport were actually made by folks who do the reviewing anyway and would probably be interested in reviewing the backports too. So how about getting the owner or the assignee of the issue that is being backported to actually review it? I understand this puts a burden on other people who would be expected to review the backport, but they understand it best, including all the implications and the things that might be very hard to share in plain English.

Yeah, that would definitely be important. And in this case, it's definitely a mistake on my side, because I was the one who submitted the pull request to bump the dependency, I was the one who marked it as LTS candidate, and I received a notification when it was backported. I was basically busy with other stuff, so I didn't really review it. So if we could ensure that there is an additional review notice, and that the reviewers really do review it, that would be nice. I was thinking that the best way to do that is through pull requests, and your comments here are basically the same: do backporting through pull requests. The question is how, because your preference is to have one bulk pull request which does the initial backporting, and then a number of minor ones if we need to pick something later.

Yeah, my proposal was that we, as a community, would cultivate the content of a single pull request, and when we are all okay with what's in there, we just merge it as it is. So the work in progress would be isolated in that single pull request — that's what we're working on right now — and the moment we merge it represents the agreement that this is what goes into the next release. Obviously, if we forget something or discover something later, we would just create a follow-up if we absolutely need to. My proposal was that it would really be a single PR per LTS branch.
But then Daniel had some corrections and good points. One problem is that when we want to remove content, the previous reviews become outdated. Yeah, that's definitely true, but at the same time it's better than the current state. Well, the current state is directly committing to the branch after locally applying backports, right? I think that's a false comparison, because another option would be to have per-issue, per-feature PRs into the LTS branch, next to the others. Since we introduced the into-LTS label, which we can always rename, it would be easy enough to distinguish LTS-related backport pull requests from the others. I don't really see the benefit of having one bulk pull request per se, although Oliver makes a good point that it would allow us to basically already have the LTS work in a separate branch, because this would probably be a PR based on a branch in the origin repository anyway. So we would have the actual final LTS branch and deliver from another branch into it. So I can see it going both ways.

Another advantage of this approach is that it requires less time, because if we start creating multiple parallel pull requests, then first you have to resolve potentially many conflicts if they happen, and then whoever does the backporting has to spend more time submitting pull requests separately. So it adds some overhead. I think if we want to start with one pull request and learn from how the reviews there work, that's fine with me. And depending on how much time Oliver — or whoever would open that pull request — has, there's a fair likelihood that we would have most of the content in place towards the beginning of the backporting window, which would allow someone interested in reviewing to submit a review even after it was merged, so we can always amend the branch as needed. Yeah, I don't mind being the one creating the branch, both initially, so we can test how this really works, and even in the long term.
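Preparing that bulk branch is mechanical enough to script. A minimal sketch that only builds the git command list — the branch names and the `origin` remote are assumptions, not the actual process; `-x` makes each cherry-pick record where the commit came from:

```python
def bulk_backport_commands(lts_branch, work_branch, commits):
    """Build the git commands that prepare a bulk backport pull request:
    branch off the LTS branch, cherry-pick each fix, push for review."""
    cmds = [
        f"git fetch origin {lts_branch}",
        f"git checkout -b {work_branch} origin/{lts_branch}",
    ]
    # -x appends "(cherry picked from commit ...)" to each message,
    # keeping the backport traceable to its weekly-release commit
    cmds += [f"git cherry-pick -x {sha}" for sha in commits]
    cmds.append(f"git push origin {work_branch}")
    return cmds
```

A wrapper could run these via `subprocess` and stop on the first conflicting cherry-pick, leaving the conflict for the human doing the backport — which matches the "90% of the time it would work" expectation voiced later in the meeting.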
I presume this all becomes semi-automated, so even the pull request creation — with some template, mentioning the necessary people, labeling and whatnot — can be done with automation. So I don't expect that to be much of a problem, and I definitely agree to start doing this. I guess the current core pull request reviewers will be fine dedicating some time to reviewing, and we can request reviews from whoever contributed the fixes. I believe most of our patches come from people who are members of the Jenkins GitHub organization, so we can just request reviews from everybody directly. Yeah, that's right. Perhaps a bit of a challenge is that we backport based on JIRA, so we have the JIRA IDs and access to the commits linked there, but actually identifying the particular GitHub handles of the users will require some mapping — a challenge to overcome.

So, right now, if you go to the jenkins.io repository, you can see that, for example, for the weekly changelog YAML, we updated our tooling to include the authors of pull requests. We just don't expose them in the changelog yet, but we can use this metadata — for example in a backporting tool or anywhere else — to generate a list of review requests, if that helps; or you can just look it up, because this information is injected for all the recent releases, and we will be improving the visibility of contributors later. So yeah, even if this is a manual step for now — a manual lookup of the usernames of those who don't regularly contribute — I still agree this would help the process. So if we could start from there, then, depending on your preference and the initial feedback, we could switch to this approach. It definitely adds more overhead, but if you feel confident about it we could try it out. Just starting from here would be a great step for me.
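The mapping step described here — from the set of backported pull requests to the GitHub handles to request reviews from — could look like the following. This assumes the changelog tooling exposes entries with a PR number and its authors; the field names are hypothetical:

```python
def reviewers_for_backports(changelog_entries, backported_prs):
    """Collect GitHub handles to request reviews from, given changelog
    entries shaped like {"pr": 4321, "authors": ["some-user"]} and the
    pull request numbers selected for backporting."""
    wanted = set(backported_prs)
    handles = set()
    for entry in changelog_entries:
        if entry.get("pr") in wanted:
            handles.update(entry.get("authors", []))
    return sorted(handles)
```

The resulting list could be fed straight into the bulk backport PR as review requests, since most authors are organization members anyway.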
I would have thought the second option would be lower overhead for reviewers, as you're normally just reviewing quite a small change, and it gets merged as soon as it's ready; but it's possibly more overhead opening all the pull requests. It depends on how much there is to be backported. Well, it's mostly overhead for those who are doing the backports, because you have to prepare all the branches and submit all these pull requests. Technically we could automate that, but automating it would also require some time. Ninety percent of the time it would likely work, but it's still work; it's complicated, but we could try it — I'm just not sure about doing it now. And one note: Tim, you said that you don't want to block, so basically fast integration, right? You don't want to go through the usual 24-hour wait, or do we want to apply that for LTS? Yes — I wouldn't have thought there'd be a rush unless something needs expediting. Why would there be a rush? This is the large PR for the LTS, isn't it? Yeah, for the large PR I don't think there would be any rush. Okay, so it's a detail; I can just note in our documentation that the usual merge-delay guidelines do not apply here, and that's it.

Are there any objections to giving this a try with the .2 release, for which backporting would start next Wednesday — sorry, next Thursday, I guess? Sounds fine. I'm fine as well. Okay, so I'll take that, and I think we can just start from there. And the next question: since we switched to pull request reviews, do we need additional markers or anything if additional contributors take part in the pull request reviews, or do we just rely on this process and not do anything with JIRA for now? Sorry, I'm not sure I'm following you — you mean, if we change this on the GitHub side, will the changes somehow propagate to JIRA? No: do we need any changes in JIRA in parallel with what we do on GitHub? Well, I don't think so. I guess at the time we merge the pull request, somebody would have to go and change the labels from lts-candidate to fixed-in-particular-version, which I guess is where my tooling integrates. Again, since I'm the LTS lead, I'm fine doing that; I've been doing it before, so it just slightly changes the time when it's done. But actually, you got me thinking that perhaps it would be good to somehow indicate in JIRA that an issue has been put into the pull request, so the owners, the original contributor, and the people watching would be aware that we are considering it and can be part of the discussion — otherwise it would be detached. So yeah, that's a good point.

On what differentiates pull requests from issues: there's already the version-number-dash-fixed label, right? Because if there's a single pull request collecting all the backports, there's no question that it will be merged, and at that point it's just a sort of reviewable LTS branch itself. But I think the question in this section, Oliver, is more about whether we need a special indication — for example, if we can already tell in advance that backporting a specific LTS candidate means you will also need to pull in something else. Like the suggestion here that Oleg had: mark non-trivial backporting. Recently, I remember I nominated Tim's read-only job configuration page change as an LTS candidate, and there are linked issues that would also need backporting if that were accepted. Yeah, I was referring to that. My suggestion — probably not a very systematic one — was that there would be more eyes on the process, so these kinds of issues are less likely to slip in. I don't have a strong opinion on this; I sort of suspect that even if we add this label, it doesn't necessarily ensure that whoever does the backporting doesn't miss an important piece of information somewhere in there, right? So this would have to be very structured — like, "if backporting A, then B needs to be backported as well" — and perhaps we can somehow encode that, either in labels or links or whatever, and then have automation that keeps us accountable. When I send the announcement, there is automation that verifies that the RC bits are deployed before I send the email, and there is no way the email would go out if the bits are not already there. I guess we can do something similar here: if we say that when A is backported, B needs to be backported as well, we can build automation around that which verifies that these preconditions are all met. That's something I can imagine. But if we essentially just say "this is non-trivial and care needs to be taken," we are again back to relying on humans to do the right thing. Yeah, right, but at least we'd have highlighted that it's not trivial, so maybe we go through the comments and flag that these issues require additional care before backporting. Yeah, no objections from me. Okay, I'll probably just add it then. Even if it's not applied consistently, when somebody sees it, we can at least refer to the discussion and maybe make sure that Oliver is involved in reviewing the final pull request.

So, quick question: is there any situation in which backporting would be considered non-trivial that is not related to other issues in JIRA? Meaning, could we not just link issues and require that linked issues surface during backporting? Yeah — imagine we didn't have all these issues with Winstone, but we just had this one issue, which still requires a full version bump of Winstone. I would say that even with no reported issues against Winstone, it would have been a non-trivial backport, because it's a jump of four Winstone releases, which includes something like eight or nine Jetty releases. So even without reported issues, it would require additional care. But if you look at the original issue, rather than what we discovered late, what went wrong in 2.204.3 is that it introduced a regression through the use of Winstone 5.8 instead of 5.9 — or 5.6 instead of 5.7. Yes, this one. And that could easily have been a linked issue in JIRA that basically says this issue caused the other one. I'm not saying the label is wrong, but it seems like we'd be introducing a less useful alternative to linking issues specifically for this purpose. Maybe this is what we need to make it super visible, but, Oliver, I don't know what the state of your backporting automation is, or whether you do everything manually — I would say that as soon as there's a linked issue, if you have a script or something, that should be surfaced and need separate acknowledgement, perhaps.

Right. Currently, when it comes to linked issues, we basically rely on humans to spot them, partially because all the link types seem to have different semantics from what we're discussing — none of them explicitly says "if A is backported, B needs to be backported as well." So I like this proposal, but I would probably vote for having a different link type, so it can even be machine-processable that this applies only to backporting. Because otherwise, yes, we can obviously improve: I have this thing that lists all the important details for the candidates — what is backported, what's not, how old it is, and so on — and it can definitely be extended so that it says, okay, if we backport this, there are these other issues that were caused by it and need to be backported as well. But that would again require a manual review of those issues, because not everything that is linked with the "causes" link type necessarily needs to be backported, in my opinion. Right, but you will need a manual review anyway, even with the non-trivial-backporting label. The non-trivial-backporting label tells you, "hey, something's going on here, you need to be very careful when backporting this," while the "causes" relationship says, "hey, you need to be careful when backporting this — look at this other issue to figure out what's going on." So the second has higher fidelity; the only difference is you cannot do a nice search with the second one, whereas you can get the first one in a filter. Yeah, I misspoke — I understand, and I agree that this proposal is one step better than the non-trivial-backporting label. Great. So, to clarify, I'm not saying we shouldn't have non-trivial-backporting as an additional indicator; that might still be useful. What I'm saying is that any tooling or process we apply during backporting should look not just for the label but also at linked issues. Obviously the semantics are sometimes not great, but a backport candidate having a linked issue probably means you should take a look at that link relationship and the linked issue, even if it's not the "causes" relationship specifically. And if it turns out to be no big deal and irrelevant for backporting, then we've spent twenty seconds really well, ensuring that there's nothing going on. Yep, I agree that would be an improvement. Okay. So: ensure that issue links are created for causal relations during issue triage — is that what you said? Well, the details are part of the link types — sorry, what was the question? I just added an action item; is it what you said, to just create links so that somebody takes a look at them? Right, there are two parts to this. Ideally, when we investigate issues, we create the causal relationships — that's one half of it. The other half is that Oliver, or whoever else is doing the backporting, considers these linked relationships as potentially important metadata to inform how things should be backported, or whether they should be backported at all — because an LTS candidate that is marked as causing an unresolved issue is probably going to be a rejection right there. Okay, so something like: consider labels — those are good points, Daniel — causal relationships, and the non-trivial-LTS-backporting label. Yeah, okay. I'll see whether we could mark some more issues as non-trivial LTS backporting; we definitely had some historically, for example all the remoting backports and other things. Maybe we'll have some for the next LTS, though I doubt it — actually, it's good if we have no such issues. Let's see.

Okay, should we move on? We have something like fifteen minutes left. The next item is getting more users involved in Jenkins LTS release candidate reviews and testing. I believe reviews are mostly addressed by the pull request topic, but testing is still a problem, because all the issues we had could have been discovered by RC testing, and that definitely didn't happen. So I was thinking that if we could somehow improve the visibility of release candidates, we could get more users testing them and providing feedback. Does that sound possible, or is it just wishful thinking? It can't hurt to expose release candidates more publicly and not just rely on the limited audience of the dev list. The problem with the issues we experienced here, specifically, is that they require very specific setups and are basically impossible to catch in what I would call testing setups. One was extremely long configuration forms, typically cloud configuration: even if you have a cloud configuration with five different cloud templates configured, you won't hit this — you need many dozens; you need to click really many buttons to trigger it. And the other issues were the wildcard certificates — also probably not something your test environment has — and a very narrow and not recommended reverse proxy configuration. So maybe I'm underestimating the complexity of test environments, but most test environments, like most production environments, wouldn't catch these — high-ninety percent of production environments wouldn't. It's not even a test environment problem; it's just edge cases in configuration. Specifically in the reverse proxy Host header case: if you configure the host headers in one order it's broken; in the other order it's fine. That was an annoying one. And in the totality of all Jenkins users, these still occurred plenty. So I'm not sure how much this would help — the idea is good and recommendable and we should be doing it, but we shouldn't expect miracles; that's what I'm trying to say. Yeah, I agree. We had some regressions in previous versions, not this one, and some other regressions would be more discoverable — who knows. Maybe with Configuration as Code and so on, there will be no real difference between test and production environments in many companies. But let's see — yeah, I want to see you make a form greater than 200k with Configuration as Code installed, no way. But yes, Configuration as Code would mean you wouldn't catch it either; we don't go that way. Exactly. Still, in general it would be nice to promote release candidates a bit, maybe just for visibility on the download page. I was also thinking about tagging a GitHub release, because right now, if you subscribe to the jenkinsci/jenkins repository, you get a notification, and there is a number: 914. That's the number of GitHub users who are guaranteed to receive the notification, and it doesn't include those who just watch releases, so I'm not sure what the exact total is. But if we pushed it through GitHub releases, additional notifications would go out. Yeah, great. And Twitter, whatever — it's quite straightforward; somebody just needs to do it. But when it comes to this, we would be required to publish the release at the time we start the backporting, right? Well, if you were really putting the release candidate there as a release, you'd use the pre-release button in GitHub: you set the pre-release flag and mark it as an RC, and it gives a very clear warning that this is a beta or pre-release.
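Publishing the RC that way is a single call to the GitHub Releases API. A sketch that only constructs the request, without sending it — the repository, tag name, and release body are placeholders, while the endpoint and the `prerelease` flag are real parts of the API:

```python
import json
import urllib.request

def rc_prerelease_request(owner, repo, tag, token):
    """Build the POST request that would publish an RC as a GitHub
    pre-release, notifying users subscribed to the repo's releases."""
    payload = {
        "tag_name": tag,
        "name": f"{tag} (release candidate)",
        "prerelease": True,  # shows the "Pre-release" warning badge
        "body": "Release candidate for LTS testing; not for production use.",
    }
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/releases",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
        method="POST",
    )
```

Passing the request to `urllib.request.urlopen` with a valid token would create the pre-release; because it is flagged `prerelease`, subscribers get notified while the release page clearly warns that it is not a stable build.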
okay okay what's the question again whether we wait for the release candidate or go earlier I didn't understand the question no the Olex's suggestion was that we would publish a github release in order to get all the people that are subscribed to notifications so we informed them that there is a release candidate for them to test and my concern was that we would have to basically just create this release for the RC that he was suggesting using draft releases which obviously doesn't send notifications as a raw so yeah I guess with this alpha releases or whatever they call it pre-releases this would not be an issue you just click the button there and that's it okay yeah that makes sense so basically we merged the poll request that we've reviewed and approved and you deploy the release candidate send out emails and github we probably tweet about it so it's reasonable yeah so it's just the best effort to get highlights what team proposed and from downloads it makes sense because we already have incremental releases so we could just add a line or whatever reference and release candidates and other things but yeah right now it's so if somebody is willing to do that it would be great but now we don't so for example here on past releases so we could just release candidates could we move on to the next item I think that's interesting now okay so I guess we will stop here I'm sure that Jenkins LTSRC but for testing time frames don't have too much of a lot so I think it was one of the root causes for example for me personally it was definitely a root cause we had an effort to get a lot of changes towards the LCS baseline including system read permissions including manage permissions plus some web UI things plus back ports and other things and although we historically say that the releases are expected to be several weeks old so here the baseline release type is technically between two to five weeks what we de facto do now is take the most recent release which seems to be 
stable so if you had at least four weeks gap let's say then we wouldn't have overlap between the new LTS baseline and the previous one but right now everything was happening on the same week let me see if I understand your concerns I agree that we have sort of drifted off from these two to five weeks because there was a lot of pressure obviously the developers would like to have latest features in the LTS which I understand so even you would benefit if we would be more conservative in choosing the baselines to understand how that would change anything because the problem is we have the LTS cycle so that every four weeks we work so that it is on a four week cycle and activities related to LTS repeat every four weeks and it's worse for the point one LTS users that's a major version increase but the problem here was actually the point three preparation rather than the point one preparation so that doesn't matter so if we just said for example and you said four weeks if we just moved the LTS cutoff to four weeks earlier then you would have been busy with the LTS preparation getting your fixes into the upcoming baseline four weeks earlier and would not have had time to review and test LTS point two rather than LTS point three so I don't see how that would help anything it's not what I propose so right now the situation so the critical part is here between week two and week four so it's LTS testing and then here's release and next LTS baseline selection so if we say that we follow the documentation on our side then the weekly merge window would be some way here so during back porting or even before but not during the testing right now so what we have we have weekly merge rush week two and four just getting the last minute changes in place and at the same time we have LTS release but basically people don't spend too much time looking at that so for me what would be the preference that week zero is LTS back porting interviews then whatever weekly and then we have time for 
testing.

Could you line up the week numbers with the LTS schedule, meaning weeks zero through 24 on the linked wiki page? Sorry, you are talking about this, right? Yes. So we are here: this is the .3 selection, and this is when the baseline is chosen. Our main problem is that the merge window is somewhere between weeks 20 and 24 now, the very last weeks of .3 preparation; instead of doing .3 preparation, people are spending time on the weekly. For me this is the problem. And even if we strictly followed this part of the process, it wouldn't have been a problem, because weekly integration would be happening somewhere here, which probably collides with other things, but it's better than last-minute merges while we have .3 testing for the last LTS baseline. So, we are going to move on after that, okay?

Yeah, I see it, because we have basically two weeks of activity and two weeks of inactivity in each LTS cycle, so if we did move it two weeks, we would go into the inactive part of the LTS cycle. Realistically we have two options. We implement what's actually documented and require LTS baselines to be slightly older, no exceptions, which means there cannot be a rush, or the rush would be slightly earlier, and we would then also have more confidence in the baseline. Or we introduce an additional delay between LTS baselines and the LTS calendar, so that for example the activity from week 14 on is moved back two weeks, so that basically the LTS .3 is the LTS release that's valid for six weeks rather than just four. Or do you see further options here?

Well, we could obviously just try a .4, so that there is no pressure to integrate all fixes in .3, and if something goes wrong it's not our last LTS release of the line. But I think the first step would be to just follow this two-weeks advance as a default. Obviously we could reconsider if there is something critical, but if we just say that two weeks is our default, there is strong consensus
in the community to improve the situation, could we?

So, when I think back over the past few years at how LTS baseline selection works: we basically look at releases, and many, many times we basically said, well, I like the release from two weeks ago, but if you look at the changelog of the release from one week ago, it's just fixing bugs that we want to be LTS candidates anyway, so we can go one week more recent. At least, I haven't checked, but that's how, in my mind, we got to the very aggressive baselines. Now, obviously 2.222 is an abomination of changes that would typically disqualify a baseline, but there was strong demand to choose it, so we did. So perhaps we can say: don't go newer than two weeks for any release that contains notable features. If the feature release is two weeks old but the two weeks afterwards contain only bug fixes, we can go with the newest release; but if there are features in there, we don't. Something along those lines, maybe. We don't have to note it down and make it the strict law of the Jenkins project to do it like that, but I think that would be a reasonable rule of thumb, to not go artificially far back and just increase the workload for LTS backporting into .1, because then we need to pick up all of the bug fixes.

I'm usually the conservative voice when it comes to this discussion, and I sort of agree; there would be different reasons to be more conservative when doing that. I like the point that Daniel made: no exceptions because of features, but perhaps exceptions because of bug fixes. But at the same time, this is the most commonly heard reason for people nominating something newer than that: usually when people want us to choose an aggressive LTS baseline, it's because of some feature, API, or something, so we would have to say no to a fair number of developers. I'm just pointing it out, I'm not saying that we should do that. There could be a potential alternative if we really need to deliver, for example, 2.222; in this case I believe there was a good justification, because
the features delivered are really important. But, for example, we could have said: since it's important, we agree to go with that, but let's delay the LTS cycle by two weeks so that we have more time for integration testing and feedback before shipping it. So we intentionally slip everything by two weeks so that there is no collision with .3. Would that make sense, or would it create more confusion, especially for users?

Well, it would probably be easier for whoever needs these new features in LTS, because slipping by two weeks is probably better than slipping by twenty. I don't think... I mean, it's just another irregularity around the timing of LTS. We've done that once in a while, usually for more serious reasons. I didn't feel that strongly here, because K is not here to tell us that it's very important not to disappoint people on their schedule; I don't consider it all that important that we release every four weeks. I mean, ultimately there's not a big difference between either approach; the difference is just where we introduce the delay, because both approaches, having a later .1 in the schedule and choosing an older baseline, mean that any features will be at least six weeks old when LTS .1 is released. The difference is simply how old the release would be at the time that we decided.

Another point there: let's presume it was this discussion, right, and we chose 2.222 as the LTS baseline while it was the latest release. To actually postpone everything by two weeks, would that not create a push on the core reviewers to be very conservative in the next weeklies?

It might. I think it would be a natural force to become more conservative when it comes to the end of the merge window for a new LTS, but at the same time people will try to deliver changes, and we are interested in helping them. For me, I would rather accept being more conservative after the LTS than before. But in this case I am confident that going with 2.222 was important, and if
it required some adjustments on my side for the weeklies, it would be fine.

I see. I mean, what Oleg is proposing, that in case there is strong demand because of features to choose an aggressive baseline, postponing the LTS schedule by two weeks, is probably not a big deal for me. Daniel, how do you feel about that, if that's a use case we want to support?

To clarify, do you mean as an exception or as a general new process?

As an exception; as a process we should double down on what is documented, I believe. Yeah, life happens, sometimes features get delivered late, but we shouldn't set the expectation that it's normal. So, for example, for this release we could have decided, as an exception, to delay the next baseline, invest more time in testing, and spend time on the .3. But it's not something we should be doing for every LTS baseline just because people deliver last-minute changes.

I see essentially two options here. One is we really try to enforce the minimum-age rule again, the one that is documented and that we've sort of given up on recently. If we do, we will also have a better time communicating it to contributors; recently, "we always went with the newest one" somehow got transformed into "all right, this is the exact weekly cutoff for your feature to get in". This only makes sense if we're confident that there won't be a lot of pressure during the LTS baseline selection to go with the newest release because one of the features was delayed or something. The other option, for me still a possible one, is to have 14-week cycles with an extended RC period or .1 backporting period; either works. That would make it really easy, because we would basically make the LTS baseline decision earlier in that schedule. We could also, as an alternative, make the LTS baseline decision in general earlier. If you look at the LTS documentation, currently we choose the next LTS baseline on the release of the preceding .3. If we decided two weeks, no, that's four weeks... two weeks
earlier, so just in week 10, or in week 22 when we publish the RC for .3, that would also give us some breathing room for the new release automatically, because the super-late release just would not exist at that point, and we obviously cannot choose a later one. If we go this route and look at older LTS releases only, we are back at the problem we had five or six years ago, when the LTS releases were really badly outdated and even the current LTS line just felt wrong if you were working on master. So I think it makes sense for us to implement one measure, but not more; the question is which is the most appropriate one.

Maybe that's a question for Oliver, because if we move the baseline selection two weeks earlier, basically backporting can start earlier, in parallel with the testing.

No, no, no, I don't think that's what Daniel has suggested; just the baseline selection would be two weeks earlier, and then everything would continue as before. But basically, if you have the bandwidth, you could start backporting two weeks earlier, so it just boils down to your capacity in this phase.

Yeah, I can choose to do that in parallel, but I don't necessarily have to, and we would not necessarily have less time. I actually liked this a bit more than making it a 14-week cycle, because that would feel irregular, making the cycle somehow special; to me this feels a bit easier.

There's one point I tried to make. You've pointed out that a lot of people were rushing to put their new features into a release that they believed would be the LTS baseline, before we have the meeting and choose the LTS baseline, and then we sort of have to choose it as the LTS baseline because so many people have features in it. I really believe that's sort of the root of the problem. I'm a bit on the cynical side here, but it seems like people are expecting, or we set expectations of, what the LTS line will be, and then they're trying to put their features in there. So I understand
they have a desire to actually get their features into LTS as soon as they can, of course, but the thing is that it's very hard to reconcile that with the desire for stability, because everybody would just expect their fixes in the LTS baseline, and then we cannot really expect it to be all that stable. I just wanted to point out that these two things are not really working well together. If we choose an LTS baseline sooner, we would get more time for it to soak and for issues to be discovered, of course, but again, people will have an idea of what it is, and we can expect a spike of features into an LTS baseline because we are giving away an expectation of what the LTS baseline will be.

So what you're saying is we should implement what's documented here, basically two to five weeks old, and just choose a safe baseline. And once we end up ignoring the desire to get super-late features in, once or twice, suddenly the features will arrive earlier, because this is not just a one-way process, right, there's a feedback cycle here. It seems to me like we're currently in the first iteration of this feedback cycle. Previously we went very aggressive because there were no notable features and only a few bug fixes in the latest weekly when we made earlier decisions, so from the outside it looked like we go with a very aggressive choice, and now we see for the first time what the result of that is when there was targeted development towards an upcoming LTS release. Now the question is how that changes the process for choosing the LTS baseline, because contributors who work towards LTS will adapt accordingly.

Yeah, you're right, it's probably the first time that there's so much demand on the feature side for choosing an LTS baseline. And the fact is we haven't even had the first release out yet, so we really don't have the empirical experience of whether it was
good or not. Perhaps it's going to blow up in our face and there will be a problem, and we will learn from that, but for now we don't know. What we can say is that it is an unusually high-risk LTS baseline choice, independent of whether there will actually be problems or not. And I'm saying that as one of the contributors who got their RFE into that weekly; I'm fully aware of the problem. I did not expect some of the other changes, but yeah, it was just too much in the end, I think. Every contribution was sort of okay on its own, but it ended up being more major changes than an entire release, all in this one weekly.

Yeah, but for the record, the LTS itself was fine. We had issues not with the LTS but with the RC we created for .3, because everybody was trying to just get all these changes aligned and we didn't spend enough time reviewing .3.

Yeah, I get that. Do we feel that we have enough data points to tell that this is a consistent pattern, or was it a pattern for the last LTS only?

Well, I think it's mostly the last LTS. We had cases before when we were delivering features, but definitely not on this scale. So if we agree to postpone the decision and get more data, I am fine. But for me it would be important to see what our recommendation for these intervals would be, maybe just removing "typically" or whatever, to have unambiguous merge windows, or maybe inverting this structure to provide more insight for contributors. For example, here we could say that five weeks before the decision time is basically the end of your merge window. I believe that's what's written there, but it's not explicit, and our process definitely didn't follow it. So maybe if we just make it more explicit, it would help contributors, because then they can plan accordingly.

But I talked to one or two people sort of involved in the development here, and they were unaware that this documentation even existed, so however they got their information would need to be changed
as well.

One way would be to just put an additional row here which draws the merge windows somehow. So, for example, here the weekly is chosen, and you can say that your safe merge window ends here; then there is a zone of two weeks where you can probably still get into the release, and then no way, or something like that.

Yeah, it doesn't help that right now we're working in week increments, and I'm not sure I want to go more detailed there, because another problem, or rather the main problem, is that the RC testing period and the development period overlapped. If we basically annotate week 10 with "weekly likely to be chosen as the baseline", that could be good enough. The thing is, I'm not sure it makes sense for us to promote LTS-schedule-driven development; ideally features would arrive when they arrive and get picked up in LTS at regular intervals. So I'm not sure we should make this much more explicit as project documentation. And if someone tells us, I have this feature and I want it to be in the next LTS baseline, we can explain what this one sentence on this page of documentation means for them. I'm not sure we want LTS-driven development to be supported beyond that.

I agree that the two streams should probably be more fluid and independent, so they don't interfere with each other that much, but at the same time it is very hard for individual contributors to manage what actually matters to them. The thing that makes most sense to me is the suggestion to actually make the decision for the next baseline two weeks early. We would immediately get two more weeks of soaking before the .1 is out, and people can, if they want to, still voice their preference to choose the latest and greatest. I mean, we could continue the trend of making these aggressive baseline choices, because it would be two weeks later when it actually arrives. Now, we could try that, or we could just start the discussion earlier. I think, if it works for you, Oliver, I'm happy to go with that. Daniel?
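The two scheduling levers agreed above, the documented two-to-five-week baseline age and moving the baseline decision two weeks earlier for extra soak time before .1, can be sketched as a toy model. Everything below (function names, constants, and example dates) is an illustrative assumption for this discussion, not actual Jenkins project tooling:

```python
from datetime import date, timedelta

# Documented rule of thumb discussed above: a baseline candidate should be
# a weekly release roughly two to five weeks old at selection time.
MIN_AGE = timedelta(weeks=2)
MAX_AGE = timedelta(weeks=5)

def is_eligible_baseline(weekly_release: date, selection_day: date) -> bool:
    """True if the weekly falls inside the two-to-five-week age window."""
    age = selection_day - weekly_release
    return MIN_AGE <= age <= MAX_AGE

def soak_before_dot1(selection_day: date, dot1_release: date) -> timedelta:
    """How long a chosen baseline soaks before the .1 release ships.

    Moving the selection two weeks earlier, as proposed above, grows this
    value by exactly two weeks for an unchanged .1 date.
    """
    return dot1_release - selection_day
```

For example, with a hypothetical .1 date of 2020-04-22, moving the selection from 2020-04-08 back to 2020-03-25 increases the soak time from two weeks to four.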
I think that makes sense. My proposal in terms of implementing this would be that you submit a pull request to jenkins.io that modifies the LTS documentation accordingly, basically write it into the table and rewrite the sentence in question, and link it in the discussion on the dev list so we get some more feedback there. I think that's the most pragmatic approach, and once we have consensus, or nobody brings up any major concerns that seem relevant, we can proceed.

Okay. Can you guys please hit me with an action item on this? My Lenovo display just died on me, so I don't see a thing, sorry.

No worries. We are already seriously over time; do we want to continue, or do we want to move anything to the next meeting or to a mailing-list discussion? I believe we discussed the most critical items.

Yeah, I tend to agree. I don't have the agenda in front of me, so I don't know how many items are left, but I guess reiterating in two weeks would be okay; perhaps we would have feedback from the pull request by then, so there might be more to discuss.

Okay, so just to wrap up: we noted a bunch of action items today; let's just deliver on them, and I believe that alone could already improve the process. So thanks, Daniel, and thanks to everyone for providing feedback; I think we can iterate on that and improve. Guys, thank you very much, I really appreciate the feedback and the ideas. Okay then, see you later. Yeah, bye, guys, bye.