Hi everybody, I guess it's recording, so let's start this new Jenkins infrastructure meeting. Today we have multiple things on the agenda; in fact, quite a lot of things.

The first item is the mirror infrastructure and its growth status. I contacted the serverium.com mirror maintainer, and he told me they had some maintenance planned on their storage, so I'm waiting for feedback from them; until then, that mirror remains disabled. Still on the mirror topic, the Helm chart to deploy mirrors is now ready and working. Right now it's deployed on, I think, our Azure infrastructure, mirroring the Jenkins repository. Basically, you just run a helm install pointing at the chart, and it automatically deploys an rsync server, an HTTP server, and a cron job that regularly, every five minutes, synchronizes with an upstream mirror provided as a parameter. Right now we are targeting the OSUOSL mirror, but that can easily be changed in the future (a sketch of what that invocation could look like follows below).

What I would like to do is publish and version that Helm chart, so we can start asking contributors to run a mirror. If they don't want to run it on Kubernetes, that's fine as well; everything is explained in the Helm chart, so they can easily adapt the service. Basically, we are starting to need more mirrors: I had a look at the map, and it would be useful to have more. So I'm planning to start working on a blog post or something like that in the coming weeks, but almost everything is already there. That's basically it; any questions regarding that topic? Sounds like we can continue then.
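As a rough illustration of the deployment just described: the chart repository URL, chart name, value names, and the rsync endpoint below are all guesses for illustration, not the published chart's actual interface.

```sh
# Hypothetical deployment of the mirror chart described above; names and
# URLs are illustrative assumptions, not the real chart's interface.
helm repo add jenkins-infra https://example.org/jenkins-infra-charts
helm repo update

# Deploys the rsync server, the HTTP server, and a CronJob that
# synchronizes every five minutes from the upstream mirror given as a
# parameter (here OSUOSL; the rsync URL is also illustrative):
helm install jenkins-mirror jenkins-infra/mirror \
  --set upstream="rsync://rsync.osuosl.org/jenkins/" \
  --set cronSchedule="*/5 * * * *"
```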
The second topic is maintenance on infra.ci and release.ci. We are using the Jenkins Helm chart in our infrastructure, and we would like to switch to version 3 of that chart. Because there are some breaking changes in the configuration, we need to plan that upgrade carefully. We shouldn't have major issues, but we need to be ready to have those two environments down, considering that we have... yes? I was just going to add: did you want to possibly delay that by a day, based on Daniel's comment about delaying the weekly release? Yeah, that was exactly my point. Okay, sorry. So we initially planned to do that upgrade tomorrow afternoon, but because the weekly release is delayed right now, we'll probably do it on Thursday afternoon. I still have to send an email to the mailing list. It should not affect users, obviously, because it only concerns release.ci and infra.ci, but we still have to delay that upgrade. Any questions?

The next topic is ci.jenkins.io. I was looking at that instance today and I was wondering: couldn't we use the GitHub label filter to stop building every PR for Jenkins core? This is something that Gareth introduced on infra.ci, using the GitHub label filter plugin: basically, if you specify a given label on a PR, we stop building that PR. I think it could be useful for stale PRs on Jenkins core, on the main Git repository. Before I send an email to the dev mailing list, any suggestions here? Oleg, Mark?

I think it sounds like a good idea. I know Tyler has been of the opinion, "hey, let's just close the PRs," but the behavior has not been to close PRs. Therefore, being able to mark them as "this should not be built" at least accommodates current behavior.

So yeah, we have had different opinions about closing or not closing PRs; until now we have never closed a PR, and that's why I'm suggesting this idea. The idea is definitely to stop running jobs on PRs that don't have activity at the moment.

So you want to measure activity? Or maybe we could just stop building everything which is not related to LTS? Interesting, how would we do that? I think that's a keen idea; is there a way to decide which things are related? Yes, for LTS there is an into-lts label; we can just use that. At the same time, I'm not sure whether it provides the desired behavior: does that mean we would stop testing PRs that directly target releases? Well, we have staging. So we are talking about staging now, right? Yes. Then it's not relevant. Okay.

We could use a stale bot, and in the stale bot just disable the closing functionality, so that it only marks a pull request as stale, and then use that label to disable the builds. Yeah, so we would combine the stale bot and the GitHub label filter: the stale bot would apply the label, and the plugin would not trigger the job. Yeah, okay; that's the most straightforward approach. I totally agree with that; I like the use of the stale bot, I think that's really nice.

So forgive my not knowing stale, but it's a bot that looks for pull requests with no activity over a certain period, and then marks them as stale? It's not just a time-based thing: it's not saying a PR has been open for six months, therefore we close it; it's rather that it has been idle for some period. Yes. It has support for marking things stale and support for closing things; I would advise against the latter feature, but yeah, the stale label could be used. I guess the periods still need to be defined, especially for LTS PRs, but that would already be a nice improvement, and we would stop building the same PRs again and again (a configuration sketch follows below).
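A minimal sketch of that idea, assuming the probot/stale GitHub app: the periods, label names, and comment text below are placeholders still to be agreed on, and the exempt label mirrors the into-lts suggestion above.

```yaml
# .github/stale.yml -- probot/stale configuration (values are assumptions)
pulls:
  daysUntilStale: 180      # mark a PR stale after ~6 months without activity
  daysUntilClose: false    # disable the closing functionality entirely
  staleLabel: stale        # label the GitHub label filter plugin would key on
  exemptLabels:
    - into-lts             # keep building LTS backport candidates
  markComment: >
    This pull request has had no activity for a while and has been marked
    as stale. CI builds are suspended until there is new activity.
```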
That makes sense, yep. Olivier, you see indications on ci.jenkins.io that rebuilds of Jenkins core PRs are a significant burden on the processing? The ones I had seen were acceptance-test-harness, which was quite different. You're seeing that even the core PRs, the 60 or so that are open right now, are a real burden on ci.jenkins.io? So I'm looking at ci.jenkins.io right now and those builds are gone, but several hours ago I saw that we were running jobs for the PRs on Jenkins core. Okay, thank you; that's why I was investigating some options. Excellent, and what you're proposing sounds great to me.

Also, the same approach could be applied to plugins by default. Yep. I'm not sure about enabling the stale bot globally, though it may make sense. Basically, I have to see, because using that label approach for Jenkins core is quite easy, since we don't have a lot of configuration to change. If we decide to go with it for every plugin, maybe it would be easier to first convert ci.jenkins.io to a JCasC configuration before doing that. We already have the JCasC configuration for infra.ci; Gareth worked on that over the past weeks, and it's working well right now. I didn't think the JCasC configuration was deployed yet; it's defined and being experimented with, but not yet deployed, is that correct? Yes. Okay, thanks. So sorry, to be precise: we have components for ci.jenkins.io, and we did some tests; Tim Jacomb did some work several months ago, maybe almost a year ago now. We already have JCasC configuration for a lot of components, but maybe not for everything related to ci.jenkins.io.

The next topic is about building a custom Jenkins Docker image with plugins installed. This is something we've been talking about for quite a long time. We have quite a lot of components in place thanks to the work done by Damien to simplify building and testing Docker images, and I think it would be nice to move this forward as well. Gareth made a small CLI to automatically update the plugins listed in the plugins.txt file. I don't know if you can do a quick demo of that; I put the link to Gareth's GitHub repository in the chat. It's pretty simple and pretty efficient.

Yeah, I don't quite have a demo, but I can talk about it briefly. It looks at the update center and tries to pull in what it believes are the updated plugin versions, and it seems to work quite nicely. Obviously, it's quite dependent on the version of Jenkins you run it against, so you can pass in a path to the Dockerfile and it will try to best-guess the version of Jenkins: whether you're using LTS or not, if you've got a version in there, it can extract that out, so it knows exactly which URL to call, and it just updates the file. I'm working on a POC repo at the moment to simulate a sort of Dependabot-style update with a GitHub Action that runs it, which I should have finished quite shortly.

At some point, I'm not sure, the Renovate bot supported plugins.txt. Yeah. I'll run the CLI locally; it's basically also implemented as an action. Do you see my terminal, my screen? Yes. So the CLI that Gareth made is quite simple: you specify the plugins that you want to update and the Jenkins version that you are targeting, and based on that it queries the update center URL and shows you which plugin versions are available for that specific Jenkins version. Then, obviously, we can commit that file to the GitHub repository and so on, and build new images. The binary is quite small, like 11 megabytes, and it could easily be integrated into our workflow (an illustrative invocation follows below).

So this tool basically takes versions from the update center? Yes. Maybe later you could use a version which takes the data from the bill of materials, I mean the Jenkins plugin BOM, when the plugin is present there. But yeah, that's rather a nice-to-have.
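For context, plugins.txt uses the standard plugin-id:version format understood by the Jenkins Docker images. The invocation below is only a guess at the CLI's interface based on this discussion; the flag names are assumptions, so check the repository's README for the real ones.

```sh
# plugins.txt entries use the standard "artifact-id:version" format,
# for example:
#   git:4.5.2
#   configuration-as-code:1.47

# Hypothetical runs of the update and check subcommands mentioned in the
# discussion; flag names are assumptions, not verified against the repo:
uc update --path plugins.txt --jenkins-version 2.263.2
uc check  --path plugins.txt --jenkins-version 2.263.2
```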
I've also added a command called check, which should display any vulnerabilities we know about, and that seems to work. So rather than running the update command, if you run the check command with the same arguments, provided you've got a fairly recent version of the binary, it tries to match everything in the plugins.txt against the vulnerabilities that come as part of the update center. I think I have the logic correct.

So the vulnerability report it produces is based on data from the update center? Yeah; the update center provides a list of security vulnerabilities for some plugins, and then it gives you a couple of regular expressions and a last affected version; well, that's how I'm interpreting it. It had a few issues with Go and the regular expressions, because I think it's a slightly different format, but it seems to be working.

Just to be sure: if I define a plugin that has known security issues and run the check option, it will give me a warning saying, "hey, you have specified a plugin which has known security issues"? Yes, and it gives you a link; the output is in tabular format, so it gives you the ID, the plugin, and the link to the security advisory, if we know about it. What it doesn't do at the moment is fail; I'm going to add an option to make it actually exit with an error in that case, so you could put it easily into a CI system.

Sorry, Oleg, what were you going to say? Yeah, I just wanted to say that it basically re-implements some bits of the Plugin Installation Manager. Yeah. Well, is this kind of thing something that, long term, we might consider putting into the Plugin Installation Manager? I know it would be different, because it's Java instead of Go, but Oleg, you noted there's some potential for this to be needed in our Docker image context.

Well, the Plugin Installation Manager already does similar checks. It can even generate a plugins.txt for you, and in theory it can incrementally update a plugins.txt. The problem with it, yeah: it's a separate tool in a separate language, you also need Java, and you need to run the tool from the Jenkins instance. Not necessarily: we added a flag to also support specifying the Jenkins version, similar to how Gareth did it, so it can also be executed in a standalone mode right now. But yeah, it is a Java tool; it's heavy. Right, okay. Okay, thanks for that (a sketch of the equivalent invocation follows below).
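For comparison, a sketch of the equivalent standalone check with the Plugin Installation Manager tool; the jar name and exact flags should be verified against jenkinsci/plugin-installation-manager-tool before relying on them.

```sh
# Standalone run: no running Jenkins instance is needed once a Jenkins
# version is supplied on the command line.
java -jar jenkins-plugin-manager.jar \
  --plugin-file plugins.txt \
  --jenkins-version 2.263.2 \
  --view-security-warnings
```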
The next topic is the weekly release; as already mentioned, it's slightly delayed and will probably happen tomorrow. After that, the LTS release. Mark, I think you put that on the agenda? I did. It's just a warning that some of the regressions included in 2.263.2 may be significant enough that we'll want to do an out-of-order release. It's not known yet; it's rather just a hint that, over the coming days or weeks, we may decide there is enough in what will be in 2.276 that we want to include those fixes in an out-of-order LTS release. Okay.

And finally, the last topic, which is about the contributor summit. FOSDEM is in two weeks, three weeks in fact, and we are interested in organizing a small contributor summit, basically a virtual one. Maybe Mark can add some more information here.

Yeah, so the idea is that we'd like to take advantage of the virtual format to do a combined meeting: first an initial startup session of 60 to 90 minutes, probably in the European late afternoon, I think sometime between 3 and 6 p.m. European time (Swiss and Belgian time, roughly), where we would invite everyone who will be part of it. Then we break out into separate sessions, with time zones chosen to meet the needs of the people in each subgroup. Those subgroups might include documentation; I think they should include infrastructure and platforms; and several other topics, pipeline, I think, would be a good one. Of course, which subgroups actually form depends on who attends the session and who's ready to assist. After those subgroup sessions happen, and they could happen over a period of a day or two, we would do a concluding session, bringing everyone together for a very fast review of the results.

My intent is to send a proposal for this today out to various places and invite people's comments. Likely dates... oh, go ahead. I just wanted to highlight the reason why we run the contributor summit around FOSDEM: we usually took advantage of the fact that everybody, or at least a large number of people, was at FOSDEM. Because this time it's a virtual event, we still want to take the opportunity to gather together, but we don't have physical constraints, so we can split the event over multiple days. That's why we wanted to identify sessions, and obviously infrastructure will definitely be part of the topics there. That's basically what I wanted to add.

We've covered all the topics on the agenda. Any last thing you want to bring up here? I think we are good then. Thanks everybody, thanks for your time, and see you later. Bye-bye.