Hello, everyone, this is the Jenkins Platform SIG. Today we have Mark Waite, Stefan Merle, and Damien Duportal. Welcome. We have quite a few items on the agenda, so I won't do an agenda of the agenda, but we have a few open action items to start with. Once in a while, something progresses for the benefit of everybody, and today we have such a thing with Mark. The support page for servlet containers, similar to the Windows and Linux support pages, was something you agreed to do, I think, but somebody had to do it. Fortunately, Basil Crow did it, and it's already online, up and running. We can see it on jenkins.io. Right, yeah, and thanks to Basil very, very much. That was a long-running thing. Not only did he add this page, he also added automated tests. We already have lots of automated tests that check that Level 1 servlet containers work, but he added two more: one to check that the Tomcat 9 servlet container at Level 2 continues to work, at least minimally, and one for WildFly 26 — what used to be JBoss. So we've got automation now in the packaging repository that checks Level 1 very heavily, and checks Level 2 at least with a basic check to be sure that things start and are reasonably behaved at startup. That's super cool. It's even better than what we expected. So yeah, congrats, Basil, once more, great work. That's one thing we'll be able to remove from the agenda next time, which is super cool. Damien, now — sorry about that. I already know the answer, but did you get time to check on the Docker image download statistics per platform or version? Not in detail yet, but what we did since last time is check for Arch Linux at least, because we have an Arch Linux variant of jenkins/agent and we weren't sure if it was used or not. No one has volunteered in the past year, year and a half, to help on this one, and it was never published as an inbound agent.
So that means either people are using it without requiring the inbound agent script, or they use that image differently, or no one uses it. And the statistics we saw showed that there is almost no usage of that image at all. The pull counts, at least in the past three months, are fewer than 20, compared to the... Several millions, yeah. Several millions for the agents. So that one would be a candidate to remove from the agents. That would be an argument, though for now we haven't removed it. I'm not sure how to communicate it, because if anyone comes back and complains "I need Arch Linux", that means we have at least one person using it. In the case of the agents, for me, there is a good reason for adding more and more variants. Please don't quote me on the controller image, which is a separate topic. But for the agents, people need to install their tools, so they need an operating system as close as possible to their habits, because the goal is to customize it most of the time. So for me, it makes sense to have different operating systems for the agent images. But that's just an opinion. For the controller, when you use Docker to run a Jenkins controller, it means you don't care about the operating system of the Jenkins controller. You just want to run it so it works, and you care about the command lines or tools that we provide as a project to customize plugins and so on. Outside of that, I don't see a reason for having multiple versions. For me, Ubuntu, maybe even Alpine — but even that, for me, is the opposite of the spirit of packaging with Docker. In the case of the agent, it depends on the kind of tooling you have. That's why. But right now we don't see a lot of downloads of the Arch Linux image, so we should remove it. Globally, the download statistics are only available to the administrators of the Docker Hub organization. It could be interesting to see if we can have a way to extract and anonymize some of this data, or see what can be made public.
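For reference, part of what is being asked for here is already public: Docker Hub's v2 API exposes an aggregate pull count per repository without any admin access (though not the per-platform or per-tag breakdown discussed above). A minimal sketch, assuming the hub.docker.com/v2/repositories endpoint and its `pull_count` field:

```python
# Sketch: read the public aggregate pull count for an image on Docker Hub.
# The v2 repositories endpoint needs no admin access, but note it only
# exposes a total pull_count, not a per-tag or per-platform breakdown.
import json
from urllib.request import urlopen


def repo_url(namespace: str, repo: str) -> str:
    """Build the public Docker Hub v2 repository URL."""
    return f"https://hub.docker.com/v2/repositories/{namespace}/{repo}/"


def pull_count(payload: dict) -> int:
    """Extract the aggregate pull count from a repository payload."""
    return int(payload["pull_count"])


def fetch_pull_count(namespace: str, repo: str) -> int:
    """Live lookup (network call), e.g. fetch_pull_count('jenkins', 'agent')."""
    with urlopen(repo_url(namespace, repo)) as resp:
        return pull_count(json.load(resp))
```

Comparing that number between, say, jenkins/agent and a low-usage variant is roughly the check described in the meeting, without needing a CSV export from an administrator.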
Maybe they already provide some public metrics that could help us make a decision without requiring an administrator to extract CSVs. Thank you, Damien. Okay, so I'm strongly in favor of deleting the Arch Linux image everywhere — especially on controllers, but agents as well. I just don't see the benefit of it. And even if it had hundreds of users, I don't think we could reasonably continue to maintain it. Now, in terms of where to announce it: in the past, what we've done is write a blog post announcing the deprecation of a particular container image. We are eventually going to have to deprecate the CentOS 7 image, for instance, in part because I hate the Git version that's inside of it. There is no rush, no need for arguments — CentOS 7 is not supported anymore. By no one, by no community, not by Red Hat, for almost a year. Oh, really? Yeah. It's because of the Red Hat shift: they turned downstream into upstream, they inverted their flow of distribution. So either you go to Rocky Linux 8, but CentOS itself has been deprecated, so, yeah. And yet Red Hat and Oracle are still shipping and supporting their versions of 7 — but it's not CentOS. Exactly. CentOS used to be the community distribution, and Red Hat was built on top. Now it's the other way around: what they call CentOS Stream is upstream of their own proprietary distribution. And they made that shift with version 8, a year and a half ago, I think. A year and a half ago it was too soon, because too many people were relying on CentOS 7. Now we can check the statistics as well, but for CentOS 7, alas, the statistics won't be enough, because a lot of people have this old version running somewhere on a hidden machine, which is really scary. What we should do is not delete the image, but stop publishing new versions of it and remove it from the source code. Yeah, I'd like to note that, even if we don't vote. Sorry, go ahead, Mark.
So I'm struggling with the CentOS end of support, because, for instance, on the Google Compute Engine page they declare that in December 2020 the CentOS community... I'll post this page into the comments — if you can open it in the chat, let's see, here we go. So here they say that CentOS 7's end of life is not until June 30, 2024. Now, CentOS 8 is already end of life, and that's already something we may want to act on — I wasn't aware of that. This tells me something new: if we've got a CentOS 8 image, it's already beyond end of life and we should have already dropped support for it. So this is a good excuse to declare — I'll have to check whether we have any CentOS 8 images, but if we do, they are dead, right? Because CentOS 8 is not supported, and we don't support things that the operating system vendor does not support. And even one year ago, it was already a pain in the neck to work with CentOS 7. For example, I had to recompile Git and curl to get recent versions; with the packages, it was not possible to have something reliable and reasonably recent. That's true. I thought that CentOS 7 was already deprecated, so that might have changed, but yes, you are correct — we still have one year and seven months. So we have to keep supporting CentOS 7, or at least, yeah. The discussion, again, is: why does someone need a CentOS 7 image of a controller? Right, right. And Damien, I think you've got the best point of all there. We should be telling people to get off CentOS 7 as a Docker controller base; since Docker abstracts the operating system, they should pick a better one for their controller base image. CentOS 7 is a really lousy choice. So I think you're right.
So maybe what we need to put here is an action item to revisit the platform support materials, to be sure that no containers are delivered with CentOS 8 — or we declare they're going to be removed promptly, because CentOS 8 is not supported. Yep. And additionally, it's a smell. When someone says "I absolutely need CentOS 7", I've seen two main cases. The first one is a security team that needs to scan some binary or distribution, and they try to add an agent inside the image, which is an anti-pattern. The agent should run next to it, on the same container engine, and scan the image binaries — the container binaries — while it's working. So that's the thing we have to say: no, we don't want to support that edge case. You don't need to add another process inside the image; you need the process next to it. That's the good practice in production. The second major use case I've seen is people extending the Jenkins controller image, most of the time because they use it as an agent as well as a controller. And that one must die. Right, that's an easy answer then: stop doing that. I like that. However, there are still legitimate use cases to take into account. In order to use some plugins, such as the cloud plugins, you might need to install additional tools, such as the AWS command line or the Azure command line. But each of those tools can be installed on the Ubuntu image. So for these cases, unless there is something really, really specific — meaning an edge case — either people should volunteer to maintain the image or we will drop it, because that's the key. There is no problem with having all these images if someone volunteers; if there are no volunteers, then we remove them, because it's already hard to maintain Windows, Debian, and Alpine. That's already some work. Right. Nice. Oh, thank you. Sorry — we opened, how would we say, a can of worms. A Pandora's box. Anyhow, the next one is also for you, Damien.
Fix the Docker agent deployment issues. May I say — pun intended — that action item is done. Fixed. Congrats. So the last culprit was a really, really hard-to-see setup inside the GitHub repository. I wasn't an admin, I wasn't a maintainer, so I wasn't able to see these elements. Thanks to Tim Jacomb, who tracked down the issue: there were webhooks and SSH deploy keys to the Docker Hub. Even though we had disabled the Docker Hub automated builds, there was a "Docker Cloud" setup — the legacy Docker build system, which is backed by Jenkins, by the way. That ghost system kept receiving webhooks, building images, and then pushing those images, because it was allowed to through the SSH key. So by removing both the SSH keys and the webhooks on the four repositories — or ensuring they weren't present at all — we were able to fix the latest deployment issue, I hope. On trusted.ci, we haven't seen the error we saw a year ago, where builds that should not have been built were still being triggered based on tags. That hasn't been seen for eight months, so we should be okay. I consider that subject closed. Maybe it will reopen, but yeah — no, we caught the real culprit there. That was not easy. Congrats on finishing — on solving this issue. Nice. The next one is still for you, Damien: open an issue for merging the Docker... oh, sorry, go ahead. I didn't have time to work on it, so you can move on. No problem, it's just a status update. That's okay — so many things to do, so little time. Something I should have told you maybe earlier: I discovered yesterday that some tags for inbound-agent still sport a "preview" label for JDK 17. So we have to do something about that. Yeah, I can see your face — I made the same one when I discovered it. Even on the current images: one was built five days ago with those tags. So yeah, pretty strange. Okay, I'll go check. Yeah, me too. We can pair on that if you want. Stefan, I don't have the link, I don't know why.
The open issue on the jenkins-infra helpdesk about adding JDK 19 to ci.jenkins.io: that issue is almost done. It's on the way to being finished — some updates and some follow-up issues remain, but the main one is up and running. That's typical. May I ask you to update the documentation later with the right link to the issue, please? Yes. Oh, thanks a lot. Damien, once again it's you, winner of the day: monthly base OS updates and autumn cleanup. That's not the only thing that kept you busy, but you worked a lot on that lately. What can we say? Just autumn cleanup — the kind of thing we do, like blowing the dead leaves. It's the season. We already did it, so you can remove that sub-item; it's not needed anymore. Sorry — we're done with the open action items; this is now what has been done. I should have changed the title, made a transition, played a jingle or something. So, it has been done and it's working. The "latest" tag being overridden: as we said, that one is closed now. Automatic tracking of remoting and JDK versions: we have released at least six versions with that system, and one JDK version change — not a major one, a patch change. So that one is okay. Please note that automatic tracking of the JDK is not implemented on the SSH agent yet; it's only in agent and inbound-agent. Which is already cool. Yes, and a lot of fixes as well — we broke some things. We also added time zone support, which was missing on the agents. Yes, and there was a contribution that had been waiting for a year on the Windows images, which now support JENKINS_JAVA_OPTS and the related environment variables. Be careful with the quoting, because PowerShell interpolates variables. That allows you to fine-tune things and to specify JVM options for your agent process — if you want to limit the amount of memory on a Windows agent, for instance. Cool, thank you. Now, experience with Windows Server 2022, working with the Jenkins infrastructure to build these images. Is it you, Hervé? Or not? I don't know.
That's a topic for the whole infra team in December. Okay, I see — before Christmas, as a gift. How kind of you. Experience with JDK 19: the transition is complete. I think we already saw in the last meeting that we could get rid of that item. Yeah, you can delete it. Thanks a lot, it's done. Next: Stefan created the ticket to track progress on the Java 19 support for developers. So it's progressing. Which milestone is it — for next week? I don't know. That's the link I sent you. It's almost done. Okay — that's the same issue, in fact. That's why it seemed like something I'd already heard of. Let me check where I am; I'm kind of lost in this big document. So I can get rid of that, because you already put the link earlier in the document. Now, the container image deprecation — why is it there? We should have put that in the open action items; maybe I'll move it later on. We discussed it a little bit the other day, but we haven't done anything yet, I guess — or can you prove me wrong? I've done nothing on this. The ideas are good, I like them, but I've done nothing on it. Okay, so I'll move that to the open action items. Now, again, something for you, Damien — I already know the answer — container repository management for Jenkins agents. So the progress towards unifying repositories hasn't changed much lately, am I right? No — let me write this down. I've confirmed that the jenkins/agent image is widely used by people using the Docker cloud plugin. That plugin allows three modes for the agent: SSH; inbound, still named JNLP in that plugin because the plugin is up for adoption; and a third mode that seems really, really weird. We had an issue from a user saying it left a zombie process on the Linux machine. It seems that it starts a container and attaches to the container to get information.
So that means it creates a virtual, non-interactive TTY to communicate between Jenkins and the container on the same machine. That doesn't sound like a good idea, and of course it creates a lot of issues in terms of lifecycle. But the documentation of that plugin recommends using jenkins/agent for most of these use cases. I've counted at least eight users who opened issues in the past three weeks saying "oh, we are using jenkins/agent", and I've started asking these users whether they would volunteer. Two of them — maybe three, including someone on IRC — realized that the difference between jenkins/agent and jenkins/inbound-agent is only a shell script. So right now I'm thinking about this: jenkins/agent should be used as, let's say, the common foundation for both inbound and SSH. That means there will be a major change that might break the use cases of people on jenkins/agent: we should not ship the remoting agent in the foundational image, because you don't want the remoting agent.jar file inside the SSH image. Yeah, I see — it's only for the inbound agent. So, do I understand correctly that the actual Docker agent image is not supposed to be used as is? Yep, exactly. Okay. And the shell script that is the main difference between the Docker agent and the inbound agent — what does it do? What does it contain? It implements the logic by which you pass either parameters or environment variables to tell the inbound agent which controller to connect to, what the secret is, and a few other things, such as the JDK to use for the agent process, or the Java options to add if you want to control the memory of the JVM process. So it's only a way to make things easier, so that any plugin — such as the Kubernetes plugin — or anyone starting permanent agents with an automated system can control the behavior using environment variables that are the same everywhere and that are well documented. I see, thank you. So the unification of all these three repositories is a long-term job.
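As a rough illustration of what that shell script does, here is a hypothetical Python rendition of the precedence logic — positional arguments winning over environment variables, then everything assembled into a `java -jar agent.jar` invocation. The variable names (JENKINS_URL, JENKINS_SECRET, JENKINS_AGENT_NAME, JENKINS_JAVA_BIN, JENKINS_JAVA_OPTS) follow the inbound-agent documentation, but treat the details as assumptions, not the actual script:

```python
# Illustration (not the real script) of the inbound-agent launch logic:
# explicit arguments take precedence, environment variables are the fallback,
# and the resolved values become a `java ... -jar agent.jar` command line.
from typing import Optional, Sequence


def resolve(arg: Optional[str], env: dict, key: str) -> Optional[str]:
    """Prefer an explicit argument, fall back to an environment variable."""
    return arg if arg is not None else env.get(key)


def build_command(env: dict, args: Sequence[str] = ()) -> list:
    """Assemble the agent.jar invocation (paths and names are illustrative)."""
    secret = resolve(args[0] if len(args) > 0 else None, env, "JENKINS_SECRET")
    name = resolve(args[1] if len(args) > 1 else None, env, "JENKINS_AGENT_NAME")
    url = env.get("JENKINS_URL")
    java = env.get("JENKINS_JAVA_BIN", "java")   # which JDK launches the agent
    opts = env.get("JENKINS_JAVA_OPTS", "").split()  # e.g. -Xmx for memory caps
    cmd = [java, *opts, "-jar", "/usr/share/jenkins/agent.jar"]
    if url:
        cmd += ["-url", url]
    if secret:
        cmd += [secret]
    if name:
        cmd += [name]
    return cmd
```

The point of the real script is exactly this: plugins and automated systems can drive the agent with one well-documented set of environment variables instead of hand-building the java command line.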
It won't be done tomorrow, so don't expect to see a single repo as a first-of-January gift. That won't happen. Thanks a lot, Damien. Now, on to Java 11 — yeah, we're still talking about Java 11, even if JDK 17 works for Jenkins. So Mark, the build toolchain changes arrived in parent POM 4.53. Can you tell us more? Yeah. As part of the transition, Jenkins, beginning with 2.357 in the middle of 2022, switched to requiring Java 11 for Jenkins core. We did the same thing in 2.361.1 for the long-term support release — think of that as step two. The next steps are to switch more and more of the toolchain so that it requires Java 11. Big portions of the toolchain had to continue supporting Java 8 so that you could run, build, and test plugins with older versions. Now, the transition point — the tipping point — is that when Jenkins 2.361.1 or greater is your minimum version of Jenkins, you must use Java 11. And parent POM 4.53, just released a few days ago, says that if you use that parent POM, you must also use Jenkins 2.361.1 or newer as your minimum version. So is all that you just cited reflected in the current documentation? It is, and the document that you just opened actually gives even more information about it. It doesn't describe parent POM versions, because that's an implementation detail, but it does describe the choices and options you have when you pick your recommended Jenkins version. If you scroll up to the top of the page, we can look at some of the guiding principles. One of them is that building against recent Jenkins versions lets you use recent core and recent APIs — Java 11, for example. Next line down: do not use versions that are no longer supported by the update center. The update center only tracks about 12 to 15 months' worth of releases. If your plugin's minimum Jenkins version is older than about 12 months, it's time to increase your Jenkins minimum version.
And the reason for that is hiding in the fourth or fifth item down — that one: prefer releases with few or no implied plugin dependencies. The story here is that when we split a plugin out from core, in order to retain compatibility, we declare an implied dependency on the newly split plugin. That implied dependency means that any plugin depending on a Jenkins version before that release will now implicitly depend on the newly split plugin. In the example here, the WMI Windows Agents plugin has been deprecated — the technology on which it's based is shutting down and no longer available, so it makes no sense for that plugin to continue. However, that means any plugin with a minimum Jenkins version of 1.576, I think it is, or older, has an implied dependency on that deprecated thing. I see — now I understand the message I got recently about that WMI plugin. Okay, it's clearer now. Thank you, Mark. Now, Java 17 support in Jenkins. That's part of what the infra team is going to do in December for your controllers — and maybe on ci.jenkins.io in January 2023. Is that correct? That's a good question. It may be too early. I'm not sure we're going to have the capacity to do that, Damien. For me, I would accept delaying this one in favor of the JFrog work and the other work. I'm okay if Java 17 isn't on ci.jenkins.io for several more months, at least for me. I'm open to your opinions and the opinions of others, but I think the work on Artifactory bandwidth is much more important. That is true, but the effort to move to JDK 17 is now close to none. Because of the work that first Hervé and then Stefan did on the agent images, we now finally control the JDK version used for the agents, which on all of our images is different from the default JDK the developer will want. The last remaining piece was fixed by Stefan today. So the effort should not change our priorities.
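Going back to the tipping point Mark described earlier, the rule — a minimum Jenkins version of 2.361.1 or newer implies Java 11 — boils down to a simple version comparison. A minimal sketch:

```python
# Sketch: decide whether a plugin's minimum Jenkins version already implies
# Java 11, per the 2.361.1 tipping point described in the meeting.
JAVA_11_FLOOR = (2, 361, 1)


def parse_version(version: str) -> tuple:
    """Turn a Jenkins version like '2.361.1' into (2, 361, 1) for comparison."""
    return tuple(int(part) for part in version.split("."))


def requires_java_11(minimum_jenkins_version: str) -> bool:
    """True when the minimum Jenkins version is 2.361.1 or newer."""
    return parse_version(minimum_jenkins_version) >= JAVA_11_FLOOR
```

Parent POM 4.53 effectively enforces the same check at build time by refusing minimum versions older than 2.361.1.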
However, for ci.jenkins.io, it could be interesting to wait until January. What I had in mind was to start with at least infra.ci, our internal machine, use it for one or two weeks, and then start thinking about release.ci and ci.jenkins.io itself. That would be a weekly. Yes, a weekly, of course — the same. Oh, okay. But that means I will have to rebuild the agent images again with JDK 17 as the default for launching. Yes, true. Okay. I've got one last new item. Yeah, go ahead. I'm adding an issue — quite a critical one. For almost a year, the Windows controller Docker images of Jenkins have not been up to date, both the LTS and weekly images. I've added the issue associated with that problem. So whatever Windows Jenkins controller image you pull — Windows Server Core, Nano Server, whatever the variant — you will always end up with the weekly 2.356, which is almost one year old. So if you are using Windows containers and want to run a Jenkins controller there — which is already quite the edge case — then please speak up on that issue. If a person raises their hand and volunteers to help, I'm going to try to give them the information needed to solve the issue. But that means we need help on that part. Otherwise, maybe no one is using that variant of the image, and that raises the question: should we keep maintaining Windows images for the Jenkins controller? I don't have an answer; I'm just raising the question for the health of the community. Yeah, and I think I've got an answer. I think we should propose ending its life, because, okay, Damien, it was June of 2022 when we released 2.356, and there has not been an update since then. And no one has raised an issue except this individual user. Yep — there are two users within... no, one. Sorry. Okay, do you have the numbers, the statistics, for how many times it has been downloaded? That's a good question. Same as for Arch Linux — I'm going to check.
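As a side note on that "I'm going to check": tag staleness can also be read from Docker Hub's public tags endpoint, which reports a `last_updated` timestamp per tag. A sketch, assuming that endpoint and field name:

```python
# Sketch: estimate how stale the Windows controller tags are using Docker
# Hub's public tags endpoint (each result carries a 'last_updated' timestamp).
from datetime import datetime


def tags_url(namespace: str, repo: str, name_filter: str) -> str:
    """Public Docker Hub v2 tags listing, filtered by a tag-name substring."""
    return (f"https://hub.docker.com/v2/repositories/{namespace}/{repo}/"
            f"tags/?name={name_filter}&page_size=100")


def age_days(tag: dict, now: datetime) -> float:
    """Days elapsed since a tag payload's 'last_updated' timestamp."""
    updated = datetime.fromisoformat(tag["last_updated"].replace("Z", "+00:00"))
    return (now - updated).total_seconds() / 86400.0
```

Running that against, for example, `tags_url("jenkins", "jenkins", "windowsservercore")` would confirm whether the freshest Windows tag really dates back to the 2.356 release.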
22? Yeah, we have to check that. I'm open to getting a number, but let's not take a decision yet. Yep. Let's see here. It seems like a reason for us to investigate, to answer the question: is this image valuable enough to the community for us to continue maintaining it? Absolutely. When you want to try Jenkins for the first time on Windows, do the tutorials recommend using a Docker image? No — and where we do recommend a Docker image, we recommend the Linux Docker image, not the Windows Docker container image. I haven't seen anything regarding Windows in the getting-started tutorials and so on. Of course, the tutorial talks about Docker in a too-complicated way, by the way, but there's no reference to Windows. I know — I was supposed to do something about that. We should have put that in the open action items: I was supposed to simplify the documentation, but it will come. Stay tuned — before I retire, I promise. Oh, that was nice. Thanks a lot for this new item, Damien. Does anyone have any other question or item to address? No? Perfect, then — thanks a lot for your time, folks. It was nice to have all of you in this meeting. The recording should be available on YouTube within 24 to 48 hours in the best case. And see you, Mark — I think it's the last one of the year... or no, we still have one Platform SIG two weeks from now, I guess. So yeah, it's too early to say happy new year; that will be next time. Maybe I'll put on a special hat for the Christmas season. But it is not too early to announce that the meeting that would normally be held on December 30th is canceled. Yeah, it is. It's reasonable to assume people will be busy with the turkey, or whatever, instead of coming here to talk about Docker. Anyhow, once again, thanks a lot. See you next time, hopefully, and have a great rest of the week. Bye-bye. Thank you, bye-bye. Bye-bye. Bye.