Hello everyone, this is the Jenkins Platform SIG meeting. We're on August 29th, yes, of 2023. And today we have Mark Waite, Kevin Mosul, Damien Duportal maybe, and Kenneth Salano; sorry if I butcher your names. So on the agenda we have, as always, our open action items, mostly about Docker images. We have some ongoing work. Hey, Damien is there. Cool, because he has a lot of things to tell us about what is going on. We have, oh, a new subject, thank you Mark, about Java 11, 17, and 21 with Jenkins. If we have time, we'll also talk about what has been done recently on the Docker images, and we'll finish with new topics. Maybe we should put the new topics before that one. Anyhow, let's get started. On the open action items, as always: the Jenkins Blue Ocean image is still deprecated, and will always be, I guess. We haven't yet officially announced the deprecation of the image. And even if we don't build it anymore, correct me if I'm wrong, the plugins still live and get updated. By updated, I guess it's only bug fixes, but they are still there and updated. What else? Oh, okay, let's go with the ongoing work, if you don't mind. What I wrote is very concise: image republication. Damien, you're the one I was targeting with this small title. Could you please tell us a little bit more about that? Yes. So we had an unexpected, I could say buggy, behavior of the Jenkins instance in charge of publishing the images to Docker Hub. All the tags from the past two or three years were suddenly rebuilt, with no clue why Jenkins did not filter out these tags, because we have a setup that says: only build tags that are less than three days old. In this case, these tags were clearly, clearly more than three days old, so the builds should never have started. The consequence is that these builds used the former script, which checked the 10 past weekly releases and the three or five past LTS releases.
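The three-day filter Damien describes can be sketched as follows. This is a hypothetical reimplementation for illustration, not the actual trusted.ci job configuration; the function name, cutoff constant, and example dates are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Only tags created less than three days ago should trigger a build,
# per the setup described in the meeting.
MAX_TAG_AGE = timedelta(days=3)

def should_build(tag_created_at, now=None):
    """Return True if the tag is recent enough to be rebuilt."""
    now = now or datetime.now(timezone.utc)
    return now - tag_created_at < MAX_TAG_AGE

# A fresh tag passes the filter; a two-year-old tag must be skipped.
now = datetime(2023, 8, 14, tzinfo=timezone.utc)
fresh = datetime(2023, 8, 13, tzinfo=timezone.utc)
stale = datetime(2021, 6, 1, tzinfo=timezone.utc)
```

The incident amounts to this predicate silently returning True for tags where it should have returned False, which is why no error appeared in any log.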
And it was trying to rebuild and republish all of these tags on Docker Hub, which is not really good news. We saw that because some users opened issues saying, hey, the latest LTS or weekly releases have an Alpine 3.9 base, which is wrong, and we realized, oh my, some of the images for these tags were rebuilt and pushed and some were not. That was a mess and it was really hard. So we decided to republish all three LTS Docker images that were in a partially wrong state. We also republished the latest weekly at that time, and we confirmed that the weeklies since then were also published properly again. We announced that in a blog post last week, if I'm not mistaken, because for people that carefully check the digests of the images, people that are really sensitive to this kind of change, we had to let them know that the change was intentional and that they weren't the target of a man-in-the-middle attack. So thanks for the help on this topic. We are really sorry for the inconvenience and we hope this won't happen again. We applied a temporary measure on trusted.ci for all these tags that were never run: a pipeline with no trigger that just runs echo ok, so now they are all marked as successfully built. We cannot delete these tags, and we don't really understand what happened, because no logs show any error of any kind; all these tags are old. So we don't know why this happened, and I hope it won't happen again in the future. We are still searching for a long-term solution. Any feedback is welcome here, but yeah, it's sensitive and I don't see an obvious fix. Stuff happens; thanks for the explanation, and thanks for the article, Mark. It's pretty obvious now, you know. On the article, actually, I think, Damien, I may have understated in the blog post what the impact was, because Damien, I think you said that it republished many, many container images.
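The way affected users noticed the problem, by comparing image digests before and after the incident, can be sketched like this. The helper and the digest values are purely illustrative, not real Jenkins image digests:

```python
def changed_tags(before, after):
    """Return the tags whose image digest differs between two snapshots
    of a Docker Hub repository (tags present in both snapshots only)."""
    return sorted(t for t in before if t in after and before[t] != after[t])

# Hypothetical digests: two tags were silently republished, one was not.
before = {"2.418": "sha256:aaa", "2.414.1": "sha256:bbb", "2.401.3": "sha256:ccc"}
after  = {"2.418": "sha256:ddd", "2.414.1": "sha256:bbb", "2.401.3": "sha256:eee"}
```

A digest change with no corresponding release announcement is exactly the signal that made users suspect a man-in-the-middle attack, hence the blog post.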
It didn't just republish the three that we then fixed, or the four that we fixed, right? We did the last three LTS and the most recent weekly, or the two most recent. So there are outdated, incorrect container images that are still out there, and we're not going to go through the effort of republishing those older container images, because we don't support them with security fixes or anything else. Therefore there's no real benefit to our users for us to go back and republish Jenkins 2.418 or 2.417 or 2.400; those are just there. Did I state that correctly, Damien? Absolutely. In the end, the really problematic versions are 2.415 up to 2.418, only three versions. If you use weekly 2.414 or older, then I suggest you switch to the LTS line, because the latest LTS is forked from that weekly. Right, exactly. And on the other side, we now have three weekly versions properly built again as usual, so you can update your weekly version there. That's why it did not seem worth the effort. That's not the same point of view for the LTS: we republished all LTS that were broken, though. That's really important, because it's in the name, LTS. Oh, okay, so 2.387.3 was not harmed. Absolutely. Okay, I misunderstood. I thought it was incorrectly republished, but you're saying no, it was not. So the republish process that's documented in the blog post did accurately replace all affected LTS versions. Absolutely. Okay, thanks, all right. And if ever some users complain about their weekly version not working because of an old version of Alpine or whatever, we can then redirect them: no, you should use something more recent. Okay. And I think Damien makes an important point there. If you are using Jenkins weekly, it is assumed by the Jenkins security team and by others that you will update roughly weekly.
If you're not updating weekly, you should probably choose LTS and then update roughly monthly. So it's a matter of choosing the line that matches your update pattern. Of course. That makes sense. Yep. Now the next subject: JDK 21, Mark. Would you like to move that later on, because you have a whole section about the... No, no, I think what you've got here is perfect. Let's go with this and we can talk about the other thing after; the other thing is much broader than this. I like this one, let's talk to this one. Okay. So with Stefan we have just started working on updating the JDK 21 version on ci.jenkins.io, because for the time being it's a pinned version, which is not an early access version. So we'd like updatecli to update from the current unstable version to an early access version when it is available. Am I right, Damien? Yes, on the infrastructure; but for the Docker images, it's already using the early access version, so that should be easy to update with updatecli. Cool. Thank you. And then I had a surprise last week, because it looks like Jenkins 2.419 is already working with JDK 21: it accepts it when you're using JDK 21 on your machine. Of course, had I had a good look at all the PRs and all the issues and so on, I should have known, but as an end user I got surprised. I didn't have to use --enable-future-java on the command line. And I think that didn't make the cut into the official changelog; that's why I was kind of puzzled, but that's not a big deal at all. Who is using JDK 21 as an end user, frankly? I think I can give you the rationale, at least my assumption of the rationale: we need to facilitate early exploration of Java 21, but we're not ready to announce official support of Java 21, because Java 21 is not released. And so removing that --enable-future-java flag without documenting that it's been removed is pre-work for those of us in the development team that are doing testing.
So I think it was intentional when Basil did not publish that in the changelog, because when we're ready to announce Java 21 support, it will be because Java 21 has been released and we're based on a released version of Java 21. So it'll be after September 19 for sure. Yeah, it makes perfect sense; I was just telling the story the wrong way. That's okay. And then we saw a change in the packaging repo, because Jenkins core now checks whether you're running on a supported version of the JDK. And we also had a couple of scripts, in fact one for systemd and one for Debian, I guess, that were also making this kind of check, and that was redundant. That's why Basil proposed to get rid of them, which is so cool. Now let's move to the Windows agent images fixes. So Hervé, who couldn't make it tonight, did a lot of work because he wants us to use Windows Server 2019 instead of Windows 1809, and that's a breaking change. Not yet. Oh, it hasn't been yet; it's ongoing work. Yeah, it's not merged yet. Would you like to tell us more about that? So we were lying to ourselves and to the users: if you use the Jenkins agent images that are tagged with Windows LTSC 2019, in fact you get Windows 1809. Technically these are binary compatible and you have the same distribution channels, but in one case you have the LTS distribution and in the other case you have something which is more weekly, or less supported. That's not the case with Nano Server, though; we use proper tags there. It's only the Windows Server Core images. We fixed that, and discovered it when we added the Windows Server 2022 LTS channel. The reason is that Microsoft changed their naming and publication conventions: we had LTS 2016 for base container images, and for at least three years we didn't have any LTS 2019. That's why a lot of projects are sticking to 1809. So the goal is to fix that so we don't lie to users. We are already publishing explicit 1809 tags.
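The startup check that Jenkins core now performs, and that made the duplicate systemd and Debian scripts redundant, can be sketched like this. This is a hypothetical Python illustration, not the actual Java implementation; the supported set reflects the versions discussed in this meeting (21 accepted for testing since the --enable-future-java flag was removed):

```python
# Java feature versions accepted at startup (illustrative set).
SUPPORTED_JAVA = {11, 17, 21}

def check_java(feature_version):
    """Refuse to start on an unsupported Java feature release."""
    if feature_version not in SUPPORTED_JAVA:
        raise RuntimeError(
            f"Running with Java {feature_version} is not supported."
        )
    return f"Java {feature_version} is supported"
```

Having this gate in one place, inside core, is what allowed the per-packaging scripts to be deleted without losing the protection.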
And now the breaking change will be for someone sticking to Windows Server LTSC 2019: the content of the inbound agent and the SSH agent will change, because they used to be built on top of an existing PowerShell image which doesn't have an LTS version for 2019. So we switch to Windows Server Core LTSC and install PowerShell ourselves, which means we might lose some elements, and it's a breaking change because it might behave differently for build tools. Yeah, of course. Let's wait for the first user to yell and then we'll see. Well, it's the same kind of scenario as what we just saw with Debian, the upgrade from Debian 11 to Debian 12 on the controller, right? We've got a user reporting, hey, I was running something that behaved one way on Debian 11, you switched to Debian 12, it behaves differently. And the answer is: yep, sometimes operating systems do that. Absolutely, but we have to be careful to advertise that as a breaking change in the changelog, with the explanation, because for a user facing that issue the question is: is it something I did recently, or is it something that broke upstream? And the answer is: something broke upstream, and you have to fix it now, or pin to the old version, which will continue to exist, so you can keep your production running. As the adage says, friends don't let friends use latest. That's still the same. Yeah, of course. By the way, Damien, I pinged you in the community.jenkins.io discussion, because we were wondering: what is the official Jenkins controller Docker image changelog? And it looks like it is in the official Jenkins core changelog. You modified, thanks to the user feedback, the changelog in the repo. But it was already part of the official changelog, so had he looked at the official changelog, he could have had a hint that something was changing for him. Okay. Well, Kevin, thank you for that. Good job, Kevin. Next subject: once more, it's for you, Damien.
You did with Hervé, I think, or Hervé did it, a big piece of tooling factorization between docker-agent and docker-inbound-agent, the end goal being to merge the two or three agent repositories. Absolutely. Since it took me months, even years, before being able to start, Hervé came to my aid. His first step was to homogenize the tooling between the three repositories. So now we have one Docker Bake file in each repository for all the Linux images of agent, inbound-agent, and ssh-agent. And now he's working on Windows, and I believe he was able to do it, so you have one Docker Compose file for each of these, and the Docker Compose file serves the same goal as the Bake file. We haven't yet studied the new Docker Compose feature that also allows listing all the tags for each image, which is the initial reason why we looked into Docker Bake. That would allow us to remove hundreds of lines of PowerShell, so I really hope we will be able to succeed on this one. For the end users, absolutely no visible change; we didn't break anything as far as we can tell. We might have, I hope not. But the good news is that it allowed Hervé to quickly ship Windows Server 2022 support during the summer, and to catch the Windows error we see here. Another, let's say, ongoing topic is that we are chasing test issues. But now I'm not the only person with a deep knowledge of each of these elements; Hervé has it as well, and that allows us to improve the quality of the code here. And the merge: we should be able to do it before the end of the year. So nothing user-facing, but a lot of work under the hood. And a lot of fumes, of course. Definitely. Thank you for that. Next subject is Git 2.42.0. I guess it's about Windows. It's for Windows. Yeah, you're right. Okay. Yes, Git is still a topic that we should look into seriously for Linux, because right now we depend on the Git versions of the distribution. Yeah, and doing it with Debian... nightmare.
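The "hundreds of lines of PowerShell" that a tag-listing feature could replace essentially compute a tag matrix per image. A hypothetical sketch of that logic (the tag naming scheme here is invented for illustration and does not match the real jenkins/agent tags exactly):

```python
def agent_tags(version, jdks=(11, 17, 21)):
    """Generate the Docker tag list for one agent image release:
    one version tag and one floating tag per JDK, plus bare defaults."""
    tags = []
    for jdk in jdks:
        tags.append(f"{version}-jdk{jdk}")   # e.g. "1.0-jdk17"
        tags.append(f"latest-jdk{jdk}")      # floating per-JDK tag
    tags.append(version)  # bare version points at the default JDK
    tags.append("latest")
    return tags
```

Centralizing this in one declarative file (Bake or Compose) instead of duplicating it per platform is the point of the factorization Damien describes.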
I believe, well, I don't know the complexity of building our own Git. I'm not afraid of that, and I would prefer having this so we can finally control Git, because Git is easy to build. I did it for years with Linux From Scratch, so I'm not afraid of doing this with Ubuntu or Debian; I'll find out the recipe. Yeah. Damien and I have a disagreement on this. It's okay that we disagree with each other, but Damien and I have a disagreement on this one. Okay. I just love it because it's fun, and that's not a good reason. Yeah. Building your own Git is eminently feasible for many platforms, but I think therein lies madness. So before we say yes to doing that, let's be sure we resolve the disagreement. Yeah. Is there any PPA or an official repo that would give us a more recent version of Git for Debian? I'll stop you right there: PPAs are really, really dangerous. Right, I forgot. Exactly. It's either build it yourself or use the package manager from the vendor. So many bad habits I have to get rid of. Except for Ubuntu: we have one of the Git maintainers maintaining a PPA, but no guarantee that it works with the packages you have on your operating system. We use it on the infrastructure, but we don't want to use it at all on user-facing images like these. On the infrastructure, when there is a Git update, I personally check the integrity of the source code used for the PPA, so I'm sure that someone is not doing something wrong in the PPA, and we accept the risk of breaking Git on the agents, because we can test it quickly before deploying to production. Got it. Thank you. Now, Mark, the floor is yours: let's talk about the various versions of the JDK and Jenkins. Yeah, so it's a three-part story here. First part of the story: Java 11 will reach its end of public support in late 2024. The vendors upstream, Red Hat, Temurin, will stop supporting it.
Jenkins does not support components that are not supported by the upstream provider: operating systems, Java, et cetera. Therefore, Java 11 support in the Jenkins project will absolutely end in October of 2024. So that's the concrete one we can talk about, and that one's pretty easy. The recommendation here is that the 2.450.3, August 7, 2024 LTS release will be the last Jenkins LTS version to support Java 11. That, however, is the easy part. Now we get to the more challenging part, which is: what's next? And this is where I've got to generate some pictures and diagrams to show it, because we need to choose a new minimum Java version for Jenkins beginning with September of 2024. That new base Java version could be either Java 17 or Java 21. And right now we've discussed with the board and with the officers, and the initial recommendation was: please make it 21, because let's just get there; we'll have supported 21 for a year by that point. But I've got enterprise customers who can't move that quickly. So I'll probably be bringing a proposal to make the required minimum version Java 17, not Java 21, and then propose a pattern for future releases where, given that Java does a new LTS every two years, if we get Jenkins on the cadence of supporting the LTS Java, we can then say that six months later, or nine months later, or twelve months later, that becomes the new minimum Java version for Jenkins. And we get there, but not this time. That's the current thinking, but I've got to get some pictures together to show Java releases, Jenkins releases, and how they overlap with each other. Yeah, because there's some intertwining. Yeah, of course. Right, exactly. Thank you, Mark. So Java 17 support, yeah: Java 17 support by the vendors continues for a long time. Java 17 support by the Jenkins project will not continue as long as it did for Java 11, for instance.
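The cadence Mark sketches, where each Java LTS becomes the minimum Jenkins Java version some fixed lag after its release, can be illustrated like this. The lag is explicitly undecided in the meeting (six, nine, or twelve months), so it's a parameter here, and the function is an illustration of the proposal, not project policy:

```python
from datetime import date

def jenkins_minimum_adoption(java_lts_release, lag_months=12):
    """When a Java LTS would become the minimum Jenkins Java version,
    assuming a fixed lag in months after the Java release.
    (Day-of-month is carried over unchanged; illustration only.)"""
    month = java_lts_release.month - 1 + lag_months
    year = java_lts_release.year + month // 12
    return date(year, month % 12 + 1, java_lts_release.day)
```

For example, with Java 21 releasing September 19, 2023, a twelve-month lag would land the new minimum in September 2024, which matches the timeline discussed above.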
Just like we did with Java 8: we ended support for Java 8 in the Jenkins project well before the vendors stopped their support of Java 8. Any questions, concerns? I don't have concerns. I was just wondering what the current status of Jenkins support for JDK 21 is. Not the official one, but I know some people are making experiments. I think Basil ran the BOM build with JDK 21. Some parts of the infra already supply JDK 21, but I don't think it's running on JDK 21. What else? I saw lots of PRs from you, Mark, to test some plugins with JDK 21, and I also saw some from Alexander Brandes. So it's progressing, but do we have a chart somewhere, or something that says we are at 10%, 20%, or not at all? Not yet. I think Basil is actually tracking a sheet somewhere that records what fraction of the plugins have already tested with, or are actively testing on, ci.jenkins.io with Java 21. I know Jenkins core is confirmed testing with Java 11, 17, and 21. And we know that the plugin Bill of Materials, with, I think, now pushing 100 plugins, is passing tests with Java 21, and the acceptance test harness is passing tests with Java 21. And we know that there are 100-plus individual plugins that are passing their tests with Java 21. So for me, right now, we are about seven months ahead of the pattern we followed with Java 17: Java 17 released in 2021, and we didn't deliver a Jenkins version to support it until mid-2022. So I'm delighted with our progress on Java 21. I think we'll easily support Java 21 officially by the end of October 2023. Yeah, good thing. And, shameless plug: I tried to run Jenkins on nightly builds of Temurin's JDK 21 on the RISC-V machine, and it works beautifully. Now, if you don't mind, we'll switch to new topics, because I'm not so sure we'll be able to treat the recent changes, and I've already talked about the most breaking change that happened lately, which is the move to Bookworm.
The title is: Bookworm made its first victim. Anyhow, Damien, I think you were the one who added these topics. Am I right? I'm not, but I feel like it's a really good idea. Who did that? I think it was probably me. Oh, then go ahead, please. So we had a discussion, I believe in the infra team, that the descriptions on the Docker Hub repositories, as displayed to readers, are outdated; they aren't perfectly maintained. And the idea was: hey, we should consider updating those short descriptions to declare that these things are deprecated. Oh, I know, this was Alexander Brandes, I know where this came from; sorry, it's not even me. Alexander Brandes is the one who detected this. We're using these old terms, deprecated terminology, slave, instead of the correct term, agent. And he said: hey, we should put on the page that they are in fact deprecated. Yeah, that will help. But Docker Hub already shows that it hasn't been updated in five years or so. Correct, so there are good hints that it's deprecated. Yeah, but sometimes it's not enough. Right, every chance we get to hint to people is one more chance that they'll heed the hint. You're right. And the next one, that's funny, because I think that's something we talked about a few meetings ago, and it's not me who wrote it down: Windows ARM images might be possible. And so it's Windows... oh, that's you, that's funny. You happen to have a Windows ARM machine, right? I do, right. I have a Windows ARM machine because Microsoft was selling them and I was interested. But I think the answer right now is no, it's not possible. I haven't seen enough interest, and I certainly have not seen any traction from anyone in the community to spend the time on it. Got it. Last time we talked about that, I thought that Docker was not even running on Windows ARM. And that's part of it; they may not have Docker on Windows for ARM, I haven't even checked. The Windows ARM kernel is missing some important features used by Docker.
So, now Hyper-V seems to have recently become ready to run on ARM64, so that might be a solution, but it's missing critical features used for the container isolation used by default by Windows containers. That means that right now, even if it worked, your containers would be micro virtual machines on ARM, unless Microsoft starts to publish these bindings; but right now the kernel modules are missing, and there is no reason to run a Docker Engine in Windows container mode. You can still, of course, use a local Docker client that is compiled for Windows ARM and already available, but you need a remote Docker Engine somewhere else. Right. Thank you. There's real power in having somebody who knows the depths of Docker in this meeting. Thanks, Damien. Oh yes, thanks a lot. I think that wraps it up, unless you have something else to add to the agenda. We are at thirty minutes. No? Any feedback, questions, comments, remarks? Looks like we're done. It was a joy to have all of you tonight. Thanks a lot for being here. The recording should be available in 24 to 48 hours, and see you two weeks from now. Have a wonderful end of the day. Bye bye.