Welcome, everyone. This is the Jenkins platform special interest group's last meeting of the year, December 19, 2023. Thanks for joining us. Topics on the agenda today: open action items; Java 21 support; a summary of work completed on the agent and controller images, plus a pending work-in-progress change; and then the Jenkins Artifactory bandwidth reduction project results. Any other topics you want to be sure we add to the agenda? Okay, then let's get going. Thanks, everybody.

All right, open action items: the status of the Blue Ocean Docker container still needs to be communicated. It's not a deprecation as such; it is already unmaintained, not updated, and so on. It's a separate repository, and we stopped suggesting anyone use it years ago. But there's still more work to do there. Any questions? Okay.

Next topic: Java 21 support. What this really is is the "2 + 2 + 2" Java support plan, and you can find more details about it in the Jenkins Enhancement Proposal. It's still being reviewed, and I have some significant additions to make to it. Those additions will include the detailed steps we take when we drop support for a Java version: how we change the POM files, how we change documentation, how we change other components. Likewise, what we do when we add a Java version, that sort of thing. Any questions there? Okay.

Next, then: agent and controller image improvements. The controller has been updated; we're at 2.437 as of today and it's looking good. I've got it running in my environment with no complaints or concerns. 2.426.2 was released about a week ago, and again, all good and healthy. I'm not aware of any other changes; are there any that other people want to highlight as key changes in our controller or agent images?

The bump to Alpine Linux 3.19.0 starts to show the same message we had on Debian when we switched to Bookworm, when using the Python environment.
Which means that if you are installing Python dependencies using the system Python, you get an error message saying: please use a virtual environment or switch to pipx. That change landed in Alpine Linux and is brand new with 3.19.0, so anyone building on top of either the controller or agent images using Alpine needs to be careful. pipx is easy; there are just a few elements to change. We already made this change on Debian and provided the steps, but be careful with this one, because most distributions are now carefully protecting their system Python.

Good. So Kevin, this is probably one where, if you'd be willing, you could take an action item to check the tutorials for any use of Alpine with Python in a container. It feels like that might be a tutorial topic where we've used it but haven't executed it recently, and thus would be surprised when someone tries to use it with Alpine 3.19, since we automatically update now, and then discovers: whoops, it doesn't work the way we expect. Are you okay taking that action item, Kevin? Yep, no problem. All right, good.

Asynchronously, Kevin, I'm sending you privately two links to examples of what we did on the Jenkins infra images that build on top of the Jenkins agent, when we had that problem with the Debian version. It's exactly the same solution for Alpine. Great, thank you so much. Very good. Thanks. Anything else on the container updates?

Okay. I believe Bruno may have added these items for us in prep for the meeting. The next topic, then, is work in progress on the images. I'm most interested in this one, because it's been a long time coming and is a great improvement: building both the agent and the inbound agent from a single repository.
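(Editor's note on the Alpine/Python point above: a minimal sketch of the new behavior and the two workarounds mentioned. The package names and paths below are illustrative, not from the meeting.)

```shell
# On Alpine 3.19+ and Debian 12 "Bookworm", installing into the system
# Python is refused with:
#   error: externally-managed-environment
# Workaround 1: install into a dedicated virtual environment instead.
python3 -m venv "$HOME/demo-venv"
"$HOME/demo-venv/bin/pip" --version            # pip scoped to the venv
# "$HOME/demo-venv/bin/pip" install requests   # would install into the venv only
# Workaround 2: use pipx for standalone command-line tools.
# pipx install some-cli-tool                   # hypothetical tool name
```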
Hervé Le Meur has prepared this; it took quite a bit of work to unify what was previously two repositories and two independent build processes into a single build process in one repository. It's not yet merged: it's approved by Tim, and everybody's waiting for one more review. But it's a good step for us in improving our build process for our container images. Damien, anything you want to highlight there?

We had a pretty smart way of testing the three kinds of agent images: the two from this pull request, and also the SSH one from Basil. That was a really good idea. Hervé proposed using the Docker plugin, the Jenkins plugin that provides a Docker cloud agent implementation. It's not the Docker pipeline keyword; it's completely different. With the one I'm mentioning, when there is a new build, Jenkins connects to the Docker engine on the local machine and spins up an ephemeral container that acts as an agent, the way the Kubernetes, Azure VM, or EC2 plugins do. Everything runs inside that agent, which means most of the time you cannot run Docker inside Docker, otherwise you have a security problem.

That plugin can start the agent in three ways: by spinning up a container based on the agent image and then executing a secondary process; by using the inbound agent, so the container starts and connects back to the controller; or by using the SSH agent, which connects like any standard SSH agent: the plugin takes care of spinning up the container with SSH inside, connecting through SSH, and starting the agent process remotely over the SSH protocol. So with a single plugin we could define one Docker cloud with three container templates, each template pointing to one of the images. Hervé used that to test the result of his build, comparing before and after to demonstrate that it should not break anything. I believe that should become an acceptance test to think about for the containers, one we should run at least once a week.
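(Editor's note: a sketch of what such a three-template Docker cloud could look like in Jenkins Configuration as Code. This is an assumption for illustration, not the configuration Hervé actually used; the field names follow the docker-plugin's JCasC schema but should be verified against the plugin documentation.)

```yaml
jenkins:
  clouds:
    - docker:
        name: "local-docker"
        dockerApi:
          dockerHost:
            uri: "unix:///var/run/docker.sock"
        templates:
          - labelString: "agent"      # agent image + secondary process
            dockerTemplateBase:
              image: "jenkins/agent:latest"
            connector:
              attach: {}              # controller attaches to the container
          - labelString: "inbound"    # container connects back to the controller
            dockerTemplateBase:
              image: "jenkins/inbound-agent:latest"
            connector:
              jnlp: {}
          - labelString: "ssh"        # controller connects in over SSH
            dockerTemplateBase:
              image: "jenkins/ssh-agent:latest"
            connector:
              ssh:
                sshKeyStrategy:
                  injectSshKey:
                    user: "jenkins"
```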
That test would spin up a Jenkins setup, run a build with the three agents, wait for the build to complete or time out, and report: the agents are currently broken. So when we release a new version of each agent, we could have something much stronger. But it has to be balanced against the cost of running that acceptance test, and the risk of the test being flaky, of course. Thank you. As far as I can tell, that's described here in this item in the "testing done" section, in addition to all the other testing Hervé performed. Good, very good. Any other comments, concerns, or questions about the unification of the agent and inbound agent repositories into a single repository? Okay, next topic then.

We've also got using Docker compose to publish images. Again, Damien, this one is in draft right now. Is there anything you want to share in terms of why Docker compose is a better way to define Windows images than PowerShell scripts? It's the same benefit as when we switched from shell scripts to Docker bake: it allows a centralized definition, declaration instead of coding. With just one file you can list all the combinations. We always need a middle ground between declaration and scripting, and bake was perfect for that usage. But during the tests we ran last year, Docker bake did not allow building Windows images: you can run bake from Windows against the Docker Linux engine, but not against the Docker Windows engine. We saw that there is now experimental Windows support, from one or two weeks ago, shared by Tim. But as a first step Hervé checked Docker compose, because it looks like it covers the same area. The goal is to unify as much as possible. And using Docker compose will avoid a lot of missed tags; when we change tags, it lets us see: here's the list of tags, I need to remove this one and add this one.
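(Editor's note: an illustrative Compose file for this kind of centralized, declarative image definition. The service names, image tags, and build arguments below are assumptions made up for the sketch, not taken from the actual pull request.)

```yaml
# compose.yaml: each service declares one image variant, its tag, and its
# build inputs explicitly, so the full tag list is visible in one file.
services:
  agent-jdk17:
    image: example/agent:jdk17-windowsservercore-ltsc2019
    build:
      context: .
      dockerfile: windows/windowsservercore/Dockerfile
      args:
        JAVA_VERSION: "17"
  agent-jdk21:
    image: example/agent:jdk21-windowsservercore-ltsc2019
    build:
      context: .
      dockerfile: windows/windowsservercore/Dockerfile
      args:
        JAVA_VERSION: "21"
# "docker compose build" then builds every declared combination.
```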
That's easier than a loop that eventually constructs those tags. Is that clear, and does it make sense? It is, and it does make sense.

So we're going to increase our use of Docker compose, and not just in this process: in 2023 we had a Google Summer of Code project to use Docker compose in the documentation. Right now our Docker installation instructions are, frankly, horrible: so complicated, so deep, and so easy to make mistakes in. But Bruno Verachten has been leading an effort to transform them, using the Google Summer of Code work, to use Docker compose instead. So instead of three pages of copy-paste after copy-paste, it's: clone one repository, run one command, docker compose up -d. Much more attractive. Good use of compose in both places. Thanks. Anything else on the Docker compose effort?

Our other work in progress was the JDK 11 and JDK 17 manifest adaptation for Windows, and here I believe this is Hervé exploring ways we can improve the definition on Windows. Damien, anything you want to share there? That part has already been done on Windows; as far as I can tell, that pull request only tracks the JDK versions for the 11 and 17 lines we use for Windows, so that when there is a minor update, it's also added next to the Linux updates. Got it. Okay, so this one's relatively small. Thanks very much. Any other work-in-progress items we should discuss? Okay.

The next topic, then, is the Jenkins Artifactory bandwidth reduction project. Let's talk first about the problem. In the month of November, the Jenkins project used over 20 terabytes of data from one of its sponsors, JFrog. And we see that a big chunk of that is traffic that is fundamentally unrelated to, or should not be the responsibility of, the Jenkins project.
They asked us to please change our Artifactory configuration so that we stop using some of that bandwidth. We think we've now completed that, with about one-third bandwidth savings and minimal impact to developers. Damien, anything you want to share on that one? Nope, that was quite clear. We are still cleaning up elements, and from time to time finding a plugin failing due to a dependency, but that's a minor annoyance. So let's wait for the bandwidth report. Right, and that's an action item on me: I've got to ask them for it. We implemented this in production on Friday, so we've now been three full days without the added burden of those extra projects that JFrog said, correctly, could be provided by someone else. We'll keep resolving issues as we find them. The top 250 plugins all pass, the tooling continues to pass, and Jenkins core continues to pass. Thanks very much.

Any other topics for today's session? Okay, great. The recording will be available in 24 to 48 hours. Thanks very much for joining. I'll stop the recording now.