Welcome everyone. This is a Jenkins Platform SIG meeting. We are on the 31st of October 2022 and today we've got Damien Duportal, Kevin Martens and myself. Welcome folks. We have quite a few subjects on the agenda today. The first one being open action items. What haven't we done yet? We used to have some PPC64LE agents on ci.jenkins.io, but it looks like we lost them somehow. The node loan program is finished now, so we have to drop all mention of PPC64LE from the infra index, and if you make a quick search on jenkins.io, I think we will still find some. So it's still open, we have to do that. It could take quite some time. Is that okay for you? I'm opening an infrastructure issue on the help desk about that task. We will take care of listing the elements so we can start working on that part. Yeah, that's a good thing. Thanks a lot. And let me know when it's done so that we can modify the agenda. Yeah, thanks a lot. Next subject. It's on you also, Damien: the Git 2.38.1 update campaign. What's going on? So two CVEs have been fixed by version 2.38.1 of Git. Version 2.38.0 was affected by the security issues, but so is the previous one that we're using today, 2.37. So we have to update to that latest version so we don't risk any CVE, even if the exploitation is not that easy. It's not high criticality, but still. So that should be a campaign of updating the Git version on all our images. That's an action item. Yeah, so let me just change the indentation so that it looks like it's part of the action items. Cool. Done. It should take quite some time because we have the Linux VMs, the Linux containers for ci.jenkins.io and the Windows VMs too. Well, it has to be done nonetheless. Any other comments on that subject? Nope. Okay, thank you. Anyone interested in helping: if you are faster than me, don't hesitate to send your pull request. We can spread the pull requests. There are multiple repositories, so don't hesitate to get started. 
That would be a nice Hacktoberfest subject, of course. Yeah, that's what I wanted to add. It's still the Hacktoberfest period, so you can get some swag just by updating Git to 2.38.1. Thank you, Damien. Next subject is also you, Damien. We had an issue open on the infra tracker saying that we may have a problem with the volumes in the Docker agent images. So something is about to happen; somebody should work on that in the following weeks, because for the time being, the volume for /home/jenkins has some trouble. We have some issues with that; maybe we shouldn't detail them here or now, but yes, that's something we have to work on in the following weeks. We'll have to change it from /home/jenkins to the agent workdir. I don't have the reference of the issue, let me add it to the notes. Documentation is important here. Just a note that it's nice that we have Kevin here, because I've seen quite often in the past weeks and months users lost when configuring a Jenkins agent. They were lost on understanding what the root directory is when configuring an agent. The root directory is the directory on the remote system where the agent downloads the temporary files and then starts a workspace, and it will create one workspace for each build that it is going to handle. That one does not always have the same default: depending on the images, the plugins or the setups, that directory may change. But that directory is usually recommended to be under the home directory of the user executing the agent, because you could have multiple agents with multiple workdirs, and you want it to be multi-tenant. The error we made a few months or years ago was to define /home/jenkins as a data volume, which creates a usability concern: if you want to extend our own image, you cannot copy custom tooling inside /home/jenkins, it will be lost. Yeah, been there, done that. Absolutely. And I've been bitten, and everyone has been bitten by that one. 
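As a rough illustration of the usability concern described above (the image tag, file names and paths here are assumptions for the sketch, not the project's actual Dockerfiles):

```dockerfile
# Hypothetical child image extending an agent image whose Dockerfile
# declares VOLUME /home/jenkins (the setup discussed above).
FROM jenkins/inbound-agent:latest-jdk11

# Because /home/jenkins is declared as a volume in the parent image,
# a fresh volume is mounted over that path when the container starts,
# so tooling baked in under it is not reliably visible at runtime.
COPY my-tool.sh /home/jenkins/tools/my-tool.sh

# Installing outside the volume path behaves as expected.
COPY my-tool.sh /usr/local/bin/my-tool.sh
```

Moving the data volume from /home/jenkins to the narrower agent workdir, as proposed, would keep the performance benefit of the volume while leaving the rest of the home directory open to extension.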
So we have a usability issue, but also a documentation one, because we have numerous locations with separate documentation for each agent. If I only focus on docker-inbound-agent and docker-ssh-agent: one says you have to configure an object on the Jenkins controller, an agent, and then you connect it to the agent and you pass the workdir as a flag, so it goes in one direction. With SSH, it's the opposite: you have to tell the controller, which will decide where the agent workdir is. That's why having documentation for each agent that mentions "hey, the working directory by default is that path", with a screenshot eventually, would help. And if we have an agent configuration reference in the main docs, we can point to that reference each time. That will limit the mess of each plugin having a different default value between Docker cloud, EC2, Azure VMs, Azure containers, SSH, et cetera. So having a single change for at least the Docker images is a good start. Sorry, sorry, it was just a good idea at the beginning regarding the performance: I think you explained to me one day that using it as a volume gives better performance than just working inside the container without any volume. Am I right? Okay. But still, it has to be a volume in the case of the Docker images. We have users complaining that it creates empty volumes on Docker. That one is not a problem, because it is absolutely the normal behavior of all Docker images that you can discover on Docker Hub. It's a normal Docker process. So we can help them learn and grow in their Docker usage, even for development, but that's a normal Docker pattern. And the other way around: if we don't define the data volume for the agent workdir, the performance will be so much worse, like ten to 55 times slower. So we needed that volume, and that's all. Okay. And for the volumes that get forgotten, note that with docker system prune or similar we can remove them pretty easily. 
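For users worried about the leftover anonymous volumes mentioned above, cleaning them up is indeed straightforward with the standard Docker CLI (shown here as a sketch; run these only if you are sure nothing still references the volumes):

```shell
# List anonymous/unreferenced volumes left behind by old agent containers.
docker volume ls --filter dangling=true

# Remove all dangling volumes.
docker volume prune --force

# Or, as mentioned in the discussion, a broader cleanup that also removes
# unused volumes along with stopped containers and dangling images.
docker system prune --volumes --force
```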
It's easy. Thank you, Damien. Oh, I still see some references to JDK 8. I thought we were about to forget everything about JDK 8, so what's going on? So Tim Jacomb already, and I think Alex and some contributors, the usual maintainers, have removed JDK 8 from our Docker agent images. The controller was already done a month ago. However, there were the recent issues with the docker-inbound-agent and docker-agent tagging and releases that were going back in the past. So we have issues and we are totally bad at that, so I'm working on fixing it. It takes time because it's not easy. But one of the last elements will be to have a JDK 8 branch that will allow us to do one last release of these agents, which should then be frozen in time. Then we can close or forget about that branch. That branch would still be used in case of a security issue, though, and would help us apply most of the fixes that we already did on JDK 11 to JDK 8. The agent volume is one of these fixes that has to be cherry-picked. Once this element is done, we can continue forward with JDK 11 and 17 on the main branches. That's a proposal that comes from me; I haven't seen anyone in the past three weeks going against it on the repositories among the usual maintainers, so I'm going to work on that part. Cool. Thank you. Next is: finish fixing main branch releases, avoid overriding existing versions. I think it's linked to what you just talked about. Exactly, absolutely, that's fixing the deployment. Yeah. And it's also linked to something that we'll talk about later, or maybe we can just address it now: the container repository management for the agents. Let's discuss that later. You already have the documentation. Thank you. And so the last subject on the Docker agents is the proposal to merge the three repos into a single one: the eternal mono-repo versus multi-repo debate, you know. I can see some fights on the web just about every week on that subject. Yeah. 
Let's wait for that; you'll have more details on that topic. Next is container image deprecation for the Blue Ocean container. Kevin, you did an amazing job of putting warnings just about everywhere on the Blue Ocean documentation: please stop using it. I know that it's deprecated, or rather it's still maintained, but you won't see any new features coming in the following months. It's not dead, but anyhow. And we still have to do kind of the same thing for the Blue Ocean containers. Maybe Mark will do that, I don't know, we'll see that later on. We have to update the page on Docker Hub, we have to write a changelog and some good guidance. And it's not just because we should stop using Blue Ocean: the container images are pretty much outdated. In fact, I don't even know if somebody is taking care of them. So anyway, it's not a good idea to use them these days. Oh, let me know if I'm wrong, Damien. I think that's a good summary. Blue Ocean is still used, but not really well maintained, except for security. There are projects, but nothing to replace it as-is right now. However, that image was created at a time when Jenkins Configuration as Code and the Jenkins plugin CLI weren't a thing, so it was easier for the community to push pre-built images with Blue Ocean already installed. As of today, first, if you start a brand new Jenkins, the default setup proposes to install Blue Ocean. Second, you can always install it manually on your existing Jenkins instance; most of the time it's already there. Third, you can use the Jenkins plugin CLI, either in the default container installation or on your own system if you don't use containers. It's already installed by default on Kubernetes installations. So yeah, we have a lot of tools and ways of easily going from zero to Blue Ocean in one command line. That's why it doesn't make sense anymore to maintain such an image, because that image is almost always out of date. Cool. Thanks for the details. 
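The "zero to Blue Ocean in one command line" point above can be sketched with the plugin CLI pattern documented for the official controller image (the base tag here is an assumption; pick whichever current tag you use):

```dockerfile
# Build your own controller image with Blue Ocean pre-installed,
# instead of relying on the deprecated pre-built Blue Ocean image.
FROM jenkins/jenkins:lts-jdk11
RUN jenkins-plugin-cli --plugins blueocean
```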
So, as it's written, it's not likely to make progress until November; I guess Mark wanted to address it when he comes back from PTO. We'll see that later on. Now, container repository management for the Jenkins agents. We already addressed some parts of this subject, because yes, we may want to unify the existing repositories for the agents, and we also have to work on the release process, which is not that easy these days. Yep. So the main driver for merging these repositories is the following. We have docker-inbound-agent, which depends on docker-agent, which means that for each JDK version, then each operating system, which has multiple variations, we have to duplicate the same update of the operating system and JDK version. Inbound has a one-to-one mapping, but we already have different naming and different operating systems between the two. Second, docker-agent alone is a nice root image that we can extend for the inbound agents, but we were never able as a community to do the same with the SSH agents, which could inherit from it without too much duplication or unused items. So the goal will be to have that element. And now that we have Docker multi-stage builds, Docker bake, or even just a single docker build for the Windows machines, the goal will be to avoid the spread of these dependencies, because with the matrix of JDKs, remoting versions, eventually SSH, and all the installations and tools like Git and the rest, when we have to update one, it's a nightmare. So the goal will be to have only one centralized list of dependencies, and then a system that builds in parallel with the same layers. That way we can decide what is inheritable or should be, and we can have specializations depending on the use case. That's a road that has been paved over the years by a lot of contributors, so thanks to these contributors. And now we have everything in place to merge. That will not change the names of the Docker images: if you are using inbound-agent today, that will stay the same. 
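To make the merging idea concrete, here is a minimal, purely hypothetical multi-stage sketch; the stage names, base image, files and packages are illustrative assumptions, not the actual proposal:

```dockerfile
# One centralized dependency list via build arguments.
ARG JAVA_VERSION=11

# Shared base layer: OS, JDK and common tooling such as Git.
FROM eclipse-temurin:${JAVA_VERSION}-jdk AS base
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

# Plain agent flavor, extending the shared base.
FROM base AS agent
COPY remoting.jar /usr/share/jenkins/agent.jar

# Inbound agent flavor, extending the plain agent.
FROM agent AS inbound-agent
COPY jenkins-agent.sh /usr/local/bin/jenkins-agent
ENTRYPOINT ["/usr/local/bin/jenkins-agent"]

# SSH agent flavor, extending the same shared base.
FROM base AS ssh-agent
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && rm -rf /var/lib/apt/lists/*
```

With something like docker buildx bake, each flavor can then be built in parallel while reusing the shared layers, which is the point made above about maintaining a single dependency list.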
However, that will change how you see the releases and release notes, because they will all be in the same repository. So we will change the tagging convention. For instance, one of the proposals, and it's not settled yet, would be to have, let's say, a tag named inbound-agent- followed by the tag that you have currently on the other repositories. So for the same version, we can have different tags that point to different artifacts, and multiple tags could point to the same commit. It's only for audits and contributors; on Docker Hub we will keep the same images. That's the whole idea. I see. Technically speaking, that sounds really interesting, and for the maintainers to come, that should be easier to maintain once the heavy lifting to convert everything to a single repo is done. Nice. Thank you, Damien. What else? I think we're done with that subject. Require Java 11 for Jenkins core. I think so: dropping Java 8 from the Docker agents has been merged, and you told me earlier it had been released, so we're done with that. Pretty cool. And some plugins, I don't have the numbers yet, are beginning to require Java 11, so it's an ongoing process. I guess it's a good idea to move the plugins to JDK 11. Do you know, Damien, whether plugin maintainers should move to JDK 11 or 17? We don't force them to do so. Can we expect their plugins to work and be able to publish new releases in the coming future, even if they're still using JDK 8? I don't want to force anyone, I just want to have the status: should we, or do we have to, move to JDK 11 for the plugins? So today, if you are maintaining a plugin, you should test it with both Java 8 and Java 11. Java 8, because there are still some people using Java 8, so you need backward compatibility; and Java 11 because it's the standard default. If your plugin doesn't work with Java 11 today, that means it cannot be installed on the new LTS. So that's quite problematic. 
Going to Java 17 is not required today, but you should really start going in that direction. If you don't know how to build and test your plugin today on ci.jenkins.io with the three JDKs that we provide, don't hesitate to look at the plugin build documentation or open a topic on IRC, Gitter or community.jenkins.io. Today, you can absolutely set your plugin to build and test on the three JDKs on Linux and Windows, so you can have six builds in parallel that try all the available combinations. My bad, sorry: Java 17 on Windows is not available yet, but it will be soon. So my recommendation is: if you are maintaining a plugin and you are hearing these words, please enable the three JDKs. If you already did the work of dropping Java 8, no problem, you can remove the 8 from your CI process. However, communicate it correctly in the readme: don't hesitate to mention that Java 8 has been dropped for your plugin. Okay. And I've seen, I don't know if it's an urban legend or not, but I think that with Java 19 we could have some problems with Jenkins core or even Jenkins plugins, because, you know, we always have some warnings about illegal access, things like that, and I heard that maybe with Java 19 these will no longer be warnings but will turn into errors. So we'll see. Definitely you should upgrade your plugins to Java... sorry. I have no idea. Okay, I know, it's maybe an urban legend, I don't know, I should have checked. Anyway, it's a good idea. What I remember from what you said is that it's just a good idea to move to JDK 11, and ci.jenkins.io, if you are a plugin maintainer, supplies you with builds for 8, 11 and 17, except on Windows where there is no 17 yet. Pretty cool. Thank you. And regarding JDK 17, by the way, you're using it, maybe not everywhere, but you're using it on ci.jenkins.io, am I right? 17? Yes, but Linux only. Linux only, yeah, of course. There is an open item and we are working on that one. 
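For plugin maintainers, enabling the three JDKs on ci.jenkins.io as described above usually boils down to a Jenkinsfile using the buildPlugin step from the Jenkins pipeline library. This is only a sketch: the exact configurations are an example, and the Windows/JDK 17 pair is omitted since it was not available at the time of this meeting.

```groovy
// Jenkinsfile at the root of the plugin repository.
// Runs the plugin build and tests on each platform/JDK pair in parallel.
buildPlugin(configurations: [
  [platform: 'linux',   jdk: 8],
  [platform: 'windows', jdk: 8],
  [platform: 'linux',   jdk: 11],
  [platform: 'windows', jdk: 11],
  [platform: 'linux',   jdk: 17],
])
```

If you have already dropped Java 8, simply remove the JDK 8 entries and mention the change in your readme, as recommended above.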
So I don't have the numbers for October, unfortunately. There were 11,000 JDK 17 installations of Jenkins last month; I can only hope this has grown this month. I don't know where to find the numbers, by the way; I think it's Mark who can access that. Anyhow, that's a good progression from August and before. Anyway, the next and last subject is the contributor summit rework. Kevin, you may be the one who knows everything about that. So, what you have here on the page is pretty much it, honestly. I think that it still has to be developed and figured out a little bit more, and at least, obviously, the scheduling still needs to happen for sure. So yeah, more to come. Thank you, Kevin. Folks, if you have any other questions, subjects or comments, now is the right time. If not, that's okay too. We had a great meeting, thanks a lot for coming. The recording should be available within 24 to 48 hours on YouTube. And yeah, see you in two weeks from now. Have a nice weekend. Bye bye.