Hello everyone, this is the Jenkins Platform SIG meeting, on the 26th of September 2023, and tonight we have Hervé Le Meur, Damien Duportal, and myself. We still have the same open action item regarding the Docker images, and we'll talk about the ongoing work, and then Java... Mark is not here, and I think we addressed his subject two weeks ago. I don't know, maybe we'll talk about that anyway. Then we'll go over what has been done on the controller and agent images, and if we still have some time, we'll address a new topic about the Helm chart and Kubernetes operators. So, first things first: container image deprecation for the Blue Ocean container. It is deprecated, but we haven't yet done the whole work of announcing that to the community. Most people know, but some people don't know yet, so we have to do something about that. Maybe through a JEP... what is it called again? Jenkins improvements something? No, it's not improvement, it's a Jenkins Enhancement Proposal, that's it. There's already something that alerts you when you are using an operating system that is reaching or has already reached end of life. Maybe when using that kind of container we could do something similar, or maybe that has nothing to do with it. Anyhow, we'll have to address that one of these days. Now for JDK 21. We've been running the Jenkins infrastructure on a JDK 21 Early Access version for a few weeks now, maybe even a month, I just can't remember. And we also have some JDK 21 preview images for, I think, the three agents and for the controller, even for several platforms: I think we have that for s390x and arm64. Not all the platforms for all the images, but nobody's asking for it; we're already ahead of time. Anyhow, it's still Early Access; the final version is not there yet. On the Temurin website today, we saw a new banner with a red background saying: we are awaiting access to the new Java 21 specification tests before formally releasing Temurin 21.
So they're waiting for a TCK, if I'm not mistaken, a file sent by Oracle. Which is funny, because I was talking to an Eclipse committer earlier, and some other vendors already got this TCK file and were able to validate their builds. But well, it will come when it comes. What's also funny is that in their banner they give three links to get access to their Early Access builds, and frankly, none of them is the one we are using. We rely on the temurin21-binaries GitHub repo. We're not using their API, for example, nor their nightly builds website. But we have the same binaries nonetheless: the releases published on GitHub are the same binaries we could find on those websites. So anyway, that's funny because, yeah, go ahead. We are using the same binaries. In fact, their API is just a way to automatically generate a link that sends you to the Temurin 21 binaries. The problem with that is that friends don't let friends use "latest": you don't know what you're getting, I'm afraid. You're always using the same link, which points to the latest build, but you don't know whether you got build +32, +33, or +45. Yes, it's the same binary; it's just a front end they provide so you don't have to do the work that Stephen and you did, searching and generating the naming. You're right. That's because we chose to use updatecli, if I'm not mistaken, to grab the latest versions, and that wouldn't have worked with that kind of link; we had to use the GitHub repo directly. But yeah, it's the same binary. They use the GitHub repo on the back end: the link from the API, when you click it, is an HTTP redirect that ends on the GitHub repo. If you run curl --location, you will see the chain of URLs that ends on the GitHub repo. But as you said, it's always "latest". Got it. It looks like it hasn't changed since August 11th, and that's okay; we haven't seen any newer beta since then, and that's fine.
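To make the "latest versus pinned" point concrete, here is a small sketch. The URL patterns below are illustrative approximations of the Adoptium API and the temurin21-binaries release layout, not something taken from the meeting; check the real API before relying on them:

```python
# Sketch: why a "latest" link is non-deterministic while a pinned GitHub
# release URL is not. URL patterns are illustrative approximations of the
# Adoptium API v3 and the adoptium/temurin21-binaries release naming.

def adoptium_latest_url(feature: int, os: str, arch: str) -> str:
    # Floating link: resolves (via HTTP redirects) to whatever EA build
    # is current -- you cannot tell from the URL if it is 21+32 or 21+35.
    return (f"https://api.adoptium.net/v3/binary/latest/"
            f"{feature}/ea/{os}/{arch}/jdk/hotspot/normal/eclipse")

def pinned_release_url(feature: int, build: int, os: str, arch: str) -> str:
    # Pinned link: the tag and asset name encode the exact build, which is
    # what an updatecli manifest can track and bump explicitly.
    tag = f"jdk-{feature}%2B{build}-ea-beta"  # '+' is URL-encoded in the tag
    asset = f"OpenJDK{feature}U-jdk_{arch}_{os}_hotspot_{feature}_{build}-ea.tar.gz"
    return (f"https://github.com/adoptium/temurin{feature}-binaries/"
            f"releases/download/{tag}/{asset}")

print("latest:", adoptium_latest_url(21, "linux", "x64"))
print("pinned:", pinned_release_url(21, 35, "linux", "x64"))
```

The pinned form is what lets a tool like updatecli open an explicit bump pull request instead of silently shipping a different binary behind the same link.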
I'm waiting for the official one. And I've seen that some of the builds, because they use Jenkins to generate all of that, some of the builds are failing for some architectures, and some builds are working. And these binaries get updated: for a platform that used to fail, once the build succeeds, they update the very same release. So if you can't find a working build for the time being, be patient; it may appear one of these days. Anyhow, you gave us a website to follow. It's on GitHub, of course, an issue to follow what's going on. It's much more precise than me saying gibberish. So it's a list of things to do before publishing the official, if that's the right term, official Temurin release for JDK 21, or final, or something. Okay. So it's difficult to see, I don't know if you can see it. Yes, it was much too small. So most of the checkboxes are checked. There are still the TCK job results: it's a file sent by Oracle, they run a series of tests, and then we have the result, pass or fail. And then, okay, why not? But no date available for the time being. We'll see. What did I have in mind? I had something... Okay, forgot, sorry about that. Would you have anything to add about that in particular? Just a thank-you for triaging the issues, looking at the ones that get closed almost every day, with people asking: where is my JDK 21 image? Yeah, now I remember. There are four levels of support, if I'm not mistaken. P1 is the set of platforms they are aiming for first; I think x64 is there. Then they have P2, which may be arm32 or Alpine Linux. No, I think Alpine Linux is even P3, or even below. And yes, P3 for s390x, and so on. So they want to deliver the first images for P1, then if they have time P2, then P3, and then there are the unsupported ones. We had doubts a few weeks ago about Alpine arm64, and it looks like the image is not always building correctly for Alpine.
So maybe it won't even be P1, P2, or P3; maybe it will be unsupported. We didn't promise anything to our end users, so maybe it will have to disappear one of these days. We'll see. Now, for JDK 11, 17, and 21, I don't think there is anything new in Mark's document, but I could be wrong. I think yes, we are sticking to the 2+2+2 proposal. And the last time we discussed it, Mark wanted to get some better diagrams for people to understand. But yes, that's something I wouldn't call revolutionary, but it's pretty new for the whole Java ecosystem, and of course for Jenkins too. We were still using JDK 8... what was it, last year, a few months ago? It was not that long ago, and we're already using JDK 21. We'll see. The new pace will be quite a bit faster than what we were used to. The main goal here is also to provide deterministic change management: you know when you will have to prepare yourself to test the new version. And it's a trade-off. On one side, we have to follow the JDK updates because it's an upstream project. I mean, next year in October, JDK 11 won't receive any more security updates, and the Jenkins project must drop support of JDK 11 by that date at the latest. It's mandatory: we cannot safely provide Jenkins to our users if we rely on a JDK without security updates or support at all. So we need to follow the pace of the upstream project. But we also need to find a trade-off that gives enough time to our consumers: people who develop plugins, who develop forks of Jenkins, or just builds, or users that need some time before upgrading in production. We need to leave them enough time. So it's a constant trade-off; everyone wants to go either faster or in a more stable way. The proposal here is that we will have the same kind of cadence as, for instance, Ubuntu with their five-year LTS lines: you know when you have to update. That's the real goal, while keeping all users content. Yes, you're right. Deterministic. That's much better for the end users.
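To illustrate what a deterministic cadence buys you, here is a toy sketch of a "two supported LTS lines at a time" policy. The dates and the exact rule are illustrative assumptions for the example, not the actual policy from Mark's document:

```python
# Illustrative sketch of a deterministic LTS support window, assuming a new
# Java LTS every two years (17 in Sep 2021, 21 in Sep 2023, 25 in Sep 2025)
# and that the two most recent released LTS lines are supported at any time.
# These dates and the rule are examples, not Jenkins commitments.
from datetime import date

LTS_RELEASES = {17: date(2021, 9, 1), 21: date(2023, 9, 1), 25: date(2025, 9, 1)}

def supported_lts(today: date) -> list[int]:
    # Keep the (at most) two most recent already-released LTS lines;
    # the older one is dropped when the next LTS ships.
    released = sorted(v for v, d in LTS_RELEASES.items() if d <= today)
    return released[-2:]

print(supported_lts(date(2023, 10, 1)))  # → [17, 21]
print(supported_lts(date(2025, 10, 1)))  # → [21, 25]
```

The point is that a plugin developer can compute, years ahead, the date on which an old JDK line disappears, instead of discovering a banner at the last minute.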
They can have a roadmap and know when they will have to move, instead of panicking: oh, I didn't see the banner, oh, too late, I have to move to a newer version of the JDK and I'm not ready to do so. Yeah, you're right, definitely much, much better. Anything else about this subject? One, two, three... no. Okay, then I'll at least go through the work that has been done in the last two weeks on the agent and controller images. We have seen a few version bumps on the SSH agent and the creation of JDK 21 preview images. We had three new releases, the latest one being 5.15; I think it was from yesterday. And we have ppc64le, s390x, and arm/v7 for JDK 21 too, because of course we already have arm64 and x64 (amd64). It doesn't change anything for the end user, but we are now tracking the JDK 21 version, just in case it gets updated. Then we bumped the Node Alpine Docker image, but that's just for the tests, we don't really care, and we moved the Debian Bookworm Linux version to a newer one. The most significant change was moving from Bullseye to Bookworm a few months ago. It was not months, it was weeks. Okay, weeks then; one month ago for the SSH agent image, and it was a breaking change. It's quite recent and it's breaking. It is. We had a few users complaining that it's not behaving the same way. And because a few users said that, I was, you know, inclined to listen to other users saying: oh, I use Bookworm and my Jenkins plugin does not work anymore. For example, I had that report for LDAP. And frankly, I was eager to believe the end user's claim that it was the fault of the move to Bookworm. But no: we have, what is it, thousands of users of the LDAP plugin, for example, and the statistics show that lots of them have moved to Bookworm and it's working for them. So it's not linked to Bookworm. Yes, it's a breaking change, but that doesn't mean it's responsible for everything that doesn't work anymore for you.
Yes, I have a proposal though: in the same way Mark is working on the JDK, I believe the platform SIG could try to draft something about what the dates will be. Because the change is needed, as you said, but it came out of nowhere, and now we have to communicate after the fact. My proposal, for the next major operating system update of the Docker images, is to be proactive and talk about it before the change happens: we say that at this date we will default to this version. Because we treated that change as a normal update, which it's not. Anyway, it's just a matter of communicating, and eventually writing a note in the changelog. The proposal is that at least next time we have a pull request with a breaking change, the pull request body must already contain the changelog entry, saying: hey, if you are using this, be careful and use the previous version. The good thing in all of this is that users can still stick to the previous version. We have strict versioning now for the agent images, so anyone can pin a given version. If they use "latest" and it breaks, then it means their system has to be improved, not ours. Another point: one of the big changes that has already bitten some users is that the default Python installation is now really, really pushy about making you choose a virtual environment among the multiple solutions. But that's a good thing, because if you install a pip package globally on that image, it could break the whole distribution. That's why you must stick to a virtual environment, or use pipx if you want something to help you. And the reason I'm mentioning it is because we have a lot of users who need to install Python and libraries on the agents. Yes, I've been there. I have a set of Docker images based on Jenkins that do use Python, and of course, when we moved to Bookworm, I was bitten like the others. I'm not a Python specialist, not at all; I made a quick Google search and found the answer. But yes, I was surprised.
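The Python change mentioned here is Bookworm marking the distro interpreter as "externally managed" (PEP 668), which makes a bare `pip install` into the system site-packages fail. A minimal sketch of the per-job virtual environment workaround, using only the standard library (the paths are illustrative):

```python
# On Debian Bookworm the distro Python is marked "externally managed"
# (PEP 668), so `pip install <pkg>` into system site-packages is refused
# with an externally-managed-environment error. The supported pattern on
# an agent is an isolated virtual environment (or pipx). Minimal sketch:
import sys
import tempfile
import venv
from pathlib import Path

workdir = Path(tempfile.mkdtemp(prefix="job-venv-"))
# with_pip=False keeps this sketch offline-friendly; on a real agent you
# would use with_pip=True and then run `<venv>/bin/pip install <packages>`.
venv.EnvBuilder(with_pip=False).create(workdir)

bin_dir = "Scripts" if sys.platform == "win32" else "bin"
python_bin = workdir / bin_dir / ("python.exe" if sys.platform == "win32" else "python")
print(python_bin.exists())  # → True
```

Packages installed through that interpreter land inside the venv, so they can no longer break the image's system Python, which is exactly what Bookworm is being pushy about.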
And it wouldn't have been listed in the Bookworm changelog, right? So even if we had told users, here is the Bookworm changelog, be prepared, they wouldn't have been prepared for the Python move. It is what it is, but we can do better. Hervé is telling us that he has finally resolved the Docker SSH agent Windows tests, by the way, but tests are still failing on Windows Server Core. Would you like to tell us anything about that, Hervé? Oh, that was just a note about some little progress. But yeah, I was happy because my tests were finally working, and then I realized I had commented them out. So yeah. Almost there; you're progressing. And the thing is, when these tests pass, I will be able to put my refactoring pull request in ready-for-review. And then we will have all these Docker agent builds quite similar, and it will be far easier to regroup them in one repo instead of... That's cool. Yeah, thank you. Because I think neither of you likes the way we have to republish a Docker agent image whenever a new version of the original one has been published. So having a monorepo for the three of them will definitely help. Thanks a lot for this whole big refactoring work; it's something that is much, much needed. Thank you. Then for the Docker agent, we also had a few version bumps and a breaking change; we had four releases. We bumped the Docker client, we had a manifest fix for Git LFS on Windows, and then we bumped it to 3.4.0 and fixed an installation issue. Of course. And for JDK 21, we added a few platforms that were not there before. And of course, a breaking change: we moved to Bookworm. That's funny, because it's way later than for the Docker SSH agent. Okay, it is what it is. Did you hear any end users complaining about that move? Not yet? Nope, not yet. Okay, so it may just work. Same for the inbound agent: we had a few version bumps and we also moved to Bookworm.
We added a few platforms to the JDK 21 preview images, and of course bumped the parent image. As I was saying, you have to deliver a new version of the inbound agent once you have released a version of the Docker agent. Yep. By the way, the updates are not synchronized with the Docker agent images: we are missing two releases because the updatecli process is checking for an arm/v7 image which it doesn't see, but it looks like it is published, at least for the -8 version. So that's most probably an updatecli issue; I'm going to have a look at this one. I think it's minor, but yeah, we need to check. No worries, it's just a minor thing. It could most probably be a bug in updatecli, though, that has been fixed since then; that's why I was waiting for today with the last change. Got it, thank you, Damien. And then for the controller, we had of course four new releases, with the new LTS version and three other weekly versions. And the last changes we have are a bump to Debian Bookworm and a bump of the UBI image to 8.8-1067. Now that I'm done with the list I had prepared, let's go to a new topic, which is Helm charts and Kubernetes operators. Spoiler alert: I don't know anything about that. So I'm showing you the issue that was raised by Daniel Beck earlier today, and it looks like he found a bug. Yeah, go ahead. It's good. I was about to say that he opened it. Can you scroll up a bit, please? Yeah, it's a good one. Okay. So Damien, would you have anything to add about that? So we have that question: is the Kubernetes operator we have here still a maintained and used project? And should we invite its maintainer to the platform SIG? That's a raw question. Is it part of the official Jenkins platform distribution? Is it a P1, P2, P3 project? Yeah. As a Kubernetes administrator, I tend to avoid operators as much as possible, so I don't know.
I've never had a case of running a Jenkins controller that could be solved by this operator, because it makes some special assumptions that I don't like, but that's a personal opinion; it doesn't mean that you should not use it. However, yeah, its contributions are decreasing. So I propose that we try to invite its authors, if they want, to speak about that, and, if we have users of this one, to encourage them to raise their concerns here, to see what we can do about it. Got it. I don't know them, but that doesn't mean anything. Okay. The Kubernetes operator is an alternative to the official Jenkins Helm chart if you have a Kubernetes cluster and you want to spin up one or more Jenkins controllers and their agents. The idea of the operator is that you use the Kubernetes representation: basically, once it is installed, instead of helm-installing a chart for each controller you want, you just provide a few YAML files, and then you have a "kubectl get jenkins" custom resource. It's not a pod, it's not a container, it's a Jenkins managed by the Kubernetes API endpoint. That's the goal of an operator in general. Yep. I don't know much about this one, so that's why I propose we discuss it: if we have someone motivated taking over the project, that's part of the Jenkins platform ecosystem, because we already manage the images and the Helm chart here. So it would make sense to have them here, or to write down and synchronize with them that it's not officially supported, that it's just a project somewhere by someone else; it's just that it's inside the Jenkins CI umbrella. Yep, you're right. So let me put an action item, maybe for me. Oh, you see, this project was almost abandoned before, as you saw in the contribution graph. It's from VirtusLab, so it's a company, initially, and they have accepted a new contributor from outside of VirtusLab. So VirtusLab handed over the maintenance of this project.
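For readers who have never seen the operator, a minimal custom resource looks roughly like this. The field names approximate the jenkinsci/kubernetes-operator CRD; treat them as a hypothetical sketch and check the operator's documentation before using it:

```yaml
# Hypothetical minimal Jenkins custom resource for the Kubernetes operator.
# Once applied with `kubectl apply -f`, `kubectl get jenkins` lists it
# like any native Kubernetes resource, and the operator reconciles the
# underlying pods and configuration for you.
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
  namespace: jenkins
spec:
  master:
    containers:
      - name: jenkins-master
        image: jenkins/jenkins:lts
```

That is the contrast with the Helm chart: with the chart you run one `helm install` per controller, while with the operator you declare each controller as data and let the operator's control loop do the installing.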
So this new maintainer is motivated, but there are a lot of issues and a lot of needs from its users. So, yeah. In that case, Bruno, is it okay to invite them to the platform SIG, so they can get at least partial help from us? No problem for me, of course. You're driving; you don't have to know every piece of technology here in detail. I mean, I don't know the system calls by heart either, but yeah, it would be interesting to invite them and to see the problem statement with them. Yep. So I will look at the commits, find the handle of the new maintainer, and invite him directly through GitHub if I can, or I'll try to find him. Thank you. Oh, thank you. See you, maintainer. Yep, and a few words saying we're in need of help. Cool. Thanks a lot, folks. Anything else you would like to address before we wrap it up? I'll take that as a no. Thanks a lot for coming to the meeting. The recording should be available within 24 to 48 hours, and see you two weeks from now; in the meantime, enjoy Jenkins. Bye bye.