So, has the recording started? Yes. Oh, sorry about that, I thought we were still waiting. Anyhow, sorry, and thanks a lot. Welcome, everyone. This is a Jenkins Platform SIG meeting. Today is the 6th of June, 2023, and we have Mark Waite, Damien Duportal, and Kenneth Salerno. Hello, everyone, and thanks for coming. We have the usual open action items, but one of them is actually progressing; we'll see that later on. Then we have the ongoing work, what has been done, and an open question at the end.

First of all, for the open action items, we have the usual Docker images: we should let people know when our images get outdated, and so on. Most of the time it doesn't progress, but Mark, you created, or participated in, a major update regarding deprecation. We talked about that in the previous meeting, and the first effects should be seen from this week or so. They already have been seen, and the first bug report has already been received, about a mistake I made. The first fix is merged and will arrive in today's release of Jenkins 2.409. That's super cool. But it's not yet about the Blue Ocean container image? Correct, it's not. So you set up a framework, foundation work, call it what you like, that will allow us, whenever the time comes, to say "hey, it's deprecated." Am I right about that? You're correct. We need more work in order to deprecate containers independently of the operating system on which they are running, so that a container image can be deprecated even though the operating system is still fully supported. These Blue Ocean containers, for example, are actually running Debian, and we continue to support Debian, but we don't want to support that container. So what we'll do is add a little bit of extra content to those container images that will let the detector say, "oh, you're running a deprecated container image, switch."

Cool, good to know. Thank you, Mark. So what about the first bug that you already corrected? I guess it was not major. No, it was just that the warning mistakenly appeared telling you that Fedora 38's end of life had happened. I think Kenneth is actually running Fedora 37, so he didn't detect it. I didn't detect it, but a user did and said, look, Fedora 38 is very much still alive, please correct your message. Oops. And it was only an off-by-one error, I don't know what everybody is complaining about. Okay, that's funny. Last time I looked at community.jenkins.io, I didn't see anyone complaining "why are you saying my operating system is out of date?" or something. I don't know if this has happened since yesterday. No, well, last week's weekly release included it, and the only people who would see that warning are those who update to the weekly. We like it when they update weekly, but the reality is many people don't. So if you're running a weekly but you don't update weekly, then you're at risk, but we can't fix that, and we can't help people who use an old operating system and don't update the weekly. And actually, I'm trying to think whether I would catch it in my build environment. I did switch to Fedora, so I shouldn't... yeah, but. Yeah, but if I remember correctly you're on 37, aren't you, not 38? Well, I pulled down the latest base image, so it would be 38 for me, if I built while that was out there. Well, then you may see that message if you consume 2.407, and the next release, 2.409, will fix it. I see.
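To picture Mark's idea of "extra stuff" in the deprecated images, here is a purely speculative sketch; the marker file, label, and the detection side are assumptions, not the actual implementation. The point is that a deprecated image could carry a small marker that a detector reads independently of /etc/os-release, so the warning fires even though the underlying Debian base is still fully supported.

    # Hypothetical sketch only -- not the real mechanism.
    # Mark a deprecated container image so a controller-side check can warn
    # users, even though the underlying Debian base is still supported.
    FROM jenkins/jenkins:lts-jdk17
    # A marker file a detector could look for (name and format are assumed).
    RUN echo "image=jenkinsci/blueocean deprecated-since=2023-06" \
        > /etc/jenkins-image-deprecation
    # An image label serves the same purpose for tools that inspect metadata.
    LABEL io.jenkins.image.deprecated="true"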
So I have to check my build log now, you've made me curious. Yeah. Your build cannot succeed, because there is no Jenkins war for 2.408; there is just a new git tag. So your build must fail, or you should check your build, because that would mean you are downloading an unexpected war that comes from I don't know where. Be careful with that one. Yeah, so just check that. The other thing is that you would only see it if you look at the user interface, right? You have to look at the top-level page. It doesn't contaminate any build logs; it just appears at the top as a little warning indicator. Back to you, Bruno. Yeah, thank you, Mark.

Next was the ongoing work. I started, I think two weeks ago, work on the Alpine images with Damien. The first one, which was it? The Docker SSH agent? I can't remember. Or the Docker agent. It worked, but then I stumbled on an issue related to regexes, of course, with updatecli. So for the time being it's stuck, waiting for some help. Yes, without regexes, timestamps, and character sets, we would be out of work. Anyhow, what else? Yes, the CentOS 7 end of life. So Mark, your pull request is merged as part of 2.407. You got some warnings from users because there was something about Fedora, but about CentOS 7, no news from anybody for the time being. So the blog post is now visible. Thanks a lot for sharing that. And I guess if we have a look at the logs for the jenkins.io website, we'll see a peak from now on for this article, hopefully. And we have to remember that this will appear in the LTS in August, at the end of August and not before. Mark, aren't we supposed to go to... oh no, we don't know yet. Sorry, it would be 2.413.x. We don't know exactly which version; that's why there's an X there, right? Yeah, almost there. Yep, so it's still the likely candidate, we'll see.

Cool, next thing: we have to replace the CentOS 7 usages in our Jenkins controller containers. That was something you proposed in the last meeting, Damien. So I may have misunderstood what you were looking for, but nonetheless I made a draft PR that I wanted to be an open discussion with the community, except that this was not clear at all in the description of the PR. So I have something that works, but it's very much a draft. Of course, there are lots of things which are hard-coded, like the checksums, for example. So it does work, but it needs a lot of work, and I'd like, if possible, the community to chime in and say whether that's a good idea, a bad idea, or whether we should do something else; we'll see. Damien, you had quite a lot of pros for doing so, but also maybe a few cons. No comment, except that this PR demonstrates that we can keep the multi-stage build, with jlink, using the direct installer from Temurin. The initial idea was to consider that solution as a fallback if we see issues with the AlmaLinux and UBI selection, in the eventual case where we cannot use a parent image that has a compliant JDK installed within it. The probability of this is low, but we weren't sure, so we had to check. So that was the proposal: if we are stuck on one of the architectures, we could directly download the official Temurin archive, unzip it, and build the runtime with jlink, so that the official JDK, which should be compliant, is jlinked and adapted to our image. What you did demonstrates that it works very well.
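As a rough sketch of the fallback Damien is describing, and of the general shape of the draft PR (the base image, URLs, and module list here are placeholders, not the actual code): download the official Temurin archive for the target architecture, verify a pinned checksum, run jlink to produce a trimmed runtime, and copy that runtime into the final image.

    # Illustrative multi-stage sketch only; the real draft PR is the reference.
    # The download URL and checksum are passed as build arguments so that
    # updatecli / Renovate / Dependabot can keep them up to date.
    ARG TEMURIN_ARCHIVE_URL
    ARG TEMURIN_ARCHIVE_SHA256

    FROM alpine:3.18 AS jre-build
    ARG TEMURIN_ARCHIVE_URL
    ARG TEMURIN_ARCHIVE_SHA256
    RUN apk add --no-cache curl binutils
    # Fetch the official Temurin installer archive and verify its checksum.
    # (Assumes an Alpine/musl Temurin archive when the base image is Alpine.)
    RUN curl -fsSL -o /tmp/jdk.tar.gz "${TEMURIN_ARCHIVE_URL}" \
     && echo "${TEMURIN_ARCHIVE_SHA256}  /tmp/jdk.tar.gz" | sha256sum -c - \
     && mkdir -p /opt/jdk \
     && tar -xzf /tmp/jdk.tar.gz -C /opt/jdk --strip-components=1
    # Build a trimmed Java runtime with jlink.
    RUN /opt/jdk/bin/jlink \
          --add-modules ALL-MODULE-PATH \
          --strip-debug --no-man-pages --no-header-files --compress=2 \
          --output /opt/jre

    # Final image: only the jlinked runtime is copied in.
    # (The real controller image would add Jenkins itself on top of this.)
    FROM alpine:3.18
    COPY --from=jre-build /opt/jre /opt/jre
    ENV JAVA_HOME=/opt/jre
    ENV PATH="${JAVA_HOME}/bin:${PATH}"

The two build arguments are exactly what an automated pull request would bump together: the new archive URL, which encodes the JDK version, and its matching checksum.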
It requires a bit of code, and we would need to keep the checksum updated with updatecli, Renovate, or Dependabot, but the upside is that it works very well. Another reason was that sometimes, not all the time, when there is a significant JDK update at Temurin, the installer is available way sooner than the Docker images. Most of the time the default Ubuntu image with Temurin inside is available immediately for Intel, but for the rest it depends: sometimes the ARM images are there quickly, sometimes not, and the same goes for PPC, or CentOS, UBI, RHEL, et cetera. So if we have the installer, and the installer is the first thing published when they ship a release, especially in the case of security releases of the JDK, that would have been a compelling reason to switch from our easy multi-stage build on top of the official parent image to a multi-stage build like the one you implemented. Given the status, the lack of a clear consensus, and the effort required, we first need to update the base Alpine and the other base operating systems. The proposal I'm making here, chiming in, is to consider that an experiment, close it for now, and keep in mind as a community that we have that solution working in case we are stuck; then focus on continuing to track and automatically update all dependencies, and reconsider the solution once we track all operating systems.

I like that. Yeah, you told me the other day that it would be problematic to depend on the existing Temurin Docker images in case of a CVE, because it could take up to three weeks before we get the correct image. So it makes sense in that case. But also, I don't know the status of updatecli for these images. Is it already working, or are we depending on Dependabot? Dependabot? For the JDK, it's not easy with Dependabot. Most of the time Dependabot tries to jump to the latest of the latest, so we have to shut it down most of the time. And also, Dependabot is not able to update docker bake files; it doesn't know the syntax, it doesn't provide anything for that. So it's only... Okay, sorry. So it's already working, at least partially, somehow, right? For some. For some we might have Dependabot, when we use a FROM eclipse-temurin with the JDK version written directly in the FROM instruction, not coming from a build ARG, because Dependabot doesn't support Dockerfile build ARGs either; it only handles the FROM instruction. So there is a lot of work ahead to make that kind of change. Yeah, exactly. Making sure that we track the JDK version on all the images is a good first step. Getting the checksum... No problem. Getting the checksum is quite easy; we do that on the infrastructure and we have already experimented with it. It's a bit of scripting in updatecli, but technically there is no problem with this one. Okay, I see, that goes a bit better with... No problem. ...the updatecli regexes; I tried my luck with that. Yeah, and we can work on both in parallel if you are stuck on the regex and don't get help, or are frustrated with it, because I can be quickly frustrated by regexes as well. But in that case, for the checksum part: it gets the latest, whatever version of Temurin. We do that on the Jenkins infrastructure, and then you can have a dependency that says: from the retrieved version, do whatever request, grep, regex somewhere, and extract the checksum. So updatecli would open a pull request saying, okay, I'm changing the version here and the checksum here.
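To make the Dependabot limitation mentioned above concrete (the tags and ARG name below are examples only, not the actual Dockerfiles): Dependabot can propose a bump when the JDK version is written literally in the FROM instruction, but not when it is injected through a build ARG or a docker bake file, which is where updatecli comes in for the version and checksum pair.

    # Two alternative Dockerfile fragments, for illustration only.

    # Variant A: version written directly in the FROM instruction.
    # Dependabot can bump this tag.
    FROM eclipse-temurin:17.0.7_7-jdk-jammy

    # Variant B: version injected through a build ARG (or a bake file).
    # Dependabot does not understand this, so updatecli has to open the
    # pull request that bumps JAVA_VERSION and the matching checksum.
    ARG JAVA_VERSION=17.0.7_7
    FROM eclipse-temurin:${JAVA_VERSION}-jdk-jammy

The second variant is the pattern these images use, which is why updatecli, rather than Dependabot, ends up carrying the update.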
The security process is that the person in charge of reviewing and approving this would have to validate the checksum manually, outside the tool, before approving as maintainer, to be sure that, okay, updatecli hasn't been corrupted by a malicious process. Got it. That sounds pretty exciting. If you send me the string you're trying to match on, I have a lot of experience with regexes, I can help you. Cool, they are the Golang flavor, I guess. And it's not that I don't like regexes, I do love regexes; I regularly use Emacs with regexes with some kind of success. Even the online Golang regex testers tell me my regex works. That's where I am. Yes, Kenneth, I will send you something to chew on. Thanks a lot. My bet is that it's a problem in the user experience of updatecli, which is far from perfect in that area. Uh-huh, or the interface between my brain and the keyboard, which happens to be mine. Maybe that too. I don't think so, I don't think so. Anyhow. Maybe I just can't read the logs. But anyway, that's not the subject. Thanks, that's much clearer to me now.

And then there are all the operating system end-of-lifes which will happen this year. Yeah, that actually I think we can just delete from the notes. Yeah, thank you, Mark. It was from the last meeting, I copy-pasted it, bad boy. Then, frankly, I read the pull request, you know, your proposal to switch AlmaLinux from 8 to 9. I saw that you closed it after putting a hold on it or something. I read the whole discussion and, to be totally honest, I didn't get everything. There was a lot of input, and I don't know if you could make a summary of what happened in the pull request, Mark. I was overwhelmed by the amount of information, and I don't know AlmaLinux.

Yeah, so it ties exactly into the question Kenneth asked last time: why AlmaLinux instead of UBI? That question was asked, and Oliver Gondža, who maintains the UBI container images, said, yeah, you're right. However, there are limitations in UBI that are not there in the AlmaLinux container in terms of which packages you can use. So what Oliver suggested was: hey, when people ask for an upgrade, we should point them towards the UBI containers and see if that works for them. If it does, done, because it's one less container to maintain. No reason to upgrade AlmaLinux 8 to AlmaLinux 9 because we've already got a UBI 9 container that they can use. So if they need the Red Hat 9 level of operating system, of utilities, et cetera, they can just use UBI 9. Yeah, and I think there was some confusion from one of the users who thought you needed a subscription to install any packages, which is only true if you need to extend beyond what UBI comes with in its repository, right? And I use UBI all the time and find that it has the packages I need. So they need to detail which packages they need that are outside of that, and maybe those are even in the EPEL repository anyway. I thought Oliver's suggestion was well taken. And the UBI 9 container already delivers command-line git, for instance, and git large file support. So obviously it's not that you're blocked from installing any package, because we install those as packages. So it's a valid thing to say: if you need Red Hat Enterprise Linux 9 utilities, et cetera, use UBI 9, and we've already got a container with it. I mean, full disclosure, I do work for IBM, so I'm going to be a little biased, and I've been a Red Hat customer for almost 25 years, maybe longer. So I have a lot of experience with Red Hat in general.
But I find that with all of these derivatives that are rebuilding the source, there is a lag, right? They're not going to be quite as up to date. Maybe that lag has decreased recently and it's gotten better, but since UBI has existed, there's just less and less of a reason to be using these derivatives. So I just don't see why. Great, thank you. So did that cover it, did that give you the summary that you wanted to hear, Bruno? Yes, thank you. Much better to have you tell it to me than for me to read the whole thread and not get much out of it. Yeah, thank you. It's much clearer now.

Next subject: what has been done. A few updates on the Docker images, nothing major this time, no breaking change; let me know, Damien, if I'm wrong on that. So SSH agent 5.3.0, just a few dependency updates, nothing major. Docker agent, same thing, a few dependency updates or chores, nothing more. No update on the Docker inbound agent. Did I miss anything, Damien? No, that's okay. The Docker inbound agent should get an updatecli pull request today, because the release of the parent was done yesterday. So that's another example. I guess there have been multiple updatecli runs, so that might be failing; we need to check why, but we should be able to use the Docker agent parent soon. Why only minor updates? I didn't spend too much time on the agents this week because, on my infrastructure side, we migrated the controller, the private controller in charge of pushing images to the Docker Hub, from AWS to Azure, with a brand new machine. So I didn't try too many updates, but this one will still validate end to end that the new controller is able to successfully pick a tag, build it, and push it as we expect. That was an under-the-hood change, but that's why the changelogs here are tiny. Yeah, no problem. You did a major update, upgrade, move, whatever, this week, so of course nothing was expected on this side. Thanks a lot for the work you did, by the way.

Now Damien, there's another subject for you. We've been talking about Docker Hub stats for weeks or months now, and you have created a shared spreadsheet with lots of information. Could you tell us a few things about it? Yes, so these are the stats directly from the Docker Hub, for each month. I only went back in time to the beginning of this year; we can go back to previous years if needed. I don't know how far back we have access. I'm not sure who else has access; I'm admin on the organization due to my infrastructure hat. We might be able to open that access to other people, because me being a bus factor is not really healthy. Right now I've added to my monthly billing and infrastructure reporting the task of also reporting the download stats here. That's short term. On the content: for each month you have two tabs, one which is a summary per repository, so per image, and one with the details per tag and per checksum. You can see the number of pulls by tag and by digest. There is also the number of unique IPs that pulled the image during the time period. I don't know what "version check" is, though. Is it someone only downloading the manifest and not the layers? That might be it, but I don't know the answer. Yeah, that's almost all. One thing, after a quick review: we still have a lot of downloads of images that shouldn't be used anymore, such as jnlp-slave or slave.
So yes, the work that Mark is doing on deprecation: I think once it's done for the operating systems, we should think about making sure that these images generate a big red message on the Jenkins console saying, hey, you are using an unmaintained image which is full of CVEs and stuff, something really scary, right? Yeah, but there's no magic, I guess; we would have to supply a new image, you can't hack it. Oh no, no, in this case it's much better: we've already supplied the new image, they're just not using it. Yes, for these two we supplied the new images long ago, and they should have switched long ago, right? Because these two really map to this one, right? They're these right here. So at least for those two legacy ones, people just aren't paying attention to their system; it works and that's good enough for them. So it will be interesting to track these numbers once you have the operating system deprecation in place, because these images shouldn't be updated. If they are, we have another problem, which is on our side; but if they aren't, on that hypothesis, they will be using old operating system versions, so at some point in time the old Alpine, CentOS, and Debian versions will be flagged as out of date. In any case, that will have an impact.

Okay, sorry, I wasn't clear enough. My question was: will we have to rebuild those kinds of images so people get a big red warning, or will your system, the system you implemented, detect them and show the information? The idea, which is still evolving, is that the controller should somehow see that it was asked to launch a container image whose label is jenkins/jnlp-slave and should say, somewhere, you're using a container image that we know you're not supposed to be using. That was the concept in my mind. Okay, we'll have to discuss this; I don't think this is the place. There might be a lot of cases that get missed, but it's still useful. I feel you will have to implement that in each of the plugins that have some concept of container: Kubernetes plugin, Docker plugin, Yet Another Docker plugin. Thus, more discussion needed. Absolutely, no question, more discussion needed. But yeah, you're correct, Bruno: the idea is that we should avoid having to build a new version of these images. Yeah, definitely. Thank you.

So yeah, that's all for the statistics. Well, eventually the detailed statistics can help us with the operating systems, but we need to create dashboards or spreadsheets with formulas to extract all the variants, because we have the SHA here. How do you tell if someone wants the AlmaLinux image but pulls it by SHA? The SHA does not contain that information. So there is work to do to associate the SHA with a tag; we need another database or source of information in order to really join the data. This is raw data only. Okay, I don't want to name names, but Jean-Marc loves to do that kind of spreadsheet, graphics, and so on. You never know, he could help with that. The last question I have about that: is it a manual chore, or is it automated in some way? No, and I don't know how we could automate it right now, because that would require highly privileged credentials, even if we restrict them to this one thing. That means a token that we need to rotate, with 2FA enabled. Same for adding... I mean, it's a matter of credentials. So yeah, the time for automation and tests clearly isn't worth it compared to five minutes per month to update it.
The important part for the community is ensuring that there isn't only one person with access to the raw data. At least a second person active in the community should have this permission; that's really important and healthy. So what are the other use cases for this data, besides trying to figure out whether people are using images we don't want them to use; anything else useful? Sure, well, deciding... I see one right here. There's this project called Evergreen, a long-ago project that we switched off. There's another one called Jenkinsfile Runner, and these numbers are both indicators that they are really switched off, right? The very small numbers there are good measures for us: ah, okay, we do not have many active users who are running those things that were switched off. So that kind of data helps me a bunch in deciding, should we document anything about Evergreen other than its non-existence? No. That kind of thing. Okay, if there are 21... sorry, go ahead. 21, I was just thinking that 21 unique IPs is not zero. It's not five, but it could potentially be one farm of servers. Well, and for me it's the ratio between this number, 114,000 unique IPs, and 21. Given that ratio, or even the agents ratio, which is, let's see, where's the inbound agent? Inbound agent is here, towards the top. Yeah, 98,000. So 100,000 to 21, three orders of magnitude, is more than good enough to say that we've succeeded in not having that running widely.

Mr. Shuppi's question was: will we ever switch them off for real? I mean, deleting any of these images from Docker Hub to get people to react, or... I don't think there's any motivation for us. Is there in general, Damien? I've not seen other projects delete containers. If you choose to run a container, all right, you're accepting the problems that come with it. I don't see any benefit to us in deleting a container. I know that Docker has an inactivity garbage collector which deletes images that haven't been pulled for years, so those images are automatically deleted. The thing is, I'm not sure, for low-traffic images, what the real threshold is. Is it absolutely zero pulls at all? Or, if there are a few pulls coming from an internal Docker bot that checks things, or from version checks, does that count? I don't know the details, but I saw some of my personal images, really old ones from the first years of Docker, disappear. So the process has already been working since at least 2020; that's also why they added the API rate limits. So some of our images might disappear at some point in time. That's a good question. My personal opinion is yes, we should remove these images. But that could be a question to send to the product managers at Docker, because that's a problem they could help us with: is it possible to still have a page on the Docker Hub while the image is removed from their registry? That would help: okay, we keep a valid URL on the Docker Hub, it's not indexed by the search, so you cannot search for that image anymore, but if someone has an old link pointing to that image, or knows how to reach the page directly, then we could say, that slave image has been deprecated for five or six years, and now you have to go there. I think it's worth asking Docker product management; they might already have tools for that. And that question extends to the official Jenkins image, which is already deprecated. I'm not sure what its status is; I'm almost sure you cannot find it by searching now, but I might be wrong.
So it could be interesting to check with them directly, because these are specific, one-time actions to be taken by Docker. Thank you, Damien. And I'm afraid this is all we had. We're already late, because we started late, because of me, so I guess we'll keep this last item for next time. Thanks a lot for your time, folks, thanks a lot for your help, thanks a lot for coming; we do appreciate it. The video should be available on YouTube 24 to 48 hours from now. Have a good rest of your day.