Hello everyone, this is the Jenkins Platform SIG meeting. Today is February 14th, 2023, and we have Mark Waite with us, Kevin Martens, Céantan Mondal, sorry if I'm butchering your name, and myself, Bruno Verachten. On today's agenda we have some open action items, of course, some ongoing work and discussion, and we'll talk a lot about CentOS 7, I'm afraid. Then a little slot about s390x.

So let's start with the open action items. We still have, and I think we'll have it for several meetings, the review of the platform support for containers. As far as I know there has been no progress on this subject; it's the same status as in the last meeting. It has been done for the infrastructure, and there was still one PR from a user. Yeah, sorry, go ahead Mark, you know much more about it than I do. Actually, I propose we take this one out of the action items list completely. Really? And just say that we've got more to do, but we'll do it as we identify action items, because I think we've got work to do, but we'll do it in pull requests. Cool, OK, there we go.

Unfortunately, Damien couldn't join today, but he still has to check the Docker image download statistics per platform and version to help us decide whether or not we should remove certain images. So it's in the backlog. Has it been done yet? I'm not so sure. Mark, since you attended the infra meeting, can you tell us more about that? I don't know if he did it or not. The importance of this one was brought to the forefront when we issued a security advisory for the container images earlier this week, or maybe it was last week: a container content security advisory where command-line Git had to be updated. And I think that's the first time we've issued an advisory for the Jenkins container images. So thanks to the security team. We update the images all the time, right? They are pretty much up to date, thanks to you and the other people in the group, and thanks to Dependabot, which proposes new changes every week. So more or less we are up to date, but this time that was not enough. Right, correct. And part of that was that the platform vendors took a surprisingly long time to issue the change for their products. It took longer than what I would perceive as usual for the Debian project to issue their change, and Red Hat likewise was not as fast as they typically are. Okay, but now we're up to date and it's working. I saw new releases, so I guess it went through to the end and we now have versions of the Docker images without this vulnerability. Correct. Yeah. Yes. Okay, cool. Thank you, Mark.

No progress as far as I know regarding the Blue Ocean container Docker images. No, but I think Damien's got the right idea here, and I put a note a little bit further down: finding a way to communicate this to users may be a place where we ought to make a code change to Jenkins core that presents an administrative monitor to the user if a particular flag file is detected on the file system. Today, for instance, I get a message that nicely pops up and says you've got a new Jenkins version available: 2.391 is available, you're running 2.390. That's a nice help. Well, why not a similar alert that says you are running a container image that is no longer actively supported by the Jenkins project? Then we just tell them: sorry, your container image is in fact no longer supported.
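For illustration, a minimal sketch of how a deprecated image could carry such a flag file and the startup warning discussed here; the marker path, message text, and base tag are assumptions for the sketch, not an agreed design:

    # Illustrative tombstone image; marker path, wording, and base tag are hypothetical.
    FROM jenkins/jenkins:lts-jdk11

    # Drop a marker file that a future administrative monitor in Jenkins core could look for.
    RUN echo "This image variant is no longer supported by the Jenkins project." \
          > /usr/share/jenkins/ref/.deprecated-container-image

    # Warn in the log, pause ten seconds so the message is hard to miss, then start Jenkins as usual.
    # (/usr/local/bin/jenkins.sh is the launcher the official image uses today; adjust if that changes.)
    ENTRYPOINT echo "WARNING: this container image is no longer supported by the Jenkins project." && \
               sleep 10 && \
               exec /usr/local/bin/jenkins.sh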
And the tombstone event would be just as suggested here: in addition to the ten-second sleep, we could write that file to that location. Then, when the Jenkins controller starts, it reads it and says, ah, you're running a deprecated container, I need to tell you that. Okay, so you are reusing an existing mechanism that already works for the Jenkins update notification. Well, and even better, we're not relying on the person who is running that container to read the log files, right? The real problem there is that most of us, embarrassingly enough, don't read the log files, because until there's a problem I'm not motivated to look in them. Whereas these administrative monitors, I see them every time I start my Jenkins controller, until I clear them. Is that something that could make its way back into the latest LTS, if it ever gets accepted? It could be; it could be backported to LTS, absolutely. We've done that before. That would be nice, yeah. So I would get the message; I think I have a few controllers where Blue Ocean is still working. Anyhow, cool. Thank you, Mark. We still have to implement that, but whatever.

Next topic, I'm afraid, is still for you, Mark: the CentOS 7 Jenkins controller Docker image. While searching for information on this image, Damien found a few weeks ago that Red Hat will put CentOS 7 in maintenance updates. No, sorry, it has already been in maintenance updates since 2020, and will be until June 2024. It's kind of complicated, but mostly the question we had was that maybe we should just get rid of CentOS 7, because it's not really maintained anymore and it's a pain to get anything recent. For Git, for example, you know better than I do how it is on these versions of CentOS. Yeah, and I think this is one I'm going to take personally; Mark Waite wants to do this one because, well, you've heard my sort of burning passion against CentOS 7 and the pain that it creates. Yes, yes. So it's good when an open source contributor has a passionate thing and uses it for the benefit of the community. So in this case, my passion is, and I think what we need here is, a Jenkins Enhancement Proposal, a JEP, that proposes to end CentOS 7 support earlier than we would normally have ended it. Right now the natural tendency would be that CentOS 7 support ends when the operating system vendor ceases support of it, which would be June 2024, but really they've already ceased support for the Docker container. So we would be justified in saying we're going to stop support of the container image, but that would be different from most other things, right? Because our container image support is usually tied to the operating system support; we've never had a separate policy for container image support distinct from operating system support. Because of that difference, it makes sense that we use a Jenkins Enhancement Proposal to propose an early end of life for CentOS 7 across the entire Jenkins project. Basil Crow noted that he ended support for CentOS 7 in the systemd installer when he implemented it. Yeah. So it's not officially supported for systemd, it's not officially supported in the container image upstream. It still works, and we're not going to stop it from working, but the sooner we tell people this thing is unsupported, the healthier it will be for everyone to realize, hey, this is an unsupported thing. Now, as I'm talking, I wonder, maybe this is another place.
I may bring into that an idea for an administrative monitor that says "toxic operating system". Toxic operating system? Yeah. Sorry, sorry. Did my bias come out again, Bruno? It did. I'm afraid it has, yeah, but I can totally feel your pain. I've been there, I've done that. It was really difficult to get something as mundane as a recent version of curl or Git running on CentOS. I've been fighting with CentOS 7 for years and now we've got replacements. Yes, so that would use the same mechanism you cited earlier, which is to display something. Yeah, that would be great. Yeah. Well, and... Toxic system. Yeah. It's one way of saying it, but the reality is it needs a Jenkins Enhancement Proposal, because this is now a broader concept than just doing one thing, right? We've talked in Docs Office Hours about transitioning the install instructions to tell people to use Java 17. That's easy; that's just telling them to use something different. This is truly declaring end of life for something, and end of life before we would normally have ended it. Yeah. Okay. And it'll be part of that same JEP, so it's not really a separate JEP. It's just that the concept can be captured in the Jenkins Enhancement Proposal, because it will propose a plan for ending support for CentOS 7: here, here, here, and what's the timeline, et cetera.

I haven't done the exercise of migrating from CentOS 7 to something more recent along the same line, so I don't know how difficult it is, but we do have some alternatives for people who will have to migrate from CentOS 7, am I right? UBI 9? Yes. So UBI 8, UBI 9, AlmaLinux 8, AlmaLinux 9, Rocky Linux 8, Rocky Linux 9, Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 9, Oracle Linux 8, Oracle Linux 9. We've got lots of alternatives they can use. I think I got the message. Right. But is it easy to migrate to those distributions? Easy is a relative thing. The answer is: wow, you enjoyed the fact that for the last eight or ten years your base operating system was really not undergoing any fundamental changes. Well, it's a fundamental change to switch from 7 to 8, right? And from 7 to 9 it's a fundamental change. There will be surprises. There will be things they have to do because they've been building up debt, and now it's going to be time to pay down that debt. And to pay it back. Yeah, right, exactly, it's just the way it is. There will be tons of tutorials, by the way, helping people migrate from CentOS 7 to one of the distros you just cited. So it's not Jenkins' work to do that; people will find them, and they should be fine. Okay, hello, Daniel. Well, and that transition off CentOS 7, they will have to do it by June of 2024 no matter what, because Red Hat is ending the life of that product in June of 2024. The proposal here, which I'll write as the JEP, is to end Jenkins' support of it sooner, because we don't support it with systemd and the upstream does not support the container images today. But that needs a Jenkins Enhancement Proposal. Got it. Thanks a lot, Mark. If anybody has something to add regarding that subject, go ahead. No? Okay, thank you.

The last subject on the agenda was the IBM s390x agent maintenance. How did it go, Mark? Fine, I guess? I embarrassingly never even detected an absence of the agent. I never received an alert, I never received any red flag.
It was up every time I checked, and so it seems to be just fine. Looks like a success. Yes, yeah, IBM did a great job of performing that maintenance work. I don't know how long it was down; they gave themselves a four-to-six-hour window, and I suspect it may have been down for four or six minutes instead of four or six hours. Wow, they know how to operate. That's cool. I'm afraid I'm at the end of the agenda. Is there any other subject one of you would like to address?

None from... well, I take it back. So, the reminder that in April or May, Debian 12 will release. You don't need to make any notes of it; it's just a reminder for the rest of us that Debian 12 is releasing sometime in April or May, probably, and that when it releases, we will change the documentation on jenkins.io to recommend Java 17. Java 11 continues to be supported, but because they're dropping Java 11 from Debian 12, we'll switch the documentation to say use Java 17. Now, the containers will continue to deliver Java 11, even on Debian, simply because we continue to use Eclipse Temurin. So just be aware that the documentation will change. We're beginning the process of getting people's minds around the fact that Java 17 is also supported by Jenkins. Cool, thank you Mark.

And speaking of Temurin and Java 17 and containers, I was thinking of RISC-V, sorry, one of my hobbies. Is there a marker or something that would tell us that, yes, it's time to think of RISC-V, or that it's not yet time, it's too early? Because Docker is not available directly from Docker for RISC-V, for example, but you can build it yourself, and you can find a few vendors already providing Docker for RISC-V. We can also find non-official nightly builds of OpenJDK for RISC-V. So I guess we should maybe think of it when Docker is in official release, when OpenJDK is in official release, or is there anything else we should consider? I think we are still building some images for... did we remove ppc64le? We removed them, am I right? We stopped building ppc64le when IBM stopped giving us access to a ppc64le machine, and we had ended that support even before they stopped giving us access. We never actively built the ppc64le container images. But RISC-V, as it becomes... go ahead. So as RISC-V becomes more mature, I think eventually we may want to support it. I like that we support ARM, right? Our ARM support is well used and well liked; we even use it actively in the Jenkins project itself. So I don't see any reason not to eventually support RISC-V. Yeah, except that we don't have any ARM32 or ARM64 machines to test on, but it doesn't prevent us from building ARM32 and ARM64 images, am I right? Actually, I think we've been using... I regularly use ARM64 machines from Oracle Cloud to do the testing that I do. That's not provided by the Jenkins project, just me. And I think there are some ARM64 machines available on ci.jenkins.io as well, but they're not needed to build the container images. I'll have to go look and see. So I was just wondering whether we would need to have some kind of RISC-V machine somewhere in the Jenkins infra, or if not, whether just building the Docker images, with somebody willing to test them, is enough, or do we have to have a real machine somewhere? Well, we haven't had to have a real machine to do s390x support, for instance, so I don't think we're limited by needing physical machines to do it.
QEMU seems to be pretty good at doing the hardware emulation we need for the platforms we support: s390x, ARM, and AMD64. Okay, so I don't want to go too fast, but good indicators would maybe be an official Docker release for the CPU architecture and official Temurin OpenJDK builds for the target architecture, and then maybe we could think of it. Yeah, then it would be quite easy, yes, yep. Cool, thank you. Sorry for bringing that up once again. Anyhow, are there any other subjects one of you would like to address? So we've got Wadeck and Daniel. Daniel, did you have a question? Oh, Wadeck, hello. Yes, or more of a request.

So recently we published a security advisory... can you hear me? We can, yes, we can, yeah. Okay, so recently we published a security advisory with the Git vulnerability, and let's just say preparing that wasn't a lot of fun for Damien or the security team, even given the, let's call it, diversity of images we provide. So this has been a recurring topic for a while. Personally, I wasn't a big fan of that diversity, mostly because of how long it took to generate all the new images when we deliver core security updates. But when we were attempting to understand where Git is installed, which images even exist, what their maintenance status is, that was quite painful. The worst one by far is the Windows controller image, I think, which hasn't been updated in almost a year and has Jenkins releases with known vulnerabilities. So either we do this right and update it at the same pace as everything else, or we just don't bother at all and throw it in the trash. I don't understand the current situation. And I think now I'll stop talking and let Wadeck make the case a lot better than me.

Yeah, thank you, Daniel, for the introduction. For the Windows controller, if it were just Daniel and me, I would rather propose: could we kill everything except one, to ensure that the one we have is good enough, and that's all, because in terms of security, the more surface we have to cover, the more difficulties. Yeah, I'm more concerned about the number of things we have there, in terms of different variants, multiplication of architectures, JDKs, all these kinds of things adding up in terms of complexity. Honestly, before doing the advisory, I thought it would be an easy task: what is the current situation for that CVE on that image? I was not expecting twelve, what, twenty images in total to check. I was negatively surprised, I will say. At the same time, I understand that we want to add everyone's favorite image and this kind of thing. My question, as I'm very new in this meeting, is: what is the interest of having multiple versions of the different images we have there? In the sense that what you want is to run Jenkins; the internal tooling provided by the image for the underlying operating system does not seem to provide huge value, at least from my point of view. So could you perhaps teach me a bit about the interest there?

So Bruno, are you okay if I start talking? Oh yeah, yeah, I was about to ask you. Go ahead, thank you, Mark. Okay, so first, to answer the open question that Daniel asked: shouldn't we just drop the Windows controller image? The answer is yes, we should. That for me is a very simple, direct answer. Clearly it is so outdated, and there has been no outcry about how outdated it is, that it makes no sense for us to continue maintaining the Windows controller image.
It just does not make sense. So that for me seems like, yes, that's just a good thing to say. It's akin to the same problem we have with the Blue Ocean container: we just need to declare that the container is deprecated. The Blue Ocean container was an old container we used years ago in the Jenkins tutorials. We've long since stopped using it, and we use the standard container. So it's a separately labeled container that makes no sense to keep touching. So, first answer, back to the Windows controllers: yes, we just need to drop them. For the controller it doesn't make sense, right? It's Docker on Windows, which we thought might be a viable thing, and it turns out it's not. It's a pretty good thing for Windows agents; it's not interesting to or used by Jenkins consumers to run controllers. If it were, they would have been shouting, outraged at how outdated that container image is.

So, first question answered. The second question then was: hey, why do we have all these container images? There are two general profiles for users on the controller side. One is the user who says, I want something that looks an awful lot like the server operating system I was using before. That's the Debian containers and the UBI containers: they look an awful lot like a server operating system, and therefore the tools that would have run on a server operating system behave the way those users would expect. Then there's the "I want the narrowest profile possible" crowd, and for them that's the Alpine image. Those two concepts are the reason why we have Debian and UBI on the one side and Alpine on the other. So what about Debian Slim? Yeah, good question. That was an attempt to give people something smaller in image size, but I don't know that there's a compelling benefit to it. So it's a good question.

And now AlmaLinux: that was a pull request basically justified by "yeah, we don't know where CentOS is going, so here's a pull request," and someone ended up merging it. And for something that we, the Jenkins project, are expected to maintain pretty much indefinitely, that's not how I would like it to go; I would like to see a better justification for a new image than "maybe the thing we're currently doing isn't going to be around for more than a year or so, so let's just preemptively create a new thing." That seems kind of weak, if I understood the justification correctly. Good point, absolutely, and your point is completely valid, certainly. AlmaLinux was added in a season when we weren't sure where things were going, and I'm not sure that AlmaLinux adds anything over UBI 8 or UBI 9, right? They're both available for consumers who want to run a container based on the core technology that underlies Red Hat Enterprise Linux. And it's a valid point to say, hey, should we end this one? So yeah, let's take it as a topic for discussion here, because I think we've got an overarching challenge: we need a way to notify users that their container image is no longer supported. And the suggestion we discussed earlier in the meeting was to possibly add an administrative monitor that looks for a flag file: if we detect this flag file in your container, we tell you with an admin monitor that your container is no longer supported. That kind of idea. That way we can end the life of these things and have a reasonable way of informing people that we're doing so. Go ahead, Daniel, sorry. Thank you, Mark.
No, Daniel, I was just wondering what you and Wadeck think about the idea of using Wolfi. Like adding another new variant? I know that it has a number of additional libraries and thus a reduction of CVEs there. We had a discussion about it through Sacha at some point with one of the people behind it, I don't remember his name. Yeah, the approach is interesting. Now, what I'm concerned about is more the fact that we are adding a new flavor every time there is a new trend in Linux and this kind of thing. The approach seems interesting, for sure, but if it's not adopted enough, we will be supporting something for, I don't know, 1% of the users, which is not what I would like, in essence.

And that's very related to my next question. It came out of the discussion we had with, I think it was Hervé and Damien, I don't know if you were involved as well, Bruno, during the distribution topic. I asked a stupid question to Damien and Hervé: which images are actually used nowadays across the different things we are providing? And my suspicion is that when you want to install Jenkins, you pick the first image that you discover: oh, it's working, okay, I'm taking that one. So please correct me if I'm wrong. Is there more intent, like what you mentioned before, Mark? That seemed very interesting, but is it the thing that covers the majority of users, or just the expert users who know exactly what they want? I'm asking the question in particular because updating Jenkins is a very touchy topic, and changing the underlying image is even worse, in the sense that most people do not know what they are running Jenkins on. That's a bit my hypothesis at the moment.

Yes, and Mark has another hypothesis, which is that people prefer distributions they already use in their day-to-day work. The thing is, there is a subject that is not an action item, which is that we'd like to have, I don't know where it stands, the statistics about the use of the different images, so that we know which images are still relevant and which ones we could get rid of because almost nobody uses them. For the time being we don't have the data to say whether your point is valid or invalid, or mine, whatever. We don't have the data, am I right, Mark? I don't have the data; we don't have the download data, but it is available. We've just got to have an org admin who extracts it, and Damien's got the permission to extract it, as far as I know.

Yeah, I'm especially asking the question because I was sent the data and did a quick analysis on it, and there were a lot of tags, and a lot of them are used, meaning each was perhaps five to ten percent of everything. There was not one tag with 99% of the usage; it was more distributed across everything that is available. So that's why I'm asking the question naively: is it random choice, or is there a real rationale behind it, that kind of thing? I'm not saying I think I'm right or wrong, I'm just asking the question to make sure we are all aligned on that. I don't have the answer, Mark. Yeah, and so I have a hypothesis, but I don't have any way to back the theory: I assume that people choose containers based on their past experience. Otherwise, why would anyone in their right mind choose the CentOS 7 container? So I think they're biased towards choosing one whose name they recognize, but I don't know how to even validate that assertion.
Even given that, I think it is perfectly reasonable for us to declare that we want to reduce the number of containers we support, because they are a liability in the sense that we need to support them; we need to keep maintaining them. When you choose the name of the... yeah, sorry, go ahead. The default tag that we use is latest, maybe, or jdk11 or jdk17; I think it's Debian behind it. And I'm surprised that it's evenly distributed, 10% or so per tag, because if people were not really making a choice, you know, just taking the default, everybody would be using Debian, or latest, or jdk11 or jdk17, and that's not the case. So it's pretty interesting to me that latest was the only one above the others, largely above, the only one that stood out in the statistics. But I did a five-minute look; do not take my analysis to be a very deep analysis, it's just a first impression, but yeah, latest was the most popular one. And my point there is that latest does not say whether it's Debian or UBI behind it, it's just latest. So my conclusion was that people are saying: oh, I just want jenkins:latest, I don't care what is behind it, I trust the project to provide me the best-suited image; it's not my task, I just want to run Jenkins. And if you do specify a tag, why do you specify it? Is it preference? Is it because you know that distribution better? That kind of thing. So yeah.

To give you more color on the situation: at CloudBees we receive a lot of customer reports that are mainly security scanner results, run against the Docker images that we provide, and often they are just telling us, oh, there are a lot of CVEs and this kind of thing. And what some of them would like is to be able to build their own image based on a golden image from their enterprise. Meaning that if their security team pre-approved the Debian base image, or the UBI base image, they build everything else on top of that image. So one of the things we were discussing internally was to provide that kind of support. And I'm just wondering if it's something that could also be applied to the community, in the sense that instead of providing multiple Docker images in general, we provide just one, the official one, along with the code and some instructions, I think already available on the website: if you want to build your own image based on UBI, based on CentOS, anything you want, you can just use that Dockerfile. But we would not be providing the images ourselves.

So, we've used a technique somewhat like that, actually, in creating our base images, and I found the technique rather complicated. It works now; it took us some time to adapt to that method. What we're doing is, in order to bring Eclipse Temurin into certain of our container images, we do a FROM statement that uses a container image they provide. And that has to be either an Alpine container image for the Alpine side or a CentOS one for the CentOS side, or in our case, I think we use their Ubuntu image, not as the base but as the contributor of the JDK. So the technique you're describing is one that is used, and we use it as a project as well. I found the usage of it rather complicated, and part of me wonders... I'm not sure I'm ready to sign up the rest of Jenkins users to take that approach, considering how simple the current approach is. It's worth discussing, though. Yeah. It's really open to discussion.
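For readers following along, a rough sketch of the multi-stage FROM technique Mark is describing, with illustrative tags and paths rather than the project's exact Dockerfiles:

    # Illustrative only; the real Dockerfiles in the jenkinsci Docker repositories differ in tags, paths, and details.
    # Stage 1: use Eclipse Temurin purely as the contributor of the JDK, not as the base OS.
    FROM eclipse-temurin:17-jdk-jammy AS jdk

    # Stage 2: the operating system the image is actually based on.
    FROM debian:bookworm-slim
    COPY --from=jdk /opt/java/openjdk /opt/java/openjdk
    ENV JAVA_HOME=/opt/java/openjdk
    ENV PATH="${JAVA_HOME}/bin:${PATH}"

    # ... the jenkins user, jenkins.war, and entrypoint would follow here ...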
I'm in the same boat as you, Mark. But maybe we could have one official image blessed by the security team, and some documentation saying that the other ones are not reviewed to the same level as the official one provided by the security team. Maybe. Well, or, to take what we do today: in order to package Eclipse Temurin, we do it with a FROM statement in a Docker container that comes after the original FROM statement; there's a second FROM statement that we use. We could conceivably do the same thing with Jenkins for each of our containers. Therefore, what I think Wadeck just proposed is to add one more container. Now, Wadeck, are you sure you meant it? Add one more container? That's what I heard: add one more container. But that container would only contain Jenkins; it would not actually contain an operating system at all, or if it did, we would advise people not to use its operating system, something like that.

No, no, let me rephrase the proposal. Not even a proposal, I would say a discussion trigger, nothing else. We provide a Dockerfile in the project that you can customize as you want, if you want to use Alpine or UBI or this kind of thing, but we provide only one image that is built officially by the project, and the rest are no longer provided as is. Meaning that if you really want the UBI one, you have to build your own image; otherwise you use latest, which is Debian, with that version, that support, and this kind of thing. That's just one way to reduce the support burden. And if someone really wants Ubuntu, they are able to do that, but it's not at the cost of the project, it's at their own cost. With that in mind, my hypothesis is that if you have only a slight preference for UBI, you will not do that build process yourself; you will accept the Debian version. If you have a strong requirement from your company, from your regulated environment and this kind of thing, you will build it yourself. That's a bit my hypothesis. I'm not an expert in that domain at all; I'm just giving you some information about what I'm seeing with other companies, enterprises that are sometimes very regulated, where having a single high CVE in the image blocks the production deployment. It's pretty interesting to look at and to see what could be done there to assist in general. Thanks.

So I think... sorry, finish what you were saying, I interrupted. Was that description clearer about what I have in mind in terms of the idea there? It was, thank you. And I think it fits well into the general discussion that's been ongoing about how we reshape the currently poorly structured build processes on the agent side, and consider whether we should do something different. Can we use what we learn from that restructuring to then also restructure the controller processes, so that they give the added flexibility you were describing: I want to build it entirely myself. Then we would be teaching people: look, here's how to build it yourself if you want; and if you just want to take what we've got, we used the build-it-yourself instructions and did it over the top of Debian for this one, or over the top of UBI for this one. Yeah. And perhaps, just to draw a quick parallel: if you look at the jenkins.io/download page, we have some official packages and some first-party provided packages, images and this kind of thing.
So that could be a way to also let people have access to their favorite image: oh, I'm running that pipeline for the Jenkins project, but I'm not the Jenkins project, this kind of thing. I just want to give the example; it's just an idea. I'm not saying it's what we need to do, but you can see how it could be. Certainly there are users like Gavin Mogan, Sam Gleske, me, who provide source code for our Docker containers, right? And our container definitions are the ones we happen to be using, and they're publicly visible. So good insight: maybe we want to introduce the concept of official support and references to unofficial images, and then if you want to register as an unofficial one, here's a place to do it. Yeah, interesting. Yeah, so maybe we should push that proposal further. Yeah, totally. Yeah, go ahead.

A quick note there, since I'm here, to explain a bit what I have in mind: it's because, behind this, if there is a CVE, something that is impacting, it's us, mainly the security team, who have to work on this kind of topic, pretty heavily and in urgency. If we have something that is easier to manage, it's a much better experience to work on, I would say. It's really about that aspect, which is perhaps a bit selfish, thinking about myself and my team, my team and myself, in that way. So yeah, that's the kind of thing I'm trying to push a bit.

Yes, and thinking about the word emergency, is there anything we could do to automate something so that it doesn't have to be an emergency? Should we provide something to test for CVEs ahead of time, so that you don't have to rush? Well, I would counter; I'm going to frame a different question. I think the best thing we can do is provide easy, lightweight, simple, rapid build processes, because the CVEs themselves, I consider them wildly unpredictable: which area will get a CVE and which won't. Git was the most recent, but it could be curl, it could be the C library, it could be all sorts of things. And for me that's so wide that rapid, effective build processes are what make it easier for security and easier for the rest of us. Sorry, Wadeck, did you get to answer?

There is another dimension to that. I'm following very closely what Red Hat is providing for UBI, because we provide that official image at CloudBees. And because of that, I'm following not every advisory they publish, but a lot of them; it's between one and ten per day in terms of CVEs. So we have a lot of CVEs nowadays in every image that you can scan in Docker, for sure. Are they really impacting the product, the application that is running on top of them? For most of them, no. So that's why 99% of the CVEs are not impacting. If you have an issue with GCC, yeah, fine: are you compiling anything in your image? No? Okay, so you are not impacted. It's not that easy, but that's a bit the idea behind it. Sometimes we have, I will say, some key library that we are looking at, for example OpenSSH, that is usually an impacting one most of the time, because we rely on it indirectly through the JDK and other things. Typically it can depend on the scripts that we are using and this kind of thing; most of the time it's not impacting, but it's all this kind of assessment that is important to do first, not necessarily for every library, but it's part of the job that we are doing, and it matters in terms of correction as well.
We have perhaps seen that with the Git vulnerability as a very good example. I think Red Hat was perhaps the last one to provide a fix, compared to others that provided the update very quickly. I don't remember, Mark, you provided the information that Alpine provided the fix very quickly, or Debian, one of the two only; the other one was waiting perhaps multiple days. It means that once we assess a CVE and the impact is, I will say, confirmed and we are impacted, we have to wait for these providers to give us the correction. If we are supporting ten different providers, one of them will perhaps provide no correction at all, CentOS for example. So in that case, do we wait for all of the images to be corrected, or do we just publish an advisory saying Debian is corrected, Red Hat is corrected, but for the rest we advise you to migrate to one that is corrected, which is not really sustainable either? That's also why I'm trying to push a bit for a smaller number of images in total. It's all the hidden part of security, with CVEs and this kind of thing, which is not exactly a paradise, I will say.

Yeah, I get your point, but from the end user point of view, a few months ago we tried to remove curl, for example, and users were mad at us because they couldn't use curl anymore. We told them, yeah, that's not a problem, go ahead and build your own image, and the answer was: no, I won't, that's not possible. And removing all the images and leaving just one, I think they won't be fine with that. But we'll see, we have to discuss it. I totally get your point about security; it makes complete sense to me. I do understand the level of pressure you have correcting all those images and waiting for the official providers to give the right corrections. I get it, but we'll have to find a compromise, I guess. We'll see.

Okay, so no disagreement on let's get rid of the Windows controller, and no disagreement on let's get rid of the Blue Ocean container image, right? Those two are longstanding action items for us. I do also think we want to restructure the agents so that they are sanely buildable, instead of the current very complicated thing that Damien just does not enjoy at all. And if Damien, the one who's building it, does not enjoy it, I can imagine the security team's experience is terrible. So, understood.

Also, perhaps, something that is more and more often seen in corporate environments is that companies want to build their own image in their own pipeline, in the sense that they have a license for some security scanner and they want to run it directly inside the pipeline, instead of relying on a Docker image provided by someone else that they scan afterwards. So it's also part of the shift-left approach in terms of security for Docker. Building your own stuff is, I don't want to say a new trend there, but it's growing slowly, I will say, no worries. So, are you okay if I ask a question on that one, Wadeck? Because your experience may help me understand. Is the pattern that they go back to a base operating system they've chosen and install exactly the packages they need? They may model it after the Dockerfile we use to construct the 2.375.3-jdk17 image, but they do all the steps themselves; they don't rely on anything from the project, rather they download the war file and use it inside there. I would say yes, no, yes, no, it depends.
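A hedged sketch of the pattern Mark just described, building on a corporate golden base and pulling the official WAR; the registry, base image, and package name are invented for illustration, only the get.jenkins.io URL pattern and the 2.375.3 version come from the discussion, and a real image would also add a non-root user and checksum verification:

    # Hypothetical example: registry, base image, and package names are illustrative only.
    FROM registry.example.com/approved/ubi9-golden:latest

    # Install a supported JDK from whatever repositories the golden image allows (package names vary by distro).
    RUN dnf install -y java-17-openjdk-headless && dnf clean all

    # Pin and fetch the official Jenkins WAR published by the project; verify its checksum in a real build.
    ENV JENKINS_VERSION=2.375.3
    ADD https://get.jenkins.io/war-stable/${JENKINS_VERSION}/jenkins.war /usr/share/jenkins/jenkins.war

    # A production image would create and switch to a dedicated jenkins user instead of running as root.
    ENV JENKINS_HOME=/var/jenkins_home
    VOLUME /var/jenkins_home
    EXPOSE 8080
    ENTRYPOINT ["java", "-jar", "/usr/share/jenkins/jenkins.war"]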
So the idea is that, depending on the context, some customers, users, corporates, or installers in general are redoing everything from scratch because they do not trust anything. Others are building the stuff just like us, but in their own pipeline, so that they can trust the scanner, their scanner, ideally. So it really depends on the level of trust they want to place in third parties. If they do not trust what we are building on jenkins.io, they can rebuild everything from scratch, the war, the HPIs, all this kind of thing. I'm not sure it's a good approach, because it's an open source project, but for the Docker image it could perhaps be easier for them to do that. It's also a matter of building everything you want yourself into the Docker image, removing some of the stuff, reducing the size of the image to reduce the attack surface. There are also some regulated environments where they want to put additional things inside the image. Typically some notes, some text files, as stupid as that, just text files saying this is the property of that company, which is a requirement by that company for anything they run in the cloud. I suppose it's just in case some security researchers, white hats, come across the thing and discover it: oh, it belongs to that company, I can reach out to them, instead of just thinking it's a regular instance of Jenkins without any other information. There are a lot of things they are doing in addition to what we are building. I cannot tell you it's only one thing, like they are all rebuilding everything from scratch; it really depends on, I will say, the regulation level of their environment in general. Thank you.

Okay, thanks. So the scenarios you described, I believe all or almost all of them are already attainable today, because I can do all of that already today. But what I heard, I think, is that it varies from one entity to another whether or not they rebuild the thing entirely themselves, or at what level they rely on us. If they say, I want everything verified, they may rebuild from first principles, right? From source code. Okay, thanks. I think it also depends on the size of the team or company that is using the Docker image in the end. If you're alone working on your Jenkins, I don't think you care to rebuild everything from scratch. If you have a team of 10 or 20, I don't think it's valuable yet either. But if you have hundreds of developers using the image, then for sure you'd prefer to have your own version. Thank you.

Okay. That covered the topics I had. Wadeck, Bruno, any others that we need to discuss? I'm done. So, one comment on what you mentioned: people were very upset about the removal of... Yeah. Do we know their use cases? Yeah, they described them in their complaints, and they were use cases that were, in their case, to their mind, valid. They didn't want to rebuild the image themselves; they wanted the image to have that fundamental tool it had before. So for them it was a loss of functionality. So where I'm going is: currently we're collecting telemetry about distributed build configurations, and, spoiler alert, around three quarters of instances have neither agents nor clouds configured. So what I expect is happening here is that people use the simplest setup and expect the basic OS tools to be installed to support their builds, when what they really should be doing is set up distributed builds, and they just don't bother. And for me, if you're in that situation, you should configure Jenkins differently.
Jenkins tells you to do it differently. It is not unlike many years ago, when we attempted to limit the permissions on the Jenkins home directory to the Jenkins user itself, basically 0700 in chmod octal notation. And use cases were described where people said, well, I have a build running on the controller and I just copy artifacts in a shell build step directly from the Jenkins home directory. At the time I don't think I was a core maintainer yet, and I sort of accepted that. But today I would just tell them to do it differently, because that's not how it's supposed to work. And so perhaps a better understanding of people's use cases would be useful here, for us to identify whether we as a project would even recognize them as valid, or whether we should offer alternatives like setting up distributed builds, and point out to people that doing anything non-trivial building on the controller is a really, really bad idea. Good point, well taken. And just in case: curl definitely should remain on the agent image. Yes, yes, no question. And we left it on the agent image; absolutely, it is still on the agent image. Yeah, there was no question there. That's like, okay, my passion: Git and Git LFS should be on the controller and on the agent. It belongs in both places, we need it; it's just too central to our operations not to have it. Yeah. So, perhaps just to apologize about the surprise topic at the end: we doubled the length of the meeting because of us, sorry for the intrusion. No, that's okay. We have a proposed agenda, but more topics, surprise topics, are welcome, I think. Yeah, thanks for coming. Thank you. And I think that's a wrap. Yeah, thanks a lot, all, for coming. The recording should be available within 24 to 48 hours, difficult to say exactly. And see you two weeks from now. Have a good rest of the day. Bye. Thank you, bye.