Hello everyone, welcome to the Jenkins Infrastructure weekly team meeting. Today is the 26th of July, and we have your host, Damien Duportal, plus Stéphane Merle, Hervé Le Meur, Bruno Verachten, and Mark Waite. Let's get started with the announcements. Weekly 2.361 was successfully released, at least the WAR and the Docker image. We have built our custom images, so they should be available on infra.ci and weekly.ci in the upcoming hour. I assume the usual release checks need to be done in the upcoming hours as well, so it will be considered finished only a few hours from now, after the checklist is done by the person in charge of the release. Is that correct? That's correct, yeah. So the weekly release, you're right: WAR and Docker image are done. I haven't checked that the job completed successfully, but other things look just fine. Nice. Second announcement: just a reminder that for the northern hemisphere this is summer, which means a lot of people are going on holidays. I'll be off starting tomorrow, included, for eight days. Next week, Hervé is going on holidays for two weeks. And I assume that Stéphane, you are going on holidays in three weeks, which means during the month of August some of us are going to be there or not, depending on the week. Mark, I assume you might have some holidays as well. Yeah, I'm at a conference Friday, Saturday, Sunday of this week, so yes, lots of busy coming and going. So Hervé, Stéphane, and I have added our unavailability to the Jenkins Infra team calendar. That's a private calendar shared by the team where we note events such as renewing credentials and people being off. We could choose another location, but for this month the information is there for members of the team. For anyone else, that means we will have limited availability: our bandwidth will be decreased because we won't have the full team available. We'll try our best to fulfill requests, but expect some delays. Hello, Daniel. Hello.
Do you have the link to the notes, if you want to follow along? Are you asking me? I'm adding it back in the chat if you want the interactive notes. Everyone should be able to use it; it only requires authentication. I'm not sure if it works for you, but you can at least read the notes. Yep, you're sharing your screen anyway, so it's good. Thank you. The Jenkins LTS release candidate is scheduled tomorrow; that's the next announcement, Mark. Correct. Alex Brandes has volunteered to be release lead for that release. Kevin Martins is assembling the changelog. The LTS backports are relatively few, and Alex has confirmed that the Windows installer change that intentionally forbids Java 8 will not be used for Jenkins 2.346.3. It is currently being used for weeklies, so that's a good sign. OK, I've added it. Apart from that, next week we should have a weekly release. Is that correct? Correct. That one will be for Stéphane, just because of the calendar: both Hervé and I won't be there. Nothing unexpected, as usual. That should work; at least the ten past ones worked without any issue. If there are any issues, as usual, read the logs or ask Mark and the team for help. I will. Next Tuesday. Oh, and one more LTS item: the LTS baseline selection for the next baseline is scheduled for tomorrow as well. That doesn't affect the infra team, but be aware. OK. Regarding the LTS release candidate, is there any active participation required from the infra team? No, not as far as I know. OK. Which means tomorrow, try to avoid merging things on Kubernetes and Puppet, so the controller won't restart and people will be able to do the LTS release candidate. Sounds good for everyone? Good. OK. Anything else before we proceed to the operational concerns? None for me. OK, let's go. First, a recap of the tasks that we were able to completely finish during this iteration.
So there has been the installation of the GitHub ChatOps app. I understand that's work by Tim and Hervé, at least. It appeared there was some rush in installing it, as I understood, so it has been disabled except for one or two test repositories on the jenkinsci organization. Is my understanding correct? That's my understanding as well. And when I tried to use it on jenkins.io, it did not seem to work any longer, so I think I've confirmed it's disabled. It's enabled on core and on a jenkins.io test repository for the jenkinsci organization. OK, is it available on the jenkins-infra organization? Yes. OK, that one is also scaring me. Cool. So your mission, Hervé, will be to assess which repositories of that organization it has been installed on, because we have to check. I can disable it on jenkins-infra. It's not about enabling or disabling; it's: what problem do we want to solve, for which repository, and then a security assessment. For instance, I don't want anyone to use it for Puppet, but I can understand it's needed for jenkins.io. So we need an exhaustive list of the repositories where it should be installed. So yes, maybe we can disable it on jenkins-infra, and Hervé then assesses where it is needed, because it's no problem that it can be needed for some use cases, but we need an exhaustive list of where it is needed. The interesting case for jenkins-infra was to be able to ask for a review in any repository, as not so many people have write access there. I'm not sure I see what problem it solves in the context of the Jenkins infra repositories. The Repository Permission Updater; I think Tim spoke about this case. I disabled it, and we will gather all the use cases. Don't get me wrong, I think it's a really useful tool; it's just that for some repositories it's useful and for some it's dangerous. And since we have both parts of the equation, we just need to list where it will be useful without any risk.
That's the only thing I ask, but I don't mind having it on the system. I'm not speaking for the jenkinsci organization; I'm not an admin of that one and it's not the area of the infra team. But yes, thanks for that, because it can create new ways for us to manage and improve our management. The ChatOps will really be useful. I'm adding the issue to the notes. Next: there was someone who wasn't able to connect to repo.jenkins-ci.org. It was due to an invalid email on accounts.jenkins.io, so thanks everyone for solving that; not a lot to say about this one. We now have synthetic monitoring of the Artifactory login, thanks Hervé. That was a long-running topic that Olivier started months, even years, ago. We are now checking the full login process with Datadog: whether we can log in on Artifactory using a technical account on the LDAP. So when there are issues, we can immediately identify whether the issue is Artifactory itself being unavailable, or an issue with the LDAP: Artifactory is fine, but when it contacts the LDAP to do the login, there is an issue. So thanks for that work. I don't know if there are any questions. Something to note? Next one then. Disable all anti-spam protection on cert.ci. With the work Hervé did recently on the Puppet templating to allow cert.ci to have some exceptions and diverge from the usual ci.jenkins.io config, that has the advantage of closing that old issue that was requested by the security team. I think there are one or two more that we could treat with the recent Puppet fixes. You're welcome, Daniel. If some are still annoying, don't hesitate to ping us. I think between the set of plugins and this particular Apache rule stuff, those were the big ones. Thank you for cleaning those up. No problem. We have grown a lot in the Puppet area during the past weeks, so we are clearly more efficient at treating such elements.
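Going back to the synthetic Artifactory login check for a moment, the triage logic described above can be sketched as a tiny helper. This is purely illustrative: the function name and the status-to-cause mapping are my own sketch, not the actual Datadog synthetic test.

```shell
#!/bin/sh
# Hypothetical triage helper: map the HTTP status of an authenticated
# probe against Artifactory to a probable root cause, so "LDAP broken"
# can be told apart from "Artifactory down".
classify_login_check() {
  case "$1" in
    2??) echo "ok" ;;                      # login worked end to end
    401|403) echo "ldap-or-auth-issue" ;;  # service up, auth rejected
    *) echo "artifactory-unavailable" ;;   # no usable answer at all
  esac
}

classify_login_check 200   # → ok
classify_login_check 401   # → ldap-or-auth-issue
classify_login_check 503   # → artifactory-unavailable
```

The value of the synthetic check is exactly this distinction: a 401/403 while the service answers points at the LDAP side, anything else at Artifactory itself.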
So if there are some that are dormant, we could totally do them now; we have the knowledge and expertise, and we should have bandwidth after the holidays. Meta ticket to remove the Jenkins wiki exporter: thanks also to Hervé and Gavin in that area, and everyone. We were able to disable, remove, and delete that old service from the cluster. It's now handled by a set of redirects on the static plugin site. Gavin did a lot of the heavy work there, and Hervé finished and cleaned up everything. Any question there? Nope. And finally, Datadog didn't detect when the accounts app was down; that has been fixed by Hervé as well. I assume it's related to the synthetic monitoring too? I don't remember that one exactly. Hervé? Yes, it's the same as Artifactory: the test is now actually trying to log in to the accounts app as well, and same for Artifactory, with a new technical user for the Datadog monitoring. Cool, right? Thanks a lot. Do you have any questions on these tasks, or can we proceed to the in-progress tasks? OK, I'm following the notes. Permission issue in plugin release with the CD workflow. Adrien Lecharpentier has raised an issue: he is not able to use the CD process to release his plugin. I'm not really sure where the issue lies. It sounds like the user and password are OK, but the push of the new plugin is denied. I've checked and mentioned on the issue that trusted.ci didn't have an issue: the RPU (Repository Permission Updater) is working as expected, and Adrien is part of the maintainers. So I'm not really sure how to diagnose further, because I personally don't have admin rights on Artifactory, so I'm not sure how any of this works. I've mentioned Tim and people from the security team as well, and we need help, because we don't really know the details of the CD process and we don't have the tools on Artifactory. For what it's worth, Artifactory is acting up today. OK. While we were trying: so, this is something I first discovered like two security advisories ago or something.
I can create a new private staging repository for security fixes in Artifactory, and for several hours afterwards, Artifactory isn't quite sure whether that repository exists or not. Which is a problem, especially if stuff is already staged: we expect it to remain staged, and suddenly Artifactory claims it has no knowledge of anything related to the staged content. So this might just be general Artifactory shenanigans that aren't even on our side. I just looked, and the RPU regular job, which runs every three hours I believe, is running; and the tokens that we attach to GitHub repositories for use by the CD GitHub Action are valid for, I believe, five hours. So unless the CD process takes several hours, which the failed one did not, there should be valid credentials. So I have no idea what's going on here. But as I said, we also experience other problems with Artifactory that are fairly difficult to pin down. OK. Thanks for confirming and giving us details. What do you think if we just keep waiting one or two days before checking again, or at least waiting the five hours for a new token to be issued? I mean, there is a new token: the issue was filed five hours ago, and there's a token on the repo that's two hours old. OK. Sure, we ask Adrien to retry now with the new token. Does that make sense? Yes. Or we can just do it ourselves. Ideally, Adrien reaches out to me when he's doing that. The problem we have with Artifactory is that I think we keep logs for two minutes or something; they have insanely aggressive log rotation. And so if you tell me 20 minutes after the fact that something went wrong, I know nothing about it. OK. So should we let Adrien synchronize with you then, in order to access the logs? That probably makes sense. I think you are one of the only people with access to these logs; that's why, otherwise, we would volunteer to help. But yeah. There are Wadeck, KK, Tyler and Olivier.
So yeah, it's basically me and maybe Wadeck. Well, that makes... do you think it could make sense if I request access once I'm back from holidays? Definitely. You only need to remember one thing: never delete an artifact, ever. Yeah, I will ask for a briefing when I request it. Because yeah, that's a good point: I need to be briefed on what to do and not to do. The reason I'm asking is to be able to monitor or observe the logs, or see if we have metrics about requests or sizes, or see what tools we have, without having to ask JFrog for that. OK, so let's tell Adrien to check with Daniel. Many thanks, Daniel. Next issue is the ci.jenkins.io rebuild permission for a contributor. It was opened by Joseph Petersen. I've redirected him to the mailing list, because that would involve giving him some kind of privileged permission on ci.jenkins.io. Given Joseph is helping the infra team, that would make sense from a pure infrastructure point of view; that's why I said that, as infrastructure, I don't see any issue with it. However, it could have side effects that I haven't thought about. That's why I would prefer moving it to the mailing list, so we have board members, co-contributors, and people with the knowledge to confirm or infirm, especially if there could be a solution to this build-rebuild problem that doesn't require this permission, but none that I'm aware of. The use case isn't quite clear to me, because the infra pipeline library should be such that all the pipelines would be trivial. No, I mean, even if he's updating the Jenkinsfile, how much can possibly go wrong there? You can't do anything. He can change the Jenkinsfile, can't he? Well, it's running in the sandbox, so it's not like it's on CI. Right, but the point is, usually the Jenkinsfiles are maintained by the plugin maintainers, and for Jenkinsfile changes to be effective,
the one who pushes the commits needs to have write access on the repository. Right, that's the security model: if you open a pull request, you need to be a committer on the repository for your changes to the pipeline to be effective. Otherwise, we ignore your changes to the pipeline. Yeah, that's a setting on the repository. Yeah, just. Right, users with admin or write permission, this one. Exactly. Right, so the use case isn't clear to me. Even if he wants to propose pull requests that contain pipeline changes, I guess he wants a passing PR build where the pipeline gets adjusted at the same time as the project settings, but he wouldn't even get that if he just uses Replay. So it's not clear what the use case is. And if he's a contributor to a specific component like the BOM, then he can just push his changes to a branch on the origin rather than a branch on the author's fork, for example. So the use case is not clear to me here. The way I understand it, you want to test the Jenkinsfile changes of a pull request, and in that case, either you have admin or write access on the repository, or, if you are an admin of the Jenkins controller, you can trigger builds where you will be seen as trusted. That's the only exception to the trust model there. No, he's specifically asking for the rebuild permission for pipelines; he doesn't even need admin permissions. So he can still file the pull request; it's just that the pull request build would ignore the Jenkinsfile changes and use the Jenkinsfile of the main branch. But once a maintainer merges the changes to the Jenkinsfile, the CI build of the default branch would pick up those changes. So he kind of wants fake CI builds that are manually modified. But what for, and how is that useful, is my question?
Because the thing is, you could break the builds: if the approver reads the Jenkinsfile but doesn't fully understand the consequences of the change and merges it, the green PR build will have used the old version of the Jenkinsfile, not the new one from the pull request. Once merged, it could break everything, and you would need to do subsequent pull requests. That's what I meant at the beginning: I would expect the pipeline library to be easy enough to use that you don't really need CI for changes to the pipeline, because there shouldn't be any required options that need CI, other than perhaps whether this plugin builds on Windows or not. I strongly disagree with that assertion. It's not trivial; it's code, it's Groovy. So for me it's not possible to have it easily working; it's not possible at all. The pull request triggering this issue is on the variant plugin, where the Jenkins version was in the Jenkinsfile, and that's the part of the Jenkinsfile he wanted to change. For the BOM? No, the variant plugin. So I filled in a bit by replaying the build with the Jenkinsfile change. OK. I feel like we don't have an answer right now; this should be discussed on the mailing list. Not that I don't want to have that discussion right now, it's really instructive, but the real need there isn't clear. I agree with Daniel. But I don't see an easy solution. For me, it's a blocker point of the trust model: the trust model is required and we need it, but we don't have an easy solution for that. And I agree with Daniel, I'm not sure the need is really that important. I don't know. But yeah, let's discuss it on the mailing list. So what, I assume it's that one: trying to change the Jenkinsfile, and you wanted to test that change. OK, let's discuss it, because it's not clear.
And even then: he's a maintainer of at least one Jenkins plugin, which means he can push whichever branch of the subject plugin to his own repo, at any other branch ref, and then just make his changes and validate them there. He doesn't need commit access to the specific plugin repo to test changes to the pipeline. I'm not sure I understand. So, Git is a distributed version control system: you can push branches of one repo to another repo. Yes. So, he's the maintainer of at least one Jenkins plugin; let's call it the Jettison plugin. Oh, that's tricky. So he can push the master branch of the variant plugin to a branch of the Jettison plugin and see what happens to the CI build there. And since he's a committer on that repository, it works. The only thing that will not work (hopefully; I really hope it will not work) is the incrementals deployment, because that would be really bad; I think we have a check that this doesn't happen, because it would be kind of a fake incremental. But other than that, he should get his CI result. Is this something that we could write down in the developer guides, or at least on the issue? You know, it's absolutely insane, but Joseph is not just some random contributor; he's working on fairly central components, and it could be a workaround to make his stuff work. Do you mind writing down what we said there? I think the issue should be fine, if we close it afterwards. Or do you feel like keeping that private? I can write something up, and maybe I misunderstood the use case anyway, so that's also another possibility. Cool. That's also good to know; thanks for sharing the trick with us, it could be useful for us from time to time. I object. I object if we ever use that. I'm delighted that a security person joined us and shared something that is really a novel technique: copying repository A into repository B sounds like a horrible thing to do to that repository.
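The trick Daniel describes can be sketched locally; the repository names below are the hypothetical ones from the discussion. Git does not care where a branch originated, so a committer on one repository can push any ref from another clone into it and let CI build it there.

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# "variant-plugin": the repo whose pipeline change we want to test,
# but where we have no push rights.
git init -q variant-plugin
git -C variant-plugin -c user.name=test -c user.email=test@example.com \
    commit -q --allow-empty -m "Jenkinsfile change under test"

# "jettison-plugin": a repo the contributor actually has push rights on.
git init -q --bare jettison-plugin.git

# Push the variant-plugin branch into the jettison-plugin repository,
# where a trusted CI build would then pick it up.
git -C variant-plugin push -q ../jettison-plugin.git HEAD:refs/heads/ci-test

git --git-dir=jettison-plugin.git log --oneline ci-test  # shows the test commit
```

As noted in the meeting, this must not produce incrementals deployments; it is only a way to get a CI result for pipeline changes one could not otherwise build as trusted.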
But yes, it is allowed. I mean, I'm thoroughly impressed; that's a brilliant technique. For me, it's also an indicator that if Jenkins pipelines were portable, we wouldn't have that issue, because anyone would be able to quickly spin up a Jenkins instance, run it to test the change, and then move on. That's not the reality of Jenkins pipelines, because you need so many plugins and so much configuration and stuff that it's not possible. So let's see it as a hack, but also as an indicator that we need to improve the software. Thanks, Daniel. Are there any questions on this one? OK, let's close this one. Next topic: Windows container-based Java 17 agents. I've added that to the milestone. Hervé is currently working on using Packer to not only build the virtual machine templates for the agents used on ci.jenkins.io and the other controllers, but now we want to start using the same image and same contents for containers as well. So whether you are running on a Kubernetes agent or a virtual machine on EC2, you would have the same tools, with the same versions, in the same locations, with the same behavior. Being able to provide a JDK 17 Windows agent is part of that area. The reason is that on the virtual machines, we already have the three JDKs installed and maintained; so instead of having to deal with the whole jenkinsci Docker Windows agent images (is it Windows Server Core or Nano Server, 2019 or whatever), which is a nightmare, let's focus on the infra only. So we have our own dependency, and with that work we should have one location to change the JDK version, and then it works for Windows and Linux the same way. Since it's part of that topic, I've mentioned that work with a link on the issue, and marked it as work in progress. I won't put it on the next two iterations, because Hervé will be gone, and Linux alone will already be a great step.
But we have to keep it in mind, because Basil and other contributors need a Windows JDK 17 agent. So it is really important for us to finish before the end of August, so they have the infrastructure to continue their work. Is that clear? Do you have any other questions? Put on hold. Next one is access to the npm namespace. Hervé, do you want to cover that one? If you don't have time, I will take over. Gavin wants to publish npm packages. It sounds like we need to contact Tom Fenley, at least for one of the npm namespaces for Jenkins. I have no idea about any of this work, but we need to take over at least one, because none of them are managed by us, as far as I understand. And I assume Gavin checked with the board. There has been someone using the "jenkins" npm namespace; there's a reference on Gavin's issue. Do you think, Mark, we could contact npm and say: hey, hello, we are the Jenkins infra, and we can prove it, by a DNS record or whatever, that we are Jenkins, and we would like to take this one over, because there is nothing published on that account? No objections from me. I'm just asking questions because I don't know any of this work. So, OK. Hervé, can you synchronize with Gavin on that area? Stéphane and I will take over next week if you didn't get any answer, but the goal will be to contact npm to check this. I don't think there should be an issue with Tom; we just have to get his current email, via LinkedIn or Twitter; that should be easy if we don't already have it on the mailing list. The "jenkins" namespace is squatted; "jenkins-cd", I have no idea, but I don't care; and "jenkinsio" we could claim as well. Sounds good. Don't forget to write a runbook once we have access to it. Hervé to follow up. Next one: infra.ci logs are mentioning an expired Datadog API key.
So the goal is to have the private controllers collect and send metrics to Datadog; infra.ci is a private system by default. The Datadog plugin could be configured to send metrics directly to Datadog, which could slow down the Jenkins instance. So instead, you configure the controller and the Datadog plugin for Jenkins to send metrics to the local Datadog agent in Kubernetes, which then takes the performance burden and forwards the metrics. It sounds like the configuration went well, but you mentioned earlier today, Hervé, that you saw weird behavior on the private dashboard with the metrics from that cluster, so you assume some diagnosis work needs to be done on that one. Is that correct? OK, so it's only for infra.ci; we haven't fixed that behavior on the other controllers. They will be next, but let's fix this one. The subtlety here is that infra.ci is running on a different Kubernetes cluster than release.ci, which means we had to first install Datadog; so we have to check that Datadog is working for the whole cluster before being able to be sure whether it's an issue with the controller itself. So, to be diagnosed. The goal for us is to have metrics for infra.ci before being able to move release.ci to that private cluster in the same manner. Next issue, unless there is a question about the metrics for infra.ci. Puppet upgrade campaign to the latest 6.x. I've covered it: I'm ready to update the Puppet master to the latest version. Agents are all fine, we are able to test with Vagrant, and we removed most of the security alerts on the public GitHub repo. Everything should be fine; I haven't seen any blocker. So now I'm holding on that one; I will finish it after my holidays. But it should be only a few minor version jumps. It's just that Puppet is a mess: the versions don't map at all. It's clearly worse than the Jenkins Docker images in terms of versioning scheme, and that's not a compliment to ourselves from me.
But thanks for many things, Stéphane and Hervé, for helping me in all these areas and on testing that; we are improving our Puppet knowledge there. Just a note: I've opened an issue, which is not listed there, about upgrading Puppet to 7.x. The upgrade to Puppet 7 is a blocker for us before being able to upgrade the virtual machines to Ubuntu 22.04, because Puppet 6 is not supported there. Same thing for using Puppet on ppc64le or s390x or whatever exotic CPU: Puppet 6 has weird behavior there. It's OK for Intel and ARM, but it's weird for the rest. We need to move to Puppet 7. Any question on this one? On hold for the holidays. Next: the download/latest directory is out of date. I'm so happy to have you there, Daniel, because I will need help in the upcoming weeks on this one. The latest directory is a symlink on the original virtual machine behind updates.jenkins.io, the machine with the packages and the update center. For both plugin and core updates, there is a part of the script that says: if this is the latest, then update the symlink of the latest directory, and Apache serves the content of the symlink, which is fine. However, for the mirrors, there is a process that runs every hour on the package virtual machine and takes quite an amount of resources. That process's goal is to synchronize the latest changes in plugins, core, and any binaries to the get.jenkins.io mirror service. get.jenkins.io has an Azure file share where all the files are put; it checks each file, gets its signature, and is then able to determine which mirror provides that file. The synchronization process uses something named blobxfer, a command-line tool by Azure; it's a kind of rsync, but for Azure file storage. And it sounds like we cannot upload symlinks. I'm not sure if it's the version of blobxfer, or missing support on the Azure file storage side, or maybe both, but the behavior is the following.
If you don't have the latest directory on the remote Azure file share and you have a symlink on your source, the first time it synchronizes, it dereferences the link and uploads a copy: you get a folder which is a copy of whatever latest pointed to. During the next synchronization, if you have updated the plugin and latest now points to the new release, it doesn't consider that an update on the remote side and doesn't update it, which means the remote latest directory is always out of date after the first release. So I'm not sure if anyone has the knowledge, or if we should try updating the synchronization script, which is quite an amount of work, because we would need to add a step that crawls all the latest symlinks, resolves them, and force-uploads them. Right now, the synchronization says: don't overwrite something that you already sent. Or we delete them first on the... Absolutely. Absolutely, that's a good idea. It's just a scenario that I don't know well enough; I'm a bit scared to do that. I'm not sure if it was tackled by Olivier or you, Daniel, in the past, or if you didn't know. I haven't been involved in the blobxfer for... Mm-hmm. Yes. I'm hesitant to change that; I'd rather accept that this latest directory is out of date than risk deletion. So... But Damien, if it's okay, let's take it up later. I think dropping it from this planned work is fine with me, because it's just not that crucial an issue. I agree on this one. So, what we did a while back: I remember the download page on jenkins.io used to refer to the latest link. The problem with that is that the page, as rendered, explicitly stated what the last version was. And so, if the site hadn't been regenerated but the latest link was updated to a new release that just came out, there was a discrepancy between what jenkins.io/download said and what you were actually downloading.
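The dereferencing behavior Damien describes can be illustrated locally with a hedged sketch; the directory names are made up, and blobxfer itself isn't involved here, only the combination of "dereference symlinks on first upload" with "never overwrite what was already sent".

```shell
src=$(mktemp -d); dst=$(mktemp -d)

# First release: latest is a symlink to the 2.361 directory.
mkdir "$src/2.361" && echo "war-2.361" > "$src/2.361/jenkins.war"
ln -s 2.361 "$src/latest"

# First sync: the uploader dereferences the symlink into a real copy.
cp -rL "$src/latest" "$dst/latest"

# Next release: the source symlink moves on; the remote copy does not.
mkdir "$src/2.362" && echo "war-2.362" > "$src/2.362/jenkins.war"
ln -sfn 2.362 "$src/latest"

# A "don't overwrite what was already sent" sync skips dst/latest,
# so the remote side still serves the previous release:
cat "$dst/latest/jenkins.war"   # → war-2.361 (stale)
```

The two fixes discussed in the meeting map directly onto this: either force-upload the resolved symlink targets every run, or delete the stale remote copy first.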
So, what we did was make the URL this links to a specific version, as you can see at the bottom there. And this was especially important once we added the SHA-256 checksum, because people in that case wouldn't just get a newer version than they were expecting; the checksum also obviously wouldn't match. So, I'm not sure whether we in the project still have a use case for the latest URL. Perhaps access logs and referral URLs could be used to identify whether we still link there. Let's identify whether this is actually important to have, and if it isn't, we just drop it. If it exists, it should work; so the options are: make it work, or make it not exist. Absolutely. I've tried an experimental process: it only checks the latest directories and uploads them with overwrite. But it takes some time, so it's additional time before having everything in sync. That's why I really love the idea of asking whether we really need latest. The only point here is that I'm not sure how we could ask the mirrors to remove the directory, because not all mirrors do destructive synchronization. They could, with rsync, but not all of them do. So we might need a one-time garbage-collecting task, contacting all of these mirror administrators to say: can you remove the latest directories once? Fun fact: we've never cleaned up anything. There is some ancient crap on the mirrors, and we haven't cared in many years, and I wouldn't start now. If people access the mirrors directly, they've already taken a wrong turn somewhere. Yeah, fair; that's absolutely a fair point, and we could absolutely exclude the latest link from get.jenkins.io. Once we stop having the latest directory there, it won't be served to the mirrors, which means, as you just said, people are really wrong if they are using a direct link to a specific mirror. Cool, so let's put this one on hold for some time. Thanks for the details.
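Daniel's checksum argument for pinned URLs can be reproduced in a few lines; the file contents are invented stand-ins. A checksum published against a specific version fails the moment a "latest" pointer silently starts serving a newer artifact.

```shell
work=$(mktemp -d) && cd "$work"

# The download page publishes an artifact plus its SHA-256 checksum.
echo "jenkins-2.361" > jenkins.war
sha256sum jenkins.war > jenkins.war.sha256
sha256sum -c jenkins.war.sha256            # prints "jenkins.war: OK"

# "latest" silently moves to a newer release; the page wasn't regenerated.
echo "jenkins-2.362" > jenkins.war
sha256sum -c jenkins.war.sha256 || echo "checksum mismatch"
```

This is why the download page now links to a pinned version URL: the checksum next to the link can only ever describe one exact file.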
Next one: enable the development integration in Jira. Postponing to September; that's too much. I don't have admin access to the jenkinsci organization, so that means back and forth with people who have these permissions; it requires an OAuth token, which is a whole permission issue, and it doesn't support GitHub Apps. So I propose that we do it once we have everyone on board. I'm a bit worried by this one. Maybe it could be just jenkinsci, but that clearly requires more discussion. The request is clearly legit and would help developers and contributors a lot; James clearly stated that, and it makes sense. But right now I don't see any proactive action we could take, because it requires administration rights on both Jira and the jenkinsci GitHub organization, unless I missed something. So, we assume that Tim and Daniel have these rights, maybe Mark. I don't have... I only have Jira, for myself. So, if it's okay for everyone, I will add a message on the issue saying we postpone to September, because it's too much work that is hard to understand and anticipate. Might be easy, might not be, but I don't want to put so much disruption on our work in the middle of August. Right. Another point here is that we have like one and a half years left or something with Jira. What's the URL? Yeah. So, Daniel, I think you're assuming knowledge from others that they may not have: Atlassian has announced the end of self-hosted Jira instances; everything Jira does will be cloud. And so we foresee the end of the Linux Foundation's hosting of our Jira instance, unless Atlassian revises their stance. Did I state it correctly, Daniel? Right. And I don't think that Atlassian will change their opinion. They're working on making cloud Jira work with more user accounts, which is the major problem. There was a short discussion on the dev list recently where this came up again, and I think Atlassian are up to 50,000 supported users, with plans to go up to 100,000 or so. And I think we have 130,000 users in our user directory.
Of course, the vast majority of those are unused or haven't been used in forever. So, with some cleanup, we might be able to migrate there — assuming Atlassian sponsors it, because otherwise it would be a little expensive. But with the end of life announced, I think, for early 2024, I would be hesitant to spend a lot of time on changes to our hosted Jira, especially changes that would potentially make it more difficult to migrate off afterwards because we've adapted workflows and such. So if this is easy to do, sure; if it requires work from us, maybe not. Right. Knowing that Jira is destined for end of life, we shouldn't invest in anything heavyweight in our Jira instance. OK, thanks for the clarification. I was under the impression they would keep a self-hosted option; so yes, they are definitely going full cloud. Oh, crap. That's not a fun one. OK, so no action for us right now — that will be a wider, community-level discussion then, I assume. Thanks. Next issue is the s390x agent. Mark, I assume you didn't spend much time on this one. I have not done anything on it. I'm still using it from my test instance, but I haven't put it back on ci.jenkins.io yet. OK, before removing it from the milestone, can I ask you to share the credential to access it, using the encrypted SOPS directory, with a technical user that can sudo? The goal, as I said two weeks ago, is for us to try compiling the Ruby gems for Puppet, so we can see if we can get a running Puppet agent on that machine. That would solve a lot of issues, because Puppet already manages a few agents, so it would automatically install everything needed. Let me see if I can work with someone and find a way to share those credentials. Yeah, thanks. I think they're already written down somewhere; otherwise I'll need to check. The big one: migrating updates.jenkins.io — the update center — to another cloud for bandwidth reasons, because that costs a lot.
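Sharing a credential through a SOPS-encrypted directory, as requested above, typically relies on a `.sops.yaml` rule file at the repository root. A minimal hypothetical sketch — the path and key fingerprints are placeholders, not the team's actual configuration:

```yaml
# Hypothetical .sops.yaml creation rule: any file under secrets/ is
# encrypted for every listed team member's PGP key, so a credential
# dropped there can be committed and shared through git safely.
creation_rules:
  - path_regex: secrets/.*
    pgp: "FINGERPRINT_OF_MEMBER_A,FINGERPRINT_OF_MEMBER_B"  # placeholder fingerprints
```

With such a rule in place, `sops --encrypt --in-place secrets/s390x-agent.yaml` would encrypt the file so that any listed key holder can recover it with `sops --decrypt`.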
So right now, we are working on the Puppet configuration. Stefan did the heavy lifting. We have reached a point where we are able to deploy and configure Apache and prepare everything. The next step for Stefan, before going on holidays, will be to update all the scripts that we use for the releases — for generating the update center — so output is uploaded to the new virtual machine as well, not only to the current one. And then we'll put it on hold over the holidays, to come back to it with full focus, because that's a sensitive topic. But that's almost $4K per month of bandwidth that would go down to a few hundred, so it's clearly worth working on. That's the status — did I forget something, Stefan? No. So that one stays on the next milestone, and then with Stefan's holidays it will move to the backlog. Any questions on this one? Nope. New important elements — I've got some. A topic was started earlier today, I think, or yesterday, about a cleanup proposal for our communication channels. There might be side conversations, but I want to restrict this discussion to the Jenkins infra IRC channel only — the channel we actually use to discuss, not the ones carrying notifications connected to the bot. The proposal is to consolidate on either Discourse or Gitter. Are there any opinions — strong yes, strong no — considering all the problems or improvements this could bring to our communication channels? What are your thoughts, folks? Subjectively, both Discourse and GitHub have a higher barrier to posting stupid questions that end up being worthwhile to ask than something that's actually a chat-like interface. Maybe that's just me, but I probably wouldn't open a new topic just to ask a dumbass question — or what I suspect might be a dumbass question — in either of those, or raise an issue on GitHub. That's why I would prefer an actual chat to remain; that would be my feedback here.
You don't consider Gitter on Matrix a chat in this sense, if I understand correctly? Yeah — sorry, I understood GitHub as in GitHub issues, not Gitter. Okay, then no problem there. Okay, yeah, the goal is... So I have mixed feelings, but I think I agree with Daniel on this one: we need two communication channels. Right now we have real-time communication through chat on IRC, and for me, moving from IRC to Gitter on Matrix would be a huge improvement. Over the past eight years I've met so many people who refused to ask a question on the Jenkins infrastructure project because they couldn't figure out IRC — no message history, no idea how it works. It took me three years myself, and I was never the best expert at it; honestly, IRC has been a pain for the last decade. The arguments about freedom of use and the ability to self-host were important, fundamental even, but since the Freenode takeover last year — or two years ago now — and the mess that forced us to move to Libera Chat, unless we host it ourselves I don't really see the benefit anymore. So for me, Gitter is... it's commercial, it's operated by someone else, so we're always at risk of someone pulling the plug — that risk exists. But at least it's really easy to use: the barrier before being able to post a message is close to zero, and you have several ways to authenticate to Gitter in one click. So I have no worries about moving from IRC to Gitter, if that's what the community wants. But I agree with Daniel: I don't see the point of a Discourse setup — the forum, I don't know its name — right now, for the reasons Daniel gave.
But maybe there are use cases, I don't know. Maybe for announcements; maybe if we're able to automate the jenkins.io status — automatically posting when there are outages — that would be something. Yes, the proposal is just to move from IRC to Gitter for these first... Absolutely. But it was worth discussing; maybe we'll discover a new use case where we'd benefit from Discourse, or not. Is it worth the time investment to take care of it? Are there any replies on the forum thread? It's the first one in the thread; I haven't seen anything yet. Oh, Bruno gave us a thumbs up — thanks, Bruno. Yes, I think that answers the question. So, first point: is it OK for everyone to move from IRC to Gitter? Is there any opinion against doing it? No, it's OK. OK, so I will communicate it there, as I said — I'll open the discussion and gather answers. We could also ask in the IRC channel. Yes, good point, because maybe some people are not on that Discourse thread. There is one thing missing from the cleanup proposal: the public jenkins-infra mailing list. What do you think about it — is it something we should keep or not? For me, no: I have no use for it, and I think most of what I've seen there is spam. I'm talking about jenkins-infra on Google Groups. Right, right. I'll just check what the spam volume is. Anyone else want to weigh in? I'll go ahead: I haven't found the list particularly useful for me, but it's low volume. What about you — Daniel, Hervé, Stefan?
I don't know if it's low volume, but it's the same for me: I'd rather have fewer channels — less disturbance, and less of the same question asked in several places. I'm more in favor of using whatever matches people's actual usage: the more people use a tool, the better; if that's the tool, we should use that tool. OK. What do you think, Daniel, on this one? Sorry, what did you say? No problem. What do you think about jenkins-infra on Google Groups, the public list? Does it serve a purpose? Should we maintain it, replace it with something else, or just stop using it? No opinion — I don't think I've used it enough to really have one. OK. I have exactly the same feeling as Daniel, and I'm the one reading the jenkins-infra inbox. I just wanted to say I have no use for it either. And sometimes I feel bad, because I'm under the impression that some people use it — though it seems the most active ones are the six of us here. Is it OK if we ask this question on the mailing list itself — should we keep the mailing list — and wait for replies? And if there are no replies, we can just stop using it and make it read-only, keeping the archive. That means we keep the private team list, Gitter for real-time communication, and eventually we can think about Discourse at some point if there's a need — asynchronous communication already happens on GitHub issues, like here. Does that sound good to everyone? Yes, I'd agree with that.
New issues: there's a set of new issues, and if it's OK with everyone I won't go through them, since you already have enough work in flight. The only one to keep an eye on is the JDK update, for JDK 11 and 17. It landed in the past few days and is already available on the virtual machines and templates; it was rolled out automatically, meaning Windows, Linux, Intel, and ARM. It's not yet available in the ci.jenkins.io tools, because the automatic update script doesn't only check those four platforms but also the exotic PowerPC and s390x ones. If it's OK with everyone, we'll try to split the update process between those two exotic platforms and the others. They might eventually get the new JDK version later, but they aren't connected to the controllers and aren't used. The priority is to deliver the latest JDK as soon as possible for the normal builds. Does that sound good to everyone? And related to this, the Remoting issue — that's the last element. We need to start working on updating Remoting in the official Jenkins images, on controllers and on agents — on agents because we use that component for the builds. That's why we're on an old version: we pin an old Remoting version. This was accelerated by what we found while checking: with the change from semantic versioning to the new long version format that we use for plugins, the Remoting component now follows that scheme too, and our update process is configured to monitor semantic versions. So it isn't able to pick up a new version anymore; we need to move to the latest version.
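The Remoting problem above can be illustrated with a quick sketch: a monitor that only recognizes semantic versions (`MAJOR.MINOR.PATCH`) will never match a Remoting release once it switches to the commit-count-plus-changelist scheme Jenkins plugins use. The regexes and sample version strings below are illustrative assumptions, not the actual update-script code:

```python
import re

# Semantic version: MAJOR.MINOR.PATCH, e.g. the old Remoting "4.13.3".
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")

# New Jenkins versioning scheme used by plugins and now by Remoting:
# a commit count plus a "v<hash>" changelist, e.g. "3071.v7e9b_0dc08466".
NEW_SCHEME = re.compile(r"^\d+\.v[0-9a-f_]+$")

for version in ("4.13.3", "3071.v7e9b_0dc08466"):
    # A semver-only monitor silently skips every new-scheme release.
    print(version, bool(SEMVER.match(version)), bool(NEW_SCHEME.match(version)))
```

Since the two formats are mutually exclusive, an update process watching only the first pattern simply stops seeing releases after the switch, which is why the pinned version goes stale.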
That should also allow us to update the corresponding packages for ci.jenkins.io. So the goal is to make sure we have the JDK 11 update ready for next week's weekly release, Stefan, since you'll be the only one around — we need to be sure. We'll try to mention you on these updates; you just have to check them, the team can act as your backup, and you'll be pinged on GitHub as well. The reason this matters for the official Jenkins image is that JDK 11 recently received a backport of cgroup v2 support. Because today, if you run the official Jenkins image on an Ubuntu 22.04 machine, or on other Docker Desktop setups that use cgroup v2 to limit container resources, you can specify a memory limit for the container and Java will simply ignore it and look at the host limits instead. Which puts us back five years, when Java wasn't able to cope with container runtimes: you set a memory limit, and your Jenkins controller gets OOM-killed. We don't see that many real production deployments running cgroup v2 yet, but people are starting to complain. And personally, I'd like to spare system administrators from dealing with this kind of problem — "oh, the Java version embedded in the Jenkins image isn't able to read cgroup v2 limits". So we need to upgrade to avoid that class of support issues, or at least be aware that it could happen. The Jenkins infrastructure itself is not affected by this problem, because all our machines still run Ubuntu 18.04, which is still under LTS support and uses cgroup v1, so our current JDK 11 is fine. And if you use JDK 17, it has supported cgroup v2 for months, so you don't have to worry.
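The cgroup v1 vs v2 difference above comes down to where the kernel exposes the memory limit: a container-aware runtime must read different files depending on the hierarchy, and a JDK without the v2 backport only knows the v1 path. A minimal sketch of the detection logic, using the standard Linux mount points — this illustrates the mechanism, it is not the JVM's actual code:

```python
from pathlib import Path
from typing import Optional

def container_memory_limit() -> Optional[int]:
    """Return the cgroup memory limit in bytes, or None if unlimited/unknown."""
    # cgroup v2 (unified hierarchy, e.g. Ubuntu 22.04): a single "memory.max"
    # file containing either a byte count or the literal string "max".
    v2 = Path("/sys/fs/cgroup/memory.max")
    # cgroup v1 (e.g. Ubuntu 18.04): per-controller hierarchy with a numeric
    # "memory.limit_in_bytes" (a huge number when no limit is set).
    v1 = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")
    for path in (v2, v1):
        try:
            raw = path.read_text().strip()
        except OSError:
            continue  # file absent in this hierarchy: try the other one
        if raw == "max":
            return None  # cgroup v2's way of saying "no limit"
        return int(raw)
    return None  # no cgroup memory controller found

print(container_memory_limit())
```

A runtime that only checks the v1 path finds nothing on a v2-only host and falls back to the host's total RAM — which is exactly how a container with a 512 MiB limit ends up with a multi-gigabyte default heap and gets OOM-killed.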
That was just one point, for awareness. Any questions, anything unclear on this topic? No. Low risk, low threat. Yes. You're welcome. OK. Are there any other topics we may have forgotten, or that you want to mention? No. Then just a warm thank-you to everyone — I'm very happy to work with all of you, and really happy with the interactions and everything I've learned; I keep learning. So now I'm off on holidays, and I hope you all have a good couple of weeks — see you in one or two weeks, depending on who I'm talking to. I'm stopping the recording now. See you soon.