Hello everyone, welcome to the Jenkins Weekly Infrastructure Team Meeting. Today is the 20th of December, so this is the last meeting of the year. Around the table we have myself, Damien Duportal, Hervé Le Meur, Mark Waite and Kevin Martins. Let's get started with the announcements. So the new weekly, 2.383, has been released: changelog approved, checklist in progress. Thanks Mark for confirming. We haven't deployed it yet on our infrastructure, nor checked the Docker image; we'll do that after the meeting. But that should be good, I don't think the changelog was huge on this one, so yeah, it should be okay. So, reminder: no team meeting next week. No team meeting next week — let's see each other next year. That's the mandatory joke for the two weeks before the end of the year, every year. Do you have other announcements, folks? Oh yes, yes, there's a webinar today to launch the Jenkins project in Google Summer of Code. Oh, nice one. And calling it a webinar is a bit of a stretch; it's actually a Zoom meeting, but webinar is as good a way to describe it as any. We'll actually just be using a Zoom meeting. We're being a little bold this time and hoping that we don't get Zoom-bombed. Fingers crossed, then. Nice one, so good luck with the webinar. There will be a lot of talented people there, I'm sure. Do you have other announcements? Just a note about the first week of January: I think everybody will be off as well. So, Stefan, Kevin, I'm not sure if you will be off that week. Mark will be there. I'll be there. Yeah. And Kevin, I think you're planning to be in the first week of January. Yeah, I'll be here; I'm only off next week. So, there is one more announcement. The Git plugin and the Git client plugin are preparing for a major version change, and there's been a request by the Git plugin maintainer — me — to have people help test it. So, versions 4 and 5, a pull request preparing that, and various threads on community.jenkins.io.
Yeah, and I'm trying to do the best job I can to be sure that it's reliable, compatible, etc. But it's much better if we have more people saying, yeah, it looks okay to me too. We can start using it on infra.ci. What do you think? I'm not sure I'd use it — actually, infra.ci would be fine. I'm not sure I'm ready to go with anything like ci.jenkins.io yet, because we don't like to put pre-releases on ci.jenkins.io. But infra, yeah. We did it numerous times for the Pipeline plugins, announced, using incrementals on ci.jenkins.io. The goal was to reproduce issues that only showed up under high load or in specific cases. Good. I mean, that would make sense. The post on community.jenkins.io already includes the entries for plugins.txt for the incrementals, so all you have to do is cut and paste. So, that's really great then. If you're willing to use it there, I can then watch to see if there are any surprises. There is no problem if you do it during the two coming weeks. If it's announced on the mailing list and you say, for the next four hours we are going to deploy the Git plugin, and then we roll back to stable — there is no problem. You can do a brown-out, is that the wording? Yes, yes. Okay, good. So, that would be a reasonable place for me to submit the pull request to the plugins.txt file for infra.ci. And then I can watch it once it's merged, see how it goes, et cetera. Okay, thanks. I'm not sure it'll be today — today's kind of busy, but... Docker, Jenkins, weekly, plugins.txt. Okay. Okay. That's all for the announcements. So, next week, even if there is no team meeting, the next LTS will be on the 11th of January. I'm not aware of any security release event. And yeah, the next major event is Devoxx, and we have FOSDEM before. And the webinar, as we said. Okay. Anything on the calendar announcements? No. Okay, so let's proceed. What did we do last week, what is closed? First of all, Mark, thanks for the fix about the Javadoc.
I confirm it works as well on ci.jenkins.io and on trusted.ci where it's deployed. So, Mark pushed a fix on the Javadoc builder to handle the case that we had with the devops plugin, which had been released for the first time. We don't know why, but it generated an empty jar file for the Javadoc — I assume everything was done manually. The problem is that it was breaking the Javadoc generation. So now there is a catch in the code of the Javadoc generator to handle empty zip files, and we are able to update the Javadoc again. Good news. It was with the latest baseline, same as the previous issues. So thanks, Mark. So Damien, I actually had a question about that one, if you're okay with me asking before you go on. One of the things that job does is it creates a 600 megabyte archive and saves it with archiveArtifacts. I believe that means it writes it to the controller, and over half the total execution time of the job is spent creating that archive — the tar process — because it's using bzip2 to compress the thing. So, is there any compelling reason for us to save that artifact? I'm prone to say, just delete the archive step completely and accept that if somebody needs it, we'll put it back; or maybe parameterize it with the default parameter being off. Well, if you look at the next step down, that's where it gets shipped off to the blob storage, where it's actually served. Okay, and that for me is a very reasonable thing to do — shipping it to blob storage is perfectly sensible. On ci.jenkins.io, we don't want to do that because that requires credentials, right, and correctly it should require credentials. I was thinking I could Netlify it, or we could do something else, but it felt like spending half the time of the job creating a compressed file that no one reads is kind of wasteful.
Any guidance there? And it's okay if you just say, no Mark, stop worrying about trivial things. Is it the sh step or the archiveArtifacts step that takes so much time? It's the tar step, so the sh step. Okay. Proposal: we recently worked on the steps generator, which also generates a website based on the source. And with Stefan and Adrien, we discussed that the archive part should be kept only on the principal branch on ci.jenkins.io. That would mean a check on whether the infra is trusted. If the infra is not trusted — meaning it's run on ci.jenkins.io, or by whoever wants to build it — and the branch is the principal branch, then we keep that step only on master but not on pull requests. So pull requests give you quick feedback, but having that on the main branch lets a contributor see the resulting artifact file. What's the blobxfer — the infra-is-trusted blobxfer step — what is it uploading? Good point. Good point. As far as I know, it's relying on blobxfer's natural ability to do rsync-style publishing: only publish things that have changed. And rsync-style publishing of only what's changed means we only get updates on pages that actually got altered. So there's a good point there: if the deployment to production only does a kind of rsync — blobxfer diffing what's the same between two sources — then it means we don't need to first tar with bzip2 and then zip a second time to send it to the controller. We could just remove that part and change archiveArtifacts to only archive the generated site. Ah, right. Okay. And that then would allow archiveArtifacts to do the — well, but won't that then create a lot of small files in the artifacts? It's served as a zip file, so you don't have to worry. Okay, okay. So we list artifacts and it will package them for us. Okay, good. I'll work on this separately then, thanks. Sorry for the side trip.
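The rsync-style publish just discussed — upload only what changed, by comparing content hashes between the local build and what is already deployed — can be sketched in a few lines. This is a hypothetical illustration of the principle, not the actual blobxfer implementation; the function names and the remote-index format are invented for the example:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """MD5 of a file's content, the kind of hash a sync tool compares."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def changed_files(local_dir: Path, remote_index: dict[str, str]) -> list[Path]:
    """Return local files whose hash differs from (or is absent in) the remote index.

    remote_index maps relative path -> content hash, as a blob listing might."""
    out = []
    for path in sorted(local_dir.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(local_dir))
            if remote_index.get(rel) != file_digest(path):
                out.append(path)
    return out
```

Only the paths returned by `changed_files` would need uploading, which is why the intermediate tar+bzip2 archive adds nothing to the deployment itself.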
Thank you very much for the guidance. I like that. The first point was, it's okay if we do the archive only on the primary branch. And second, archiveArtifacts doesn't really need us to construct the compressed zip file first; we could let archiveArtifacts do all that work. Absolutely — I'm doing double work there. There is also a good opportunity here to switch to declarative, given how simple that pipeline is. I don't see the value of scripted, which means you could absolutely check what we did for the steps generator, and you can have almost the same pipeline. Great. Thank you. Good pointer. So the steps generator has done the transition to declarative, and you've applied those things already — I can use that. Thanks very much. So Hervé, is that what you were asking, or did I miss it? Next subject. Thanks to Tim and Alex, both of them, for working on updating a lot of the Ruby dependencies used for the jenkins.io website, which includes some repositories such as the asciidoctor-jenkins-extensions. That required Renovate, which is a kind of alternative between Dependabot and updatecli — it does things in between. So it was easier for them to use Renovate, and they asked the infra team to install it on the GitHub organization, which we did, but scoped only to the repositories that require Renovate. So now jenkins.io also has Renovate. I saw Gavin commenting on one of these pull requests, explaining why he wanted to get the latest release from npm for his node module — for his web components for jenkins.io. Oh, yeah. That's one point in favor of Renovate, I think. Something to keep in mind. That's a good point. So the thing is, Dependabot is clearly the most efficient, but its scope is really narrow. Renovate is able to track way more dependencies, and it provides nice pull requests for way more of them.
Renovate is the most useful, I think, of the three. And updatecli requires more maintenance, but it's way more powerful; you can do more complicated things. So it depends, but most of the time, going for Renovate will help a lot. I'm mentioning that because, Mark, I saw that you did a pull request today about updating the Blue Ocean and Maven versions in some tutorials on jenkins.io. It would be worth checking whether Renovate or updatecli couldn't help you on that part, by opening pull requests automatically when there is a new Maven or Blue Ocean version. Yeah, go ahead. Excuse me — I was wondering if it's possible to have a global variable for a specific version, available for every document in jenkins.io. So technically, yes; in practice, in real life, don't even think about it, because of the number of tools involved in generating jenkins.io. If it was pure Asciidoctor or pure Hugo, that would be doable. Even the tooling under the hood — more than 50% of it has changed during the past six years. Trust me. Okay, that's a good thing to know. If you find a way to do it, maybe some of the past constraints are gone and I'm not aware of it. But it's not easy. So if you want to try it, please do, but be aware that you might end up on something like, oh my, why so much? Yeah, okay, because it was the same kind of thing you suggested. Yes. So what I'm suggesting is that, most of the time, you have a kind of scope per page or per tutorial. But since Asciidoctor is able to have variables, if you are able to at least move the variables to the top of the Asciidoctor source page — that might already be the case, I'm not sure — then we could use Renovate or updatecli. updatecli, I'm sure, will be able to: using a search-and-replace pattern, it will work; I use it for Asciidoctor itself. I don't know about Renovate — worth checking, though.
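The search-and-replace pattern just mentioned can be illustrated with a small sketch: assuming the version lives in an Asciidoctor document attribute at the top of the page, a bump is a one-line substitution that a tool like updatecli (or a script) could perform. The attribute name and page content here are made up for the example:

```python
import re

def bump_attribute(adoc_text: str, attribute: str, new_value: str) -> str:
    """Replace the value of an Asciidoctor attribute entry like ':maven-version: 3.8.6'."""
    pattern = re.compile(rf"^(:{re.escape(attribute)}:\s*).*$", flags=re.MULTILINE)
    return pattern.sub(rf"\g<1>{new_value}", adoc_text)

page = """= Build tutorial
:maven-version: 3.8.6

Install Maven {maven-version} before starting.
"""

# Only the attribute entry changes; the {maven-version} references pick it up.
print(bump_attribute(page, "maven-version", "3.8.7"))
```

The point of keeping the version in one attribute per page is exactly this: the automation only has to touch one well-known line, and Asciidoctor substitutes it everywhere else.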
Actually, I believe there's even an issue raised in the jenkins.io issue list saying that we need to automate more things there; those three are examples, and there are five or six others. So if you're sold on this issue — if you find it again, can you copy me? But I just want to say it's not an infrastructure topic. Sorry, I raised it, I moved the scope, but yeah. Next topic. So the maintenance migration of repo.jenkins.io was successful last Sunday. We didn't have any issues, so that was perfect, nothing to do. It looks like, based on what you saw and what Daniel saw, that it's clearly faster than before — like at least twice; the build times when downloading artifacts are clearly better. So we might use way more bandwidth per month now thanks to that. That would be an annoying consequence, but no, it's still a good thing. Yes, and now we are back to decreasing bandwidth. We will have a meeting with JFrog later today, Mark. Just to be sure — I assume it will just be to summarize before the end of the year. I don't have anything else about that, unless anyone has questions or points to underline about that migration. So the next step for us will be to ask JFrog about the support account and see if we have access to the portal; I'm sure they will give us directions later today. Forgot username/password, account not found: as usual, someone opened an issue for an account that doesn't exist and then never answered. So, closed as not done, even though we asked questions. CD failure on the SCM API plugin — I assume RPU. Temporary. There were a lot of recurring GitHub errors last week, like the commit reference missing. Yes, that's true. So issues on GitHub meant that the CD job, in that case, had to wait for five hours. But the RPU — the Repository Permission Updater — should run every three hours successfully, to update a token that is only valid for four.
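The timing constraint described here — a refresh job on a three-hour cadence issuing tokens with a four-hour lifetime — can be checked with simple arithmetic. This is just an illustration of the failure mode, with invented helper names and values:

```python
from datetime import datetime, timedelta

TOKEN_TTL = timedelta(hours=4)       # the token is only valid this long
REFRESH_PERIOD = timedelta(hours=3)  # how often the refresh job normally runs

def token_is_valid(issued_at: datetime, now: datetime) -> bool:
    """A token is accepted only while it is younger than its TTL."""
    return now - issued_at < TOKEN_TTL

issued = datetime(2022, 12, 20, 8, 0)
# Normal cadence: checked 3 hours after issue, still inside the 4-hour TTL.
assert token_is_valid(issued, issued + REFRESH_PERIOD)
# A 5-hour delay (e.g. waiting on GitHub outages) blows past the TTL.
assert not token_is_valid(issued, issued + timedelta(hours=5))
```

So the three-hour cadence leaves only a one-hour margin: any delay of more than an hour in the refresh job produces an expired token.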
So five hours means that if you have a token that is five hours old, it will be refused by Artifactory — nothing we could fix on our side. So thanks to Daniel for helping on that point. And finally, jenkins-io-components. I assume it's Gavin? Yes. Gavin created a new repository for the web components used on jenkins.io, published to npm. And this request was for localizing the jenkins.io components — not for their creation, but for the localization, for the inclusion in Crowdin. Look at the type of the issue. They could look at the type of issue. I'm sorry, that makes no sense to me. The Crowdin label? Because it's a Crowdin tool. It's for the integration into Crowdin; it has nothing to do with us. He created a blank issue — he didn't use any template — but it was for integrating the jenkins.io components into Crowdin. Okay. Okay. So nothing for us; they were just using our helpdesk. Perfect, thanks for the explanation. Work in progress now. A word about EKS. I had to recreate one of the EKS clusters yesterday and spent my whole day doing that, because, without further warning, AWS changed some default parameters of EKS clusters. We suffered from that while trying to upgrade the major version of the Terraform module, which broke things. The default is that clusters are now private, so you have to explicitly change the default — that's a change on both the module and EKS. And there were a bunch of problems, so I had to recreate it from scratch. Everything is redeployed and integrated. The thing I still have to do is the certificate on repo.aws.jenkins.io, the local artifact caching proxy — that's the only application on that cluster. That application is now up and running, but without a valid certificate; I have to check on that one. The more we try to use AWS EKS for something other than ephemeral clusters, the more we have issues and waste time.
So maybe, at some point, it might be worth using a virtual machine instead. But yeah, we'll see. That's why there was a suggestion at the beginning: we chose to use Kubernetes because we want the caching proxy to be the same on every cloud where we have a Kubernetes cluster. But it might be worth having a virtual machine instead. I've opened an issue, and I'm going to update it with all the pull requests, commits and changes I did, so it will be auditable afterwards. I didn't have time to do it in real time yesterday, so that's work for today and tomorrow for me. Consequence on ci.jenkins.io, which was only working with DigitalOcean: it had a huge build queue this morning due to that, but now it's back to normal. A new issue — that one I'll keep for the next iteration. Yes, a new issue. I haven't looked at it, just triaged it. Yes. I didn't have time to ask this person if they tried to log in to their account. We'll have to look at it in the next iteration. Yes, don't worry about the resolution during the meeting. Because that might be an issue — that might be RPU, that might be an issue on the user side. So, just to mention that it will be moved to the next iteration. Artifact caching proxy failing on DigitalOcean and AWS — and on AWS now for sure. No, it's on the three providers; it's also on Azure. I didn't have time to collect details, I've just disabled it. Disabled? I've set the available-providers global setting to none, so the buildPlugin step uses repo.jenkins-ci.org by default every time, while I look into it. Okay. My proposal is to keep the ACP disabled for the coming weeks, because the priority for us is to fix the Azure version, and we need to finish the work related to the networks and the cluster migrations. So, is there anyone against keeping it disabled for now, and moving that issue back to the backlog until we have finished the network and cluster migration work? No disagreement from me. Let's keep it disabled globally for now. Okay.
And put it in the next iteration. Yep. Add Windows Server 2022: Stefan was able to configure ci.jenkins.io to have our first agent template for spinning up virtual machines with Windows Server 2022. The goal is to allow building a new Docker image variant for that Windows Server version, because, as a reminder, you cannot run Docker images based on Windows Server 2022 on a 2019 host, but you can do it the other way around. So we need to maintain both at the same time until we no longer need 2019. That issue is not closed, but it will move to the backlog as soon as we are able to answer the contributor on the jenkinsci Docker images repository. A pull request has been opened; I need to try one more time that that pull request builds with the new label — it's simply windows-2022, an explicit label. If it works, then we can tell the user they can continue working on their pull request, and that issue will go back to the backlog. The main reason is that, right now, we weren't able to find a way to install the official Windows OpenSSH on Windows Server 2022 on AWS, which means that provisioning keeps failing there. And Windows Server, both versions, on AWS now takes an hour and a half to update itself — that's a nightmare. So right now 2019 is on Azure and AWS for us, and 2022 is only on Azure. So Stefan proposed: let's see how it goes, but maybe we might have to keep only Azure as a provider of Windows Server virtual machines, at least for the upcoming months, because we are wasting too much time on those builds. So for that one, I'm finishing trying with the end user and then I'll move it back to the backlog. Hello newcomers — we have two people joining. The java.net cache is no longer considered by the update center. I've reopened that issue, which is about the java.net2 repository on Artifactory, whose upstream is gone. There are at least two local repositories: one that I created two days ago and one which is more ancient. They are private, not visible by default.
But both of them have all the artifacts. We should check exhaustively, but I checked the checksums against the global repository and it looks really good. So at least if that repository is gone by accident, we could absolutely restore it on our side. We'll wait for Daniel to give us the history when he's back from holidays, so January. But yeah, we have at least three copies, and we have the old Artifactory instance, migrated Sunday, as a backup of the backup. I wasn't able to find a way to download these artifacts yet, because all of these repositories are private and marked as blacked out, so you cannot see anything on the Artifactory front end — it says no items, even though there are. So that issue stays open so we keep track of it; that's important. And I'm waiting for Daniel, so no work is expected from anyone until Daniel is back. Any questions? Your turn. So, close to done: the new Kubernetes cluster, privatek8s, which aims at hosting infra.ci and release.ci once everything is okay. What's the status of that one? The cluster is up and running. I deployed all the services on it yesterday — not Jenkins directly. And today I deployed a controller on it, with all of the infra.ci.jenkins.io configuration except the job definitions. It seems to be working, but I can't fully access the instance: I can ping the machine from the VPN machine, and it works with SSH, but going from my browser, with a hosts entry on my machine, doesn't work. We are working on it. The good news is that I had to change a credential name in the refactoring, to have a second CI instance in the configuration-as-code, and it didn't break the current infra.ci — it was an error we were chasing a bit. It's nice. Nice work. So now you will have to get your hands dirty with iptables rules — I'm sorry for you. I will try to help as much as possible. So, which means: as soon as we are able to fix that VPN issue and we can reach the new infra.ci,
we will communicate about migrating the current infra.ci to that new instance. Then we can trash the temp-privatek8s cluster. That's the next step, which means there might be interruptions when generating the monthly and weekly reports, when deploying the jenkins.io website, etc. So we will have to communicate on status.jenkins.io and IRC. We also plan to do a full, clean migration, meaning we will back up the Jenkins home and migrate it to the empty instance that we created. Last time we migrated infra.ci, we didn't care about the build history; we do now, because there are other jobs on that controller that are not infra related, or not directly part of our tooling. So that will be your next step, Hervé. Nice job, because yeah, that was a lot of hidden work, especially in the network area, the Terraform area, Azure, and IPv4 routing. So it's really cool to have that part done by someone other than me; I'm really happy with the outcome. Really nice job. About that issue — recreate the networks — it is open, but it will go back to the backlog. Or we can close it and open a tiny issue. The last item should be a VPN for the public network: last step, a VPN VM for the new public network. So that's why I propose a new issue, because the network part, for me, is finished, even though we haven't validated the IPv6 part. So my proposal is to rename the issue and create two sub-issues, one for validating IPv6 later, because we will have to do it. The huge work there has been done, and the VPN VM is not required for the public network, because we can use the private VPN to reach the public backends through the peering. But we will need that new scenario for new infrastructure contributors. For instance, Adrien Lecharpentier needs access to the public cluster for the plugin scoring application, but we don't want to give him more by default.
There is no reason to grant access to the private area within infra.ci or release.ci. So the idea is to have a VPN aimed at people who should operate, or at least have read access to, services on the public network. And that one is not mandatory today, because what is mandatory is administrators having access to everything, and people who currently access release.ci for releases keeping the same access — which is the private part we are working on. So if it's okay for you, Hervé, I will create the two issues and we consider the current one done, because you completed the scope of the issue, which was creating brand-new networks managed as code by Terraform in the azure-net repository, and a new VPN with the same set of features we have today. So for me the work is done. Is that okay for you? Did I miss anything in managing the issues? There is now an issue opened by Gavin about Netlify for the jenkins.io components. I need to check — he created the initial issue; we'll have to look at it in the next iteration. I assume we need a website deployment somewhere, and Netlify is the choice. Yes. We need a token, from a GitHub App or... It's the bot token. Yeah, we'll need to handle that. It should be a quick one. GitHub App for the plugin health scoring: that was a request by Adrien. I will postpone this one until he's back from holidays. It's to switch the plugin scoring application from a GitHub token to a GitHub App. The challenge we will have to face — which will also be there for Jean-Marc when collecting metrics, and in other scenarios like this one — is that I'm not sure what the policy is for the administrators of the jenkinsci GitHub organization; we are not admins, at least not me, not Hervé. What is the policy when they install a GitHub App for managing thousands of repositories? In Adrien's case, the GitHub App should only be able to run on the plugin-something repositories. I don't see it otherwise. We are managers of the application on jenkinsci, but an admin has to validate the request.
Okay, so maybe it's our job to automate that: I want to install on that set of plugin-something repositories, not on thousands of repositories. I'm not sure why — I don't know if there are private repositories in the jenkinsci organization. It's not mine to answer, but I would have asked about the installation on all repositories. The person who will say yes or no is Wadeck, the security officer. And if I were Wadeck, I would say: I don't care, you have to partition the access. I know it's a pain, but it's too risky. If there is any private repository on jenkinsci, if there is anything that should not be seen — if you don't need access, you shouldn't have access. I don't know why you were saying automate this process. Because we will have to watch for new repositories. Exactly. We will need a process that says: there is a new plugin-something repository. The Repository Permission Updater could be one of the things that could help, because the goal of that repository and its associated build is to list all the plugins and do stuff with that list. So that could be a way to automate it with the current tooling, because we are not alone in having that kind of challenge. So if it's okay, I propose to postpone this one to January. We have some Jenkins mirrors using the wrong media type. That was fixed by OSUOSL, as we said last week, and we need to update the Apache configuration on archives.jenkins.io. I totally forgot about this one; it should be quite quick. Sorry? No, I was about to say we didn't have time to look at that one. Same for the mirrors status report showing wrong results: the goal is to check the status of the disabled mirrors when we have time, and re-enable them. So that one was paused until the JFrog migration. So I need to check again if my setup is still there, if it's still buggy, or if it's still eventually consistent or not — I hope not. And my next step is to try to build a plugin with a proxy to my test area, which is a completely different setup from what it's doing right now.
That's the worst-case scenario, but I have to start testing somewhere. And that means I will have a proper setup before forcing everyone onto the same repository, to test plugin builds with. That's all for me on the work in progress. For the rest of the backlog, we didn't add new issues. Planning for supported JDKs: that one is on Stefan, who is on holidays, so it's postponed for two weeks. I don't see new issues at the bottom while I'm looking at the recent issues that could have been opened. Do you have other topics that should be mentioned, to be worked on during the upcoming two weeks? One, two, three — no. Okay. So I think that's all for today. I'm going to stop the recording. For everyone following us on the recording, see you in two weeks. And for the others, just wait a minute. So, bye-bye.