Yeah, absolutely. And I was going to get the meeting notes up, but I think I'm one week off: the notes I see are dated the 22nd. Ah, here we go, today's notes. Thank you, chat, for the link to today's notes. Okay, I'm ready to start the recording. Oh, the recording is already running, so we can go ahead. Welcome everyone to this week's Jenkins infrastructure team meeting; today is the 26th of October. Let me share my screen so you can see the notes in real time. No announcements today, unless I'm missing something, except: don't forget to register as a voter for the Jenkins election, and to vote. Just a reminder: to be able to vote, we expect you to already have made a contribution, and by contribution I mean not only code; it can be issues, help, documentation. Check the Jenkins blog about the election; all the instructions are there, and you're two clicks away from being registered as a voter. We need people to raise their voices. As a reminder, the deadline to nominate people is this week, on Sunday, and the deadline to register for the election is next week. I don't have any other announcement. I saw a note in the Jenkins release IRC channel that next week's LTS is a security release, and Daniel asked everyone to be more conservative about merges to the master branch of the Jenkins core repository. So be aware that, because it's a security release, we'll follow the security process. On my side, I have a personal note: last week I was responsible for trying to fix something to help the release, and due to my changes the release ended up finishing late at night. So let's try not to merge anything, even if something is broken, to let Daniel release quickly; we can fix the rest afterwards, unless it's something he strictly requires.
I guess you also plan to disable the weekly release? Correct. So we disable the weekly prior to the Tuesday scheduled build, right? Absolutely. Okay, can you confirm that the LTS release should happen Wednesday and not Tuesday? The LTS and the weekly will both happen Wednesday, and that's why we disable the weekly on Tuesday. Okay, thanks. So, do you have other announcements? That's it for me. Okay, cool, we can go ahead. We just finished the upgrade of the NGINX ingress on the AKS cluster. That was not an easy one: we almost ended up with an outage on some services, and we had a bunch of issues. Half of those issues came from having too many elements to fit into that upgrade, but it was mandatory, so we were overwhelmed by the amount of tasks and ended up forgetting some tiny ones. When you have 100 lines and you miss the last one, you miss it, and that's not surprising. There is room for improvement in our Helm chart code. It was hard to get into, at least in my personal opinion: there were a lot of elements, and it was hard to know which ones were in production, which ones weren't, and which parts to focus on. Cleaning that up would be a nice and easy improvement for the future. This upgrade was a hard one because the migration path to the new ingress modules was not easy; we had good surprises and bad surprises. It should be okay right now: we are using the latest NGINX version and the latest ingress controller, and we should be able to upgrade Kubernetes to the next version without risking a deprecation, at least for the ingress part. So thanks a lot, Hervé, for all the work you did on that one, because it was a lot; without a test environment and without access to the secrets, you had to guess some elements. That's another improvement to make: opening access to some secrets to Hervé to make him more autonomous in the future.
Thanks also, Olivier, for jumping on the call today to help us. It was nice to have a third pair of eyes; your experience and your feedback helped a lot. So thanks, folks, because that was not an easy one, and we got the upgrade done, which is cool. Is there any question about that upgrade? No, just congrats, because we had been preparing that upgrade for quite a long time. Yeah, I agree, congrats. Do you want a separate retrospective, or is it captured well enough here that we go with the notes that have been taken? I think a quick retrospective would be nice, to see how we can improve in the future. This was difficult to plan fully in advance because of the size of the team, but it would be nice to identify how we can do better next time to save a lot of time. Do you want to do it now or as a separate exercise? A separate exercise, I think. At least for me, it has worked well to have a retrospective document that we make notes in and think about individually before any discussion. Yep, good idea: we can do that asynchronously and then discuss. Okay, I'll take care of creating that document, if that's okay for everyone, and share it so everyone can add their own notes, ideas, and improvement proposals. While working on that, we also had some issues caused by the plugin site. It's deployed with two backend replicas and one frontend, and the issues are related to DNS: the second replica was constantly in DNS error, unable to resolve some names without any apparent logic. I'm wondering if it's related to the recent Let's Encrypt root certificate change. It's a pure DNS issue; maybe there is something related to Let's Encrypt, I don't know. I'm just wondering how old that image is. The image is relatively recent, less than one month old, but the base image it's built upon has not been updated for two years.
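To illustrate, the symptom on the broken replica can be checked with a small in-pod probe along these lines (a minimal sketch; the hostnames to test are placeholders, not taken from the meeting):

```python
import socket
import time

def resolve_with_retry(hostname, attempts=3, delay=1.0):
    """Try to resolve a hostname, retrying on transient DNS failures.

    Returns the sorted list of resolved IP addresses, or [] if every
    attempt fails.
    """
    for attempt in range(attempts):
        try:
            infos = socket.getaddrinfo(hostname, None)
            return sorted({info[4][0] for info in infos})
        except socket.gaierror:
            if attempt < attempts - 1:
                time.sleep(delay)
    return []

# Probe the names the backend depends on; a healthy replica should
# resolve all of them, while the broken one fails intermittently.
for name in ("localhost",):  # add the real backend hostnames here
    print(name, "->", resolve_with_retry(name) or "FAILED")
```

Running this from both replicas (e.g. via `kubectl exec`) makes it easy to compare resolver behavior between the healthy and the failing pod.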
So I've sent a pull request, and right now I'm going to release and try to deploy a new version. It will be interesting to do that in production, though. It was using the JDK 8 Alpine image, which hasn't been updated for two years. Since the Alpine tag is not mentioned anymore on the OpenJDK Docker Hub page, I assume they dropped support for it. For sure, two years ago running Java on Alpine Linux was a nightmare; it's better now, but I understand. So we switched to the slim variant, which is still built on the OpenJDK 8 image; it's still not Adoptium, but it will be more recent and built on Debian. Since it's DNS, we don't have formal proof that it's related to Alpine; however, Alpine 3.9 was well known in the Kubernetes area for having DNS issues, and I've added some links. So the idea is that we drop Alpine and try with this one. That means upgrading and maybe breaking the plugin site, so we will have to update status.jenkins.io for that. We have Fastly in front, so we should be okay; however, it's the backend for search, so we might break the search. Where is that PR? I can't find it. It has been merged already. Okay, that's why; when you merge a PR, I only look for open PRs. Let me add the link in the notes right now. About those links: nslookup from BusyBox has weird behaviors, and Natanael Copa, the lead maintainer of Alpine Linux, points to some effort that was put into Alpine 3.11 to solve a bunch of the DNS issues. So that one was pretty important; maybe we missed some things, and I really hope we won't break anything here. This also means the plugin site is not on the standard Docker image build, the automated system that creates releases and updates everything, so we might have some work in that area. Quite easy for newcomers. Yeah, but I don't think we should wait for newcomers in this case.
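For context, the base-image change discussed here boils down to a one-line Dockerfile switch, roughly like this (the exact image tags are an assumption, not quoted from the PR):

```dockerfile
# Before: JDK 8 on Alpine -- no longer updated, and Alpine 3.9's musl
# resolver is known to misbehave with Kubernetes DNS (improved in 3.11+).
# FROM openjdk:8-jdk-alpine

# After: the Debian-based "slim" variant -- a bigger image, but it uses
# glibc's resolver and still receives updates.
FROM openjdk:8-jdk-slim
```

The rest of the Dockerfile may need small adjustments too (Debian uses `apt-get` where Alpine uses `apk`), which is part of why the deploy is worth watching closely.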
I think we should just fix it and move forward, especially if we need to build a new Docker image. We have two newcomers today; if you're interested, that one could easily be done with some mentoring if you want. Just for context: we already did the main work of automating the Docker image process. We use hadolint to lint the Dockerfile, we use updatecli to update dependencies; I mean, we have quite a lot of automation in place. The only thing we need to do is to update the Jenkinsfile so that it uses the new process, because for that image, at this stage, I think we are just running manual commands. So it should be pretty straightforward to automatically build new Docker images each time we modify the Git repository. I've added a link in the notes to the existing documentation, which is, let's say, 80% complete, and I'm also adding a link to an example for anyone who is interested. So that's all on the plugin site; do you have any questions, or is anything unclear on that topic? Just one thing regarding the plugin site: we may have to create a repository on Docker Hub, I'm not sure if we already have one, or rename the existing one. Do we already have one? Yeah, jenkinsciinfra/plugin-site. Okay, so we already have one, that's perfect; no renaming then, since it might cause an outage when upgrading. On the topic of AWS cost management, I need to add the two screenshots I took this morning that I already shared internally on the CloudBees side. We saw a decrease in the daily costs on the AWS account, which is quite visible. We cannot be 100% sure it's caused by the IMM label change; it might just be less activity on ci.jenkins.io, but it's going in the right direction, so let's continue: at least we are decreasing the cost. The next step, as we said last week, is to work this week on using spot instances for both the agent VMs and the EKS cluster.
Those are the next priority steps after the NGINX ingress. And we have the DigitalOcean part; did you have any feedback, Olivier, on DigitalOcean? Okay, we might want to escalate with Kevin; I don't know if he's still involved in that part or not. We should also send a reminder ping by email, since it has been one week; I'll take care of that if it's okay for you. A note about the transition from the CloudBees account to the CDF account, so we can apply credits: it's now also a top priority. I'm currently trying to gather all the email threads that involved Oleg, Mark, and Olivier on that subject, so it's still work in progress; I'll try to report next week. Isn't the first step a transition from one CloudBees account to another CloudBees account, so that we get the separation of the two? Yes. I just want to gather all the elements that were shared, because not everyone was involved in all the parts. I'm trying to aggregate everything and share it with all of you, to prepare a plan so that anyone can take over if needed, because there were too many threads spread between internal CloudBees issues and email exchanges with Amazon, CloudBees, and the CDF. I just want to be sure we don't mess it up, because the risk is ending up with an account that is not tied to a credit card that can afford the payment, which is 10K per month as of today. That would mean stopping Trusted, PKG, and a bunch of the machines, so the outage could be quite significant. Right, that would be disastrous. Exactly. I don't know how much time you get on AWS before non-payment has consequences, but I just want to be sure. We don't want to find out. Right, that is not an experiment we want to perform. Okay, issue triage: we should have done that, but with the NGINX ingress taking more time than expected, we had to delay it. The idea is to continue the effort; in particular, Olivier has a bunch of issues that might be either closed, taken over by someone else, or simply stopped.
We started something, but we need to review it with Hervé. We tried to create a project on GitHub and did some experiments. I propose that we report on that next week, to give us time to continue the experiment, because it's too early to give advice. The goal is to have an aggregation, a kind of dashboard that will be partly automated, because you cannot do that fully in JIRA. It's really hard to choose between the two tools, and a friendly dashboard could help us keep track, for a given subject, of all the issues and pull requests we have, because right now we cannot do that and there is no easy way to do it. The last topic is ci.jenkins.io. There was an issue with the Azure VM plugin for a few days, since we applied the IMM part. The reason is a bug in the Azure VM plugin, which has been fixed, I assume during the weekend, by Tim. So it's working now; that fixed the issues with Configuration as Code. It was okay for us, no outage; however, if you had to use the UI to save the configuration of the cloud agents, you were greeted with an angry Jenkins. So Tim fixed the plugin, he helped me yesterday, and we updated all the plugins on ci.jenkins.io, and it worked: everything is up to date with recent versions. That's all for me. Do you have other notes or topics that you want to share? Okay, sounds good. That was quite a lot of topics in today's meeting, so congratulations. Yes, and I know Hervé deserves a nap; between the children, the short nights, and the long days, you have earned a long one. So, if it's okay for everyone, we can stop the meeting. See you next Tuesday.
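As a rough follow-up to the triage-dashboard idea above, here is a minimal sketch (hypothetical, not the team's actual experiment) of aggregating open issues and pull requests for a repository via GitHub's public REST API; the repository name is just an example:

```python
import json
import urllib.request

def fetch_open_items(repo):
    """Fetch open issues for a repository, e.g. repo = "jenkins-infra/helpdesk".

    GitHub's /issues endpoint returns pull requests in the same list.
    Unauthenticated calls are fine for a small dashboard (rate limit: 60/h).
    """
    url = f"https://api.github.com/repos/{repo}/issues?state=open&per_page=100"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(items):
    """Split GitHub issue objects into plain issues vs pull requests.

    PR objects carry a "pull_request" key; plain issues do not.
    """
    prs = sum(1 for item in items if "pull_request" in item)
    return {"issues": len(items) - prs, "pull_requests": prs}
```

Since `summarize()` works on any list of dicts with that shape, JIRA exports could be mapped into the same structure and fed through the same dashboard.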