I can access the button, perfect. Hello, everyone. Welcome to the Jenkins Infrastructure weekly team meeting. We are the 12th of December. Reminder: last week's meeting was canceled because a lot of team members were not available at all, so we have two milestones to unpack in one. Today around the table we have myself, Damien Duportal, Stéphane Merle, Bruno Verachten and Kevin Martens; Mark Waite will most probably join a bit later.

Let's get started with announcements. Weekly release 2.436 is out: WAR, packages and Docker image. Changelog soon, I assume, if it's not already done, Kevin. It's already merged and live. So that means, Stéphane, we're ready to roll for infra.ci and weekly.ci. Is there any question, anything specific to underline on the changelog of that weekly release? Nope, no one has anything. Perfect.

Now, on to the billing status. I was able to report the cost from last month, November. For the Azure account itself, November was at $7.3K, which is clearly inside our goals for this year, so congratulations everyone. The goal for December is to decrease below $7K; ideally $6.5K would be a nice goal. For that, we will have to continue our effort on optimizing the ARM and Intel node pools, and on migrating workloads as much as possible to the sponsorship account, so we won't have to pay for these ephemeral workloads. December: $6.5K as the goal. Is there any question on this topic?

As a reminder, we have opened an issue, and the team just started to check it: we have $1.5K up to $2K per month on the Azure storage account used for get.jenkins.io. There are improvements, but these improvements will land in the first quarter of 2024, so January, February, March. We might gain quite some money here; however, we won't be able to make it for this month, so let's focus on the two open tasks. Okay for you? Okay. November. So, the Azure sponsorship: we are now...
We feel safe, because in November we were able to consume the first dollars from the credits automatically. Let's go for using it as much as possible: if we are able to move at least $0.5K from the other account, that would be a really great goal for December. Any question on this one?

AWS: we consumed a bit less. We were at $9.8K, below the $10K goal, but far from the expected $5K. The reason why we consumed a bit less on AWS is because, as discussed during the governance meetings, an additional effort has been made on the BOM builds, which are now built less often and with fewer tests inside. Special care has been taken by Basil, Alex and Mark to decrease the cost of the infrastructure through the BOM builds compression. More in 2024, but thanks for that; it allows us to consume fewer credits on the CloudBees account. And there are discussions to see what the next step will be. As a reminder, our currently open action item for decreasing that credit usage is using the new update center, and we are relying on the security team's analysis for this one. So no change expected; we should try to stay around $10K for December, no specific goal here.

DigitalOcean: is it okay for you to update us on that topic, even if we have an action item later? Yeah, so I've asked for the renewal of the sponsorship; they agree, they're happy to renew it. They just ask us to wait until the current one is finished and to ask for the new one in January. But it should be okay. Cool, I'm currently checking: we have $960 of credits left for December. Last week we consumed... I promise next week I will check this billing status. We consumed $766 in November, renewal in January, so that should be okay. If we see that the renewal takes some time, the immediate action item for us will be to disable the DigitalOcean cluster from ci.jenkins.io, which will immediately decrease the credit consumption.
If we do that, archives.jenkins.io and the Kubernetes cluster will consume around 80 bucks for the month of January. We should eventually have $100, but yeah, we are okay on DigitalOcean. I haven't reported in detail on the other billing statuses, but it's mainly Fastly, and there's nothing to say here: we have an almost constant consumption on Fastly, which is 100% sponsored, so no need to spend more time on this one. Do you have any question on the billing status? Okay. I'm going to try to check this in this format weekly now, so we will follow it weekly. If it's too much, we can always change, don't hesitate; that's just me trying something.

Important announcement: repo.jenkins-ci.org, which is an Artifactory SaaS instance hosted by JFrog for us, will have its certificate expiring on the 20th of December. We are late here; I most probably forgot to add a calendar event, and usually their support contacts us. We are currently trying to find a solution, because a certificate valid for one year needs to be paid for: I wasn't able to find a serious CA provider that provides free one-year certificates. If you know one which is not suspicious (because there are some, but it wouldn't be a good thing if their CA got excluded from web browsers during the year)... So there are free ones, but yeah, there is a cost. Yes, sorry? Is it such an issue to renew it every three months? Depends on what JFrog says; I don't think so, it's just that we have to be careful about not forgetting it. The problem is, that might mean they have to restart the instance each time. My proposal is that we use a Let's Encrypt certificate, send it to them right now because we are really short on time, and see if we're able to renew it every three months; that will give an answer to your question of whether it's a problem. Let's use something that we can master and which is free.
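The Let's Encrypt proposal above could be sketched as follows. This is a hypothetical command sequence, assuming certbot is available and we can answer the HTTP-01 challenge ourselves; it is not the actual procedure agreed with JFrog:

```shell
# Hypothetical sketch, not the real runbook: issue a 90-day Let's Encrypt
# certificate for repo.jenkins-ci.org using a standalone HTTP-01 challenge.
certbot certonly --standalone --preferred-challenges http \
  -d repo.jenkins-ci.org

# Files that would then be handed over to JFrog support:
#   /etc/letsencrypt/live/repo.jenkins-ci.org/fullchain.pem
#   /etc/letsencrypt/live/repo.jenkins-ci.org/privkey.pem
```

The 90-day validity is what drives the renewal cadence discussed in this meeting.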
Does it make sense to ask them? Because I agree with Hervé, absolutely. And maybe they can provide something and send us the certificate themselves: if they can renew the certificate using a Let's Encrypt HTTP challenge, they have the web server, so theoretically they could. Six months ago they weren't able to do this; it was on their roadmap. But as you know, most of the JFrog engineering team for the SaaS works from Tel Aviv, so they might not have all the time they want right now, given the context. That's a thing to ask them again. Yes. I agree with Hervé: we should try renewing every three months, it's not that complicated. Any objection? Okay.

Upcoming calendar: next week we'll have another release; it will be the 12th and the 19th. Is that correct? I think so. I forgot when the next LTS is; let me copy-paste from last week. January 24th. Oh, you're right, thanks: January 20th. Do you know the version number? It'll be 2.426.3, if I'm not mistaken, Damien. Sorry, can you repeat? 2.426.3. Cool, so nothing for us to comply with today. Let's see if there is a security advisory announced. When was it? So, no security advisory.

Next major events: FOSDEM at Brussels. The 3rd and 4th? Oh, 3rd and 4th, yeah, you're correct. The 3rd and 4th of February, with a Jenkins contributor summit on the 2nd of February, yeah. Yep, thanks folks. Do you have other major events? Okay, so let's roll.

What were we able to finish during the past milestone — not the one we just finished, but the one before, because we cancelled the meeting last time? There were plugin repositories archived, including their Jira components, so thanks everyone involved in that change. A new plugin named Pipeline Agent Build History, which allows seeing in detail, for a given agent, all the pipeline build history, because that information was not really easily available before that plugin. The plugin is brand new, and it has been installed on both infra.ci and weekly.ci for now.
It requires a new baseline, so we can't install it on the other controllers; the current LTS won't support it, we need the LTS bump. So let's see the result of this one. Visually it's really, really nice: you can go on any agent and see which builds it ran. Not really useful for us on infra.ci, though, because we don't have permanent agents: all of our ephemeral agents run one build and then are discarded. But at least it works, so thanks.

Yes? It has flaws: it won't work with any instance with lots of jobs. There is a follow-up, and it will... yeah. I'm not sure I understood what you said, can you repeat? There is an open issue: it's a plugin which, like the build history plugin, can only be used on instances without a lot of jobs. If there are a lot of jobs, it will crash the controller. Oh. Yeah. That's an open issue on the plugin? An advice given by Jesse, somewhere, I don't remember where, but he warned us about it. So yeah. Okay. It's a nice plugin, but not for us, absolutely not. Which means we will have to uninstall it from infra.ci, because we have a lot of jobs. On weekly.ci it's a demo instance, so it's not a problem there. No, it's installed on infra.ci, and we have to remove it, which will mean a bit more work, because we will have to split the images used for infra.ci and weekly.ci. We have the capability with buildx, but that means additional work then.

May I ask you, Hervé, to open an issue explaining that and mentioning Jesse, so we can confirm on the helpdesk issue that it will be an action item for us to work on splitting both images? Because the risk here that you are underlining, and that's a good thing, is that infra.ci might be blocked by this, and some of the issues we saw during the past week might be related. Good catch, Hervé. Good catch, absolutely.
That explains some weird things we saw with Stéphane last week on infra.ci, mentioning performance problems. We will need to remove it. And by the way, I'm not sure what the interest is of installing it on weekly.ci, because there is no agent on that instance; I just realized this now. I think it was for exposure; it had been discussed with Marcus, I think. My proposal is that, short term, we remove the plugin from both instances, and then we can have the discussion about how to showcase it on weekly.ci, because I believe that will require way more work than expected, including eventually starting an agent. Good point, Hervé, that will help. But I would be interested in having Jesse's feedback, because that will point the developer of the plugin to the plugin internals; if it's only an advice on a private channel, it doesn't make sense somehow. Is it okay for you to open an issue and ping him, so he will be able to take the required time to explain and point to elements? It was a pointer in the code of the plugin with the follow-up, which is quite self-explanatory; I am searching for it. And yeah, if you can open an issue for us as an action item.

Basil is added to the board team, because Basil has been elected as a board member. Congrats. And you've been elected as an officer, and Kevin as documentation officer too. Yes. So congrats to both of you too. Thanks. Thanks, Hervé.

The Belnet mirror is back in service. The user's issue disappeared after they were able to fix their problem, and they didn't send any traceroute or proof that the problem was not on their own side, while the Belnet administrators didn't add any firewall rule blocking them. So I've re-enabled the Belnet mirror, and if the end user still has issues, they can still provide network proofs. But until then, for one user having one issue and never answering once their issue had been solved initially...
Yeah, unless someone objects, I've added the Belnet mirror again to the list of available mirrors. We had a request from Oleg to be removed from everything on Jenkins infra and added to the alumni special group; that has been done, and he confirmed it.

Mark Waite was able to set up his VPN access. So the problem was between chair and keyboard? Sorry Mark, you're not there, so that's the right moment to say it. No, it was mostly missing time to configure everything properly on Windows, but it worked on his Debian machine.

Hervé, can you just give us a quick summary of the Contributor Spotlight website? Yeah, it's working, and it's integrated like the other websites, with a pipeline and a preview website. So nothing particular to say; just a summary, it's working. Cool, it's running in production. Cool; I mean, you did great work, so it's important to underline the things we were able to achieve successfully.

There has been an issue closed as not planned. I don't remember: someone had an issue with the CD release of their plugin. Was it solved? Okay, there was an issue before the CD process was complete, but the release finally happened, so nothing to say here. Any question for two milestones ago? Nope. So, what were we able to finish for the last milestone?

There has been a jenkins.io account to be deleted. Just a quick recipe for everyone: when we have someone asking for the deletion of their jenkins.io account, I contact them privately through the email inside the account, not publicly on the issue. If the user wants to reveal the email publicly, that's their problem. The goal is to challenge them privately to change the email on accounts.jenkins.io. That way they prove they can authenticate, and changing the email to something not working disables their account by default.
Then, as an admin, you check that they did what you asked them to do, and then you can delete safely: since they have access to the account, whether or not they are the rightful owner, they demonstrated they can use the password and receive email on the associated address, meaning they have full control of the account.

There was an issue on a ci.jenkins.io job building the Docker image of Jenkins; thanks, Hervé. Can you just give us a summary of what you fixed or changed in order to solve that problem? curl.exe was used in the Dockerfile to download the plugin manager, I think, and there was a TLS error. I didn't check why there was an error; I replaced curl with the native PowerShell one, I forget the exact name. Yeah, Invoke-WebRequest, the built-in function, and there are no errors anymore. So we're good. Right, I haven't checked why curl.exe broke; most probably the TLS certificate package or equivalent was either removed, changed, outdated or misconfigured suddenly in one of the base image updates. So I don't think it's a problem on our side, and using the native PowerShell client is really good, and good enough. As I wrote, since the weekly Docker image was released, that means your fix is in production now. Thanks.

We had the DigitalOcean PAT rotation, because the tokens were going to expire in a few days, middle of December. So they have been changed; nothing to say about this. The calendar event has been updated for three months, 90 days, from now: every 90 days we will send a new certificate for repo.jenkins-ci.org, and we will renew the DigitalOcean PATs.

There was an issue with jenkins.io not being updated. Same, I believe, Hervé, you worked on that problem along with Kevin; can you give us a summary? I don't remember what happened and what the fix was, but I saw that you fixed it. So, jenkins.io wasn't loading up new content that had been merged, like the 2.435 changelog, blog posts that were merged, stuff like that. So I raised the helpdesk issue.
Hervé figured out that it was a stuck build and got it pushed through; everything was loaded up and fine after about half an hour. So, real quick, nice and easy. I have the same: I haven't understood why the build was stuck for five days and still running, but I've killed it and the next one passed. Okay, most probably a controller restart that went bad. Okay, cool. Thanks, folks.

Now for the work in progress. First, we have the SSL certificate for repo.jenkins-ci.org. I'm not able to find it at first sight. Fifth line. Oh, perfect, thanks. We already discussed this one earlier; my proposal is that I proceed with the Let's Encrypt certificate, so we won't be caught just before Christmas by a problem that would last the whole month of January. Is that okay for you? No objection? Let's Encrypt, 90 days, for now, to avoid breakage right before Christmas; we'll check for a one-year option, or keep using Let's Encrypt, in January 2024. Is there any question? No, okay.

Next topic by priority: migrate updates.jenkins.io to another cloud. Hervé, can you just give us a summary on that topic, because I believe it's blocked? It's a pull request containing the change to upload the content to the new location; it's ready for review, and we are waiting for some availability from the Jenkins security team so they can get some time to review it. Meanwhile I've started writing a jenkins.io announcement. Just a minute, I'm sorry. Continue, continue. A jenkins.io announcement proposal, a draft, if that's the correct term, which explains and describes this change. I'm putting it here because we want to ensure that Jenkins core will follow redirects from this new location. Which is the case now, but nothing is enforcing it, so writing a check will ensure the contract between infrastructure and core. Okay, I think you should go ahead with all the topics, because the mirror... Yeah, sure. You wanted to go over?
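As an aside, the curl.exe fix reported earlier (replacing it with PowerShell's built-in Invoke-WebRequest) might look roughly like this in a Windows Dockerfile; the base image, URL and destination path below are illustrative assumptions, not the real file:

```dockerfile
# Hypothetical sketch of the fix, not the actual Jenkins Dockerfile.
FROM mcr.microsoft.com/windows/servercore:ltsc2022
SHELL ["powershell", "-Command"]

# Previously (failing with a TLS error in the base image):
#   RUN curl.exe -fsSL https://example.org/jenkins-plugin-manager.jar -o C:\plugin-manager.jar
# The native PowerShell HTTP client does not depend on curl's TLS setup:
RUN Invoke-WebRequest -Uri 'https://example.org/jenkins-plugin-manager.jar' -OutFile 'C:\plugin-manager.jar'
```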
The next one: the latest symlink for the Windows installer on get.jenkins.io points to an old release. It was an issue opened on the helpdesk this week, I think. I started looking at it, and I don't know where this person found that latest link. So I asked them where they found it; once they answer, I can dig more into this issue. It's in progress.

So this is a longstanding, known behavior; I wouldn't worry about it. I think it's behaved this way since we did the packaging updates two or three years ago. And yes, I agree it's odd, it's unexpected, but it's a lot of work to resolve, because when Olivier and I did this initially, we intentionally left this one alone. So I would say for now we just accept that yes, they have recorded a valid issue, and that valid issue should not be in our plans to work on for at least months, because it's just not important enough to justify our effort. The web pages do not have that problem, right? www.jenkins.io never goes through that symbolic link. And therefore, let's not spend our effort on it; let's move it out of this milestone and let it wait for months. Okay. Yeah, yep, good, Hervé.

While looking at it: get.jenkins.io is also serving some updates content. Yeah, please, that's another topic; I just read your last message. That's expected. It has always been, and it has been documented in, I don't remember which, script. However, I will disagree with Mark in one sense: there is the proposal from Hervé, from almost one year ago, about switching from blobxfer to azcopy, which is the official and supported way to copy data to any kind of storage account.
I'm not sure if azcopy supports blob-to-blob copy, that has to be checked, but that command line properly supports symlink dereferencing, as Hervé demonstrated on the update center topic, and it's the tool we should use. So, Damien, are you saying that it might be fixed as a happy side effect of other work we're doing? Exactly. The only side effect I see here is storing a bit more data, because it will duplicate the latest content; however, that's just a few hundred megabytes, and paying for storage is close to nothing, especially with blob storage. Hervé, did I miss something on that part? I remember you commented on it, and since you tried azcopy, that should be a solution. Yeah, maybe, I don't remember the link, but if you say so, yes. The problem is the link exists for some and not for the others. So we can decide to remove the latest link, but it's way more complicated to remove it once copied, unless we find a way to exclude it properly from the script running regularly on updates.jenkins.io. Anyway, it's more a matter of: should we spend time on this now, or later? I believe the good secondary side effect will be getting away from blobxfer, which has been deprecated since at least 2021, if not earlier. This script is a critical piece of the infrastructure, and relying on a command-line tool that has been deprecated and is not maintained is not a good thing. And I believe that Hervé's proposal, since it's already one year old, should be considered. So let's put helpdesk #3414 on the next milestone, for example. Yes, and that means we might close this one as a duplicate of the old one, add a comment on the old one, and put it on the new milestone. Is my understanding correct, and do you agree? Yes, if it resolves the current issue, yeah, sure. And if it doesn't, then we reopen this one. So my interpretation is: azcopy might or might not resolve this symbolic link issue; if we got azcopy and it did not resolve this, that would still be a positive outcome.
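Hervé's azcopy proposal could look roughly like this; the storage account, container and local path below are made-up placeholders, and the exact flags would need checking against our script:

```shell
# Hypothetical sketch of replacing blobxfer with azcopy for the update
# center sync. azcopy is Microsoft's supported CLI for storage accounts.
azcopy copy "/srv/updates/" \
  "https://exampleaccount.blob.core.windows.net/updates?<SAS-token>" \
  --recursive \
  --follow-symlinks   # dereference symlinks such as 'latest' while uploading
```

The `--follow-symlinks` behaviour is the part relevant to the `latest` link discussed above.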
The benefit we're seeking is not fixing this specific issue; it's getting off blobxfer and onto azcopy. Absolutely. Got it, okay. Thanks, Hervé, for asking the right question on the issue, though: I don't know where their link comes from, and it would be interesting for us to at least get it from the source, even if we update the links. Is there any additional concern, question or pointer here? No.

I'm adding a note on the update center issue, Hervé, about what you mentioned: get.jenkins.io/updates seems to have update center contents. We just don't know yet which parts are expected, or used to be expected and aren't anymore, which could explain the discrepancy in the last-updated timestamp you saw. Is it okay for you to report comments with the links and the explanation on the issue, please? And then we can search; that's a way to not forget, Hervé, because if I can find it, or if I can ask Olivier for help from memory, we could point to the reason why it was copied, and why it could be cleaned, kept or updated. Any question? Okay.

Mark, I just want one last validation: is the scenario for repo.jenkins-ci.org with Let's Encrypt certificates, valid 90 days, at least for now, okay for you given the close deadline? Yes. So that idea will give us a valid certificate that lasts beyond the current expiration date, right? And therefore, yes, absolutely, that is very much acceptable. Hervé asked the right question: is it a problem to renew the certificate every 90 days? And I think the answer is: for right now, it's the choice we have to make. We have to coordinate with JFrog; I assume they're the ones who ultimately apply the certificate onto the site somehow. Yes, absolutely. I remember six months ago we asked them, and they said it was on their roadmap, but they weren't able to tell us when; they could have a Let's Encrypt feature on their platform, on their own, with HTTP validation. Okay. So, let's see.
But yeah, they should be able to provide that feature. And that could be a good way to give them an incentive: if every 90 days we upload a new certificate, at a given moment in time their support will be annoyed by the frequency, and that might help them prioritize the feature. Great. Okay.

Next item. We have a folk that tried to create an account. The email was greylisted by their email provider, as far as I could find. So now they have to contact their administrator, or change their email. Greylisted? They need to contact their email provider. And help me with the definition: what does it mean to be greylisted? Does that mean they reject our email sometimes? Does that mean they reject our email all the time, but using a different color than usual? I never understood the concept of greylisting, and it looks like it changes depending on the email provider or SMTP server. Exactly, that's the choice of the provider: they consider that a grey email is neither spam (the obvious spam trying to send bad advertising or scams) nor the real good email coming from work or from your friends; the advertising ones are in the grey area. Dealing with this grey-area filtering is a whole job in itself, and they do it the way they want, most of the time. That's a nightmare. I still don't understand the reasoning behind greylisting, it makes no sense to me, and the conditions triggering it are never clear. Except that they don't want to give you the rules; they want to keep them, to see if you behave correctly. So they won't give you the rules. But what does that mean for the user? It depends on the provider. Yeah. Either it's an issue on their provider's side, or they can change their email. And as usual, without any answer from this user, I will close the issue as not planned next week.

Okay. So, you started updating the status of the Confluence Publisher plugin.
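For reference, the greylisting behaviour described above is commonly implemented as triplet tracking; this is a minimal sketch of that general technique (an assumption about how this provider behaves, since real rules are deliberately unpublished):

```shell
#!/bin/sh
# Sketch of triplet-based SMTP greylisting: the server remembers
# (client-IP, sender, recipient) triplets. A first delivery attempt gets a
# temporary 4xx rejection; a later retry of the same triplet is accepted.

SEEN=$(mktemp)

greylist_decision() {
  triplet="$1"
  if grep -qxF "$triplet" "$SEEN"; then
    echo "250 OK"                            # known triplet: accept
  else
    printf '%s\n' "$triplet" >> "$SEEN"
    echo "451 4.7.1 Greylisted, retry later" # first attempt: defer
  fi
}
```

A well-behaved mail server retries after the deferral, so legitimate mail eventually gets through; bulk senders that never retry are filtered out.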
Can you give us a summary? Yeah. So, the suspension is merged, merged five hours ago; the plugin site doesn't yet show that it's suspended, but updates.jenkins.io should. So, I'll check updates.jenkins.io just to be sure. You can go on. This is under control and behaving the way the project says it should behave. Yes, it's already been removed from the updates.jenkins.io delivery package, because it's not visible on the HTTP site, updates.jenkins.io, at all. So, yep, it's successful. Cool. Thanks.

Issues modified by a spammer. So, the bulk update feature was available to a lot of people, and someone who thought they were smart decided to bulk-update a bunch of existing issues inside issues.jenkins.io. We were able to lock out the account of that person, and Daniel helped us disable the bulk update feature on Jira; unless we need it specifically, it will stay disabled to avoid such a mess. Now, the problem is: how do we roll back the changes done by this user? Because the bulk update feature does not have a rollback methodology. And I personally don't want to tamper or play with Jira, because I don't know how Jira works, or even if I have administrator rights, and that one is really complicated. So, Mark, we need help from people who know their way around Jira.

Let me give you my latest status, and I've got the action item. So, in the governance board meeting on Monday, it was discussed in depth. And right now, the recommended path forward from Basil Crow is that we accept that we are going to lose all data added to Jira after 6 December 2023, before the spammer, and restore from backup. And the reason he recommended it is because one of the damaging changes was that some issue types were changed from epic to non-epic, and by doing that, all the links were severed, right? And there's no way to recreate all those links that were severed. Now, I've got to check with all the other officers.
I've got to check with Wadeck Follonier and with the security team to see if that's acceptable. I've also got an open ticket with the Linux Foundation asking them what backup they have that precedes December 6th, 2023 at 23:00 UTC, because I don't know what backup they have, and that will be part of this question. So we've got to have further discussions. The board meeting discussion is not a decision; it is merely guidance. We need to come to a point where we say: all right, we are accepting either the loss of those epic links while retaining the data we've received since then, or we're accepting that we lose the data we've received since then while retaining the epic links. That's the kind of decision point we're at right now. So it needs a bigger discussion. That discussion will likely happen in private email discussions, at least initially, between the board and officers. And I will start that discussion, because we've got to decide on a path, and part of the decision on the path needs an assessment of which epics were damaged. And that requires that I dig into things in much more detail. So this one is...

If I may, there is an emergency request here, which is contacting the Linux Foundation as soon as possible for them to send us the backup, before their backup rotation goes too far, if they have retention. Right, and I contacted them on Sunday to ask that, and I continued that Linux Foundation interaction yesterday, and we'll continue it today as well. Part of my worry is that they may say: sorry, our rotation has already rotated it off, and you're out of luck. If so, that's the reality there. Because the problem here is not restoring immediately; it's them sending us the last backup they have, encrypted, because we have sensitive data. We need a way to get the data before their rotation. Well, and that's where I'm not sure what we would do with it, because we have no Jira instance that we can use to restore it.
Yes, but at least it's an SQL dump. So we should be able, at least within the Jenkins security team, between people who have access, because it's sensitive data, but it's an SQL dump: we can extract data or analyze it. We should ask for this as soon as possible, and then decide on restoring or not, because we need the data to be copied somewhere out of their system first. Okay, that's a subtlety I had not considered; I will certainly include that in my request. Great, thank you. Be careful that the data must be sent to as few persons as possible: only you, or Daniel Beck, or Wadeck Follonier, or even me. Right, it must be a secure destination with highly limited access. Great, all right, thank you. I'll continue that; I'll start that discussion with that topic. Thanks. So: the latest dump, to a secure location and limited people. Thanks, Mark.

Oh, the car repairer is just arriving; give me a minute to see what I can do. Can I ask you to drive the next one? If it's okay, I'll drive the next one, because the next one is again mine, if that's okay. Cool. And Stéphane, Hervé, can you continue to take notes, please? Actually, I'm even happy to do the note-taking. So... see you in a minute. All right, thanks, Damien. Where is my note-taking page? Ah, here we go.

All right, so the next topic is the JFrog, or Artifactory, bandwidth reduction project. We thought we had completed it, and JFrog came back and said: hey, you used 20 terabytes in November, and we think that the data is coming from cached copies of Maven Central that you are unnecessarily delivering to others, because they should use Maven Central to get Maven Central resources. We analyzed the log files and saw that they, in fact, exactly match what JFrog said: the log files show that Maven Central is still cached, through a repository named JCenter and another one, the OSS Sonatype one.
So when we removed those caches in a brownout, we found some relatively minor issues: we need to add one or two additional cached repositories, for a very narrow set of jars, to our definition, and then we can remove them. And my proposal was that beginning this Friday, we would go to production with that change. The idea being: let's make the change. Now the question to the infra team is: are you available on Friday? We need Damien, because he is a repository administrator, or we need Daniel Beck, also a repository administrator; the rest of us can't administer the Artifactory repository. So I think the question there is to Damien: can he be available on Friday? And if not, is Daniel Beck available on Friday? Any questions from others?

Okay, so then I think we go on to the next topics. The next topic was: diagnose slowness when greater than 200 parallel tasks are in a parallel pipeline. Let's see, we're using line breaks to tell where we're at there. So this one, I'd propose... I'm not sure that we need to do any further work on it for now; it will wait. Any objections from others? In fact, Damien used the quota extension on the new subscription for this one, so that should be good enough, I think. Okay, good, all right.

So then the next topic was: migration leftovers from the public Kubernetes cluster to ARM64. Yeah, this one is on me. I forgot a few services that were left on the public cluster when we migrated, but we looked around with Damien, and all those ones are really not as easy as we thought. Maybe the artifact caching proxy would be one of the easiest, because it's NGINX, but with a really high level of fine-tuning by Damien. So we need to make sure that the ARM platform will still be able to comply with that fine-tuning; we may have to do some testing. It's not straightforward. And as for Keycloak and LDAP, we need to make sure that the volumes will migrate, because we are not in the same zone for ARM.
For ARM we have to be in zone 3 of East US 2, but all the nodes that we have for now, and the volumes mounted on them, are in zone 1. So to migrate those, we will first need to migrate their volumes to a non-zone-specific area. And that migration needs a stop. We did that once for the weekly, I think, so we know the process, but still we have to plan it and deal with it. So we chose for now to postpone a little and to make sure that we can do it in a straightforward way. And as for mirrorbits, the ARM image is not yet provided, but we are expecting it with the takeover, I think you said, of the project. I know that Hervé proposed something, a pull request or an issue, I forgot, to help them manage ARM64. Do you want to say more? I have to analyze this with them; I forgot this too, but yeah. So yes, we've got stuff, but it's not easy, and it's not as easy as it has been. So we really need to think it through. So it feels like this one is... it's okay if we pause it and come back to it later. I think so. Good.

The next one was: export the download mirrors list to a text representation. It's still me. Go ahead, Stéphane. Yeah, I said nothing until now, so now I have to speak. I studied that. The aim is to have a list of the mirrors that we're using for get.jenkins.io. And I started that as a report from infra. In fact, there's a sub-issue for that, just creating the work area in infra.ci for reports. And I started this issue with a pull request, a skeleton with a script, a bash script, that is able to parse the page and put that in a text file. But it's not good enough. We need to work, and I need to work on that more, probably to provide a JSON instead, and to provide all the IPs with those URLs. So it needs a little bit of work, but it's interesting. And what we did is we chose not to be specific to those mirror lists, and to be more generic with infra information.
That way, we will be able to add IPs or something else, and use that JSON to show that information. So it's better for later, to be able to grow. Not sure my English is good enough, but that's the idea. Thank you. Anything else on that topic? Shall we, given that we've hit time, call it an end, or do you want me to continue going through the reporting on the other topics? Can we keep going? We certainly can. Done, yeah. Let's do it.

All right, so next: the get.jenkins.io migration from mirrorbits to the mirrorbits parent chart. And now, let's see, I'm not sure where we are on the... oh, I can't scroll Damien's screen, but I could potentially stop his sharing. But continue, go ahead. So this one is migrating from mirrorbits to the mirrorbits parent chart. I'm not sure we started that, but the aim is to use the new chart that Damien and Hervé did, which is an umbrella chart with sub-charts. No work has been done on it yet. Okay, all right, good. Because get.jenkins.io only needs part of it moved to ARM: the only component not able to move to ARM is mirrorbits, but the httpd and files parts are able to. Great, all right.

Next: tune the node pool size. With Damien, we did work on that. We created a new node pool, an Intel node pool, sorry, a smaller one, a cheaper one, and we migrated all the services onto it by tainting and using all that Kubernetes stuff. And our problem now is that it's using five nodes where we were expecting only three. In fact, we dug a little with Damien into the CPU and RAM consumption, and we will be able to lower the expectations and recalibrate the limits and requests in Kubernetes, to make sure that we pack a little more and use fewer nodes. So we are making some savings, and that's good.

Our next one is me too, oh gosh. Go ahead. I started to provide some agents on ARM, on Jenkins infra this time. So for that, I had to create a node pool, an ARM node pool, within Jenkins infra, so infra.ci.jenkins.io. It's done.
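The taint-and-pack approach described for the new node pool could be sketched as below. This is only an illustration: the `jenkins.io/pool` taint key, the `x86medium` pool name, and the resource numbers are hypothetical, not the team's actual configuration.

```shell
# Sketch: reserve a node pool via a taint, then schedule pods onto it
# with a matching toleration and nodeSelector, using right-sized
# requests so more pods pack per node. All names are hypothetical.
#
# Tainting the pool would be something like (not run here):
#   kubectl taint nodes -l agentpool=x86medium jenkins.io/pool=services:NoSchedule

cat <<'EOF' > deployment-patch.yaml
spec:
  template:
    spec:
      nodeSelector:
        agentpool: x86medium
      tolerations:
        - key: jenkins.io/pool
          operator: Equal
          value: services
          effect: NoSchedule
      containers:
        - name: app
          resources:
            requests:      # lowered requests let the scheduler pack more pods
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
EOF
echo "wrote deployment-patch.yaml"
```

Applying it would then be something like `kubectl patch deployment <name> --patch-file deployment-patch.yaml`, or the equivalent merge in Helm values, depending on how each service is deployed.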
We got that node pool, the same kind of machines as the ones in CI, so small and cheap. And I started an agent definition in Kubernetes to be able to use it. So for now, there's a specific label. It was not working this morning, but it was last week. So I don't know, somebody touched something this weekend, or the wind, I don't know. But we need to go deeper into that, and we plan a migration of infra.ci.jenkins.io to ARM too. The problem will be the same as the one I spoke about before, with the migration of the volume: that volume will need to be zone-friendly, with any kind of zone. And that means that we need to prepare a stop of Jenkins for a while to deal with that migration. So we need to plan. Great. Anything else on infra.ci?

Okay, next then: set up the secondary Azure subscription. So as far as I know, we are now consuming from the secondary subscription. Yes, I did everything in Terraform and it's awesome; it's working nicely. Great, okay. And we were under budget for November. Under budget in this case means the use of the secondary resources did not affect the budget: it was only a relatively small amount, but we stayed under budget for November 2023 as well, even without the secondary subscription.

Next topic: sponsorships, Digital Ocean. Do you want to share with us how the progress is there? Oh, we touched a bit on it at the beginning of the meeting, but yeah, Digital Ocean is happy to renew their sponsorship, so we are extremely happy about that. They asked us to wait until the end of the current sponsorship. So I filled in their form, but we will wait until January to ask them, to push them, to give us more money. Excellent, thank you. Special thanks to Digital Ocean for their ongoing sponsorship; that's wonderful. They will probably contact us to get a piece of writing about how we are using and enjoying Digital Ocean. Great, thank you very much.

Next topic then was Packer and Goss version tracking.
Stéphane, I think this may be you. Yes, that's my little nightmare, this one. I'm still on it, I'm working a lot. It's the Windows Goss part. Still not up and running, but almost there; each time I say that, I take a week more. So I have both pull requests up right now: the one with the Goss version for Windows, and the one with the CLI matching that Goss version on Windows. I did find out that we need to have a pause before launching Goss; without the pause, it's failing. And I had to add a retry too. Sometimes it only goes through on the second or third time, whatever the timing of the pause is. It's not really consistent. But in the end, we are sure that the image is good, because all the tests go through. So that's the way for us to make sure that what we provide is correct. The problem is that, while building all that, it's taking a lot of time. I'm on it. Thank you. I'm not happy, because I'm taking way too much time. Great, all right. Absolutely not, I'm happy with the other points.

Okay, next then is: redirect the Chinese pages to English pages. Kevin Martens, I think this one's for you and for me. Yeah. So I'm at the point where I've installed minikube and a couple of other pieces, like the Helm charts, and forked the repos, but I got stuck with some of the operations. So I've been looking into the rewrite rule and trying to determine what the options are and what works. And I've got a list of resources and stuff that I found on Stack Overflow and a couple of other places for my own research and information. But yeah, I've got to that point. And then, yeah, we're just looking to sync up at this point, between myself and Mark. And once we're able to take a look at it together and kind of walk through it, then we'll have an idea and plans for next steps and where we need to go. Yeah, so the bad news is Mark has not done his part and probably won't do it for quite a while. Damien, since you're here, go ahead.
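The pause-then-retry workaround Stéphane describes for the Goss checks could be sketched as a small wrapper like the one below. This is a generic sketch, not the actual pipeline code; the pause duration and retry count are hypothetical, and `goss validate` itself has retry options (`--retry-timeout`, `--sleep`) that may cover part of this.

```shell
# Sketch: pause before the first validation attempt, then retry a few
# times, since the checks sometimes only pass on the 2nd or 3rd run.
# The check itself ("goss validate ...") is a caller-supplied command.
retry_with_pause() {
  local pause_secs=$1 max_tries=$2; shift 2
  sleep "$pause_secs"                  # initial pause before first try
  local try=1
  while ! "$@"; do                     # run the check command
    if [ "$try" -ge "$max_tries" ]; then
      echo "validation failed after $try tries" >&2
      return 1
    fi
    try=$((try + 1))
    sleep "$pause_secs"                # pause again between retries
  done
  echo "validation passed on try $try"
}
```

Usage would be something like `retry_with_pause 30 3 goss validate --format documentation` (the flags and timings here are illustrative only).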
Yeah, the good news is that now, I hope, I won't have any more disturbances in my personal schedule. So I should be able to take some time in the upcoming days to help Kevin.

So Damien, I need to go back a little bit since you're here. Let's go back to removing the JCenter and oss.sonatype.org releases repositories. The proposal right now is that we go into production with that on Friday, because the changes we need to make are relatively small and the brownout was quite successful. Are you available Friday, or do I need to go ask Daniel Beck for his help on Friday? I am available Friday. So that's okay, we can proceed. Great, all right. So let's plan for it Friday at, I believe, 1pm UTC, and I will get the announcements out. So let me make myself a note there. Mark: prepare announcements for the 1pm. And we can actually make the additional cache even today: correcting the cache so that it is publicly readable is a safe thing to do immediately. Okay, I just need to double check with Daniel later today, or worst case tomorrow, just in case. Perfect, thank you.

All right, then we can go back to the end of the list. Okay, so the last item is the Scaleway sponsorship, if I'm not mistaken. Yes. Stéphane, I believe you didn't have time... I had time, but I didn't get any feedback. The link for the form is not available anymore; there is no way to subscribe to any open source program. Still waiting for some feedback from them. I know that they are going through a lot of changes internally, so maybe that's why. Okay, I have a proposal: we keep that until next week, last chance, and if they don't answer, or if we have a negative answer, we close the issue and we can change that issue to an OVH sponsorship. Maybe appealing to one of their competitors might help in the future, especially since OVH has a really strong platform as well. Is that okay for you? Yes. Switch to OVH sponsorship. If they still come back to us in one or two months, then no problem.
If they want to give us credits, we won't say no. But I believe there's no need to spend more time on it after what you tried. And yeah, let's try other sponsors.

Okay, just something quick to add about the sponsored subscription on Azure. So I'm working on it, and we should soon see trusted.ci, cert.ci, and ci.jenkins.io having all their agents on that new subscription. I've prepared the work on my local machine, and I'm going to deploy this today and tomorrow, step by step, if I don't see any other problem. Work in progress: I'm switching ACI right now. That's all. Just a tiny update; I'm adding the tasks here and the related updates. Cool, outside of this, that's all for me. Do you have other topics, folks? No? Okay.

If that's okay, five more minutes to check the last issues, because we didn't have time for that last week and we almost missed the repo.jenkins-ci one. Oh, a lot of triage. So you said BOM will be postponed. Chishunter, we took the SSL certificate, we have this. The jenkins.io DNS domain expires; I believe Tyler is in charge of this one, Mark, is that correct? He is, and I think that it will happen just fine, because the last one that I raised for this was handled within the time needed. So you're welcome to assign this one to me, Damien, and I'll take ownership of it, to ensure that it happens promptly when the time comes. There's no reason for it to be in the team's backlog for any particular milestone. I'll keep an eye on it, and if we get to less than 30 days, I'll talk to Tyler. Okay.

We had an issue opened by a CloudBees colleague about IPv6 support for repo.jenkins-ci.org. That's a good catch: right now we don't have a DNS IPv6 record for that. So we have to ask JFrog if they provide public IPv6. I believe they should, and it might already be the case. If that's the case, they only have to give us the IP, or the list of IPs, and we can add the DNS record on our own, and voilà.
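If JFrog does hand over addresses, publishing them is essentially one AAAA record per address. A sketch (checking whether they already answer over IPv6 would be e.g. `dig +short AAAA repo.jenkins-ci.org`; the `2001:db8::` values below are documentation-prefix placeholders, and the 300-second TTL is arbitrary):

```shell
# Sketch: format AAAA zone-file records for a hostname from a list of
# IPv6 addresses (placeholders from the 2001:db8::/32 documentation prefix).
make_aaaa_records() {
  local name=$1; shift
  local ip
  for ip in "$@"; do
    printf '%s. 300 IN AAAA %s\n' "$name" "$ip"
  done
}
```

For example, `make_aaaa_records repo.jenkins-ci.org 2001:db8::1 2001:db8::2` prints two ready-to-paste zone-file lines.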
So since it's a quick one, since it's only one email to JFrog, and since we are currently exchanging a lot of emails with them, I propose to take it and ask them. Is that okay for everyone? Yes. Okay, I'm removing the triage label. What was the next triage item? Suspend the Confluence publisher; I believe that's the one currently part of the milestone, so I just need to remove the label later. And CD release: we have a new issue about a CD release failing. I propose to add it to the milestone, and we'll see, it's day-to-day work. Is that okay for everyone? Okay. Do you have another topic or thing you want to mention? No? Okay, so I propose that we stop sharing, stop recording, and see each other next week. I'm available after stopping the recording if you have private questions. For everyone else, goodbye and see you next week.