Hello everyone, welcome to the Jenkins Infrastructure weekly team meeting. Today is the 27th of June 2023. Around the table we have yours truly, Damien Duportal, Hervé Le Meur, Mark Waite, Stéphane Merle and Kevin Martins. I don't know if Bruno will be able to join; let me comment out his name. And that's all, so let's get started with an announcement.

So the weekly 2.412 is out, at least the core package and Docker image, I assume, and the changelog is out as well. So as far as I can tell, complete. I haven't run the full checklist, but I've confirmed the container image is there; I'm building with it now. Looks great. Efficiency. Nice. So we are ready to roll on the infrastructure side; I didn't see any showstopper whatsoever.

Speaking of announcements: don't forget we will have an LTS release tomorrow, so please don't break the infrastructure. Damien Duportal, please. I'm talking to myself as usual, because of the people who tend to break the infrastructure the most, I think I'm number one there. Do you have other announcements, folks? Nope, I don't either. I did have a question, though. We've got a pending pull request to use Maven 3.9.3 on the Packer images; should we wait a day and not merge it until after the LTS tomorrow? Can we merge it? If the checks are okay, we can merge it without any risk, because it will not be deployed to production; we still need to trigger a full release of the templates. So as soon as possible is okay, but yes, better to wait until at least after the LTS tomorrow before triggering that release and deploying it to the whole system. I had forgotten it's a multi-step process: it's not just declaring the change, it then propagates through multiple steps. But good point, good question, thanks for raising it; it's always better to communicate. Let's wait before deploying Maven 3.9.3.

A note about this, because I discovered that kind of problem painfully: as a reminder, the usual download mirrors for Maven projects only host the latest available version. So as soon as a new version is there, the previous one becomes unavailable. If you want something stable over time, such as a Dockerfile downloading Maven, you should use one of the archives: you go to the Maven download site, you check the older versions, and there you have a set of mirrors that keep the whole history, or at least enough to sustain one or two versions back. Maybe mirror it in the ACP? Thanks for the proposal, Hervé, but no. First, that's a complicated topic: we are fighting against abuse, and that one would be abused; and above all, it would forbid external users from using it. Also, adding it as a transparent mirror for a curl call is quite a problem, because it's not Maven downloading Maven, which is what the ACP was built for, it's curl. That means you would need to intercept the curl calls or declare an HTTP proxy compliant with the proxy protocol, which would require a lot of work. Anyway, that's not a problem: we already solved it one year ago, and it was solved by Alex on different components.

So next week we will have a weekly release as usual, on the 3rd of July, is that correct? And the next LTS is tomorrow. The 4th of July is a holiday in the US. Correct, yeah, it's a bank holiday here, Independence Day. Thanks to our British friends. I guess the question mark is: is Independence Day going to influence the weekly release, or are our European friends covering it?
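Coming back to the Maven mirror note above, a minimal sketch of an archive-based download, assuming a Dockerfile or Packer provisioning script; the version and install paths here are only examples, not the exact ones used in our images:

    # The Apache archive keeps every past Maven release, unlike the rotating
    # download mirrors that only serve the latest version.
    MAVEN_VERSION=3.9.3
    curl -fsSL "https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" \
      -o /tmp/apache-maven.tar.gz
    tar -xzf /tmp/apache-maven.tar.gz -C /opt
    ln -s "/opt/apache-maven-${MAVEN_VERSION}/bin/mvn" /usr/local/bin/mvn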
Yeah, about the Independence Day question: I don't see any reason to change anything. It will release automatically, it will build automatically and will do the changelog; it's an easy thing, no problem. So, no upcoming security release, no upcoming major events. Unless you want to add something about the calendar, we can proceed to the work that was done.

Okay, let's get started. We finally found the root cause of the certificates not being renewed on some of our virtual machines. First, there was a dead symlink: /usr/bin/certbot was a link to snap. So when that nice "certbot renew -q" crontab was executing, it was trying to call "snap renew -q", which of course triggers an error saying: hey, I don't know the command renew. Removing the symlink was not enough, because crontabs have their own PATH, which is different from the user's one. And of course on some of our machines, well, on all of our machines, certbot is installed in /usr/local/bin, which is not on the default PATH of the crontabs. So yeah, even after fixing that, the next error was "certbot: command not found". We fixed that by specifying our own crontab using Puppet. So we are in control: we now write logs of the renewal, so next time we'll be able to see the errors immediately the day after, and we control the absolute path and all the parameters. We tested it, and 24 hours later it had renewed all the certificates, including updates.jenkins.io, archives, and a third one I don't remember, it's in the issue. So it worked on three different machines, and I hope we should be okay with that. I can't say 100 percent, let's see with the monitoring in the future. Is there any question about that topic? Sorry, I messed up that issue. Yep, let me update it. Okay, any question about the SSL certificate renewal for updates? Nope. Thank you for doing it, and congratulations to all those who did not intervene interactively to hide the problem again.

Next issue we were able to close: we received an email notification from Fastly about the apex domain jenkins.io, not the website www.jenkins.io. Fastly was trying to get a certificate for jenkins.io and was failing to do so. We diagnosed the issue and created the missing DNS record, and that was a rabbit hole we were able to close. We ended up managing everything properly: everything running on Fastly is now using a DNS record for the ACME challenge and the proper CNAME record, all of them managed as code in Terraform in an automated way. So we imported, we changed, and now it's done properly, and Fastly was able to successfully renew the certificates. There are a lot of different causes that accumulated during the past 18 months and led to that issue, so no need to go back and spend more time; you have the details on the issue. It's fixed and there should be no problem. If anyone has questions or wants more details, please ask. Okay.

Next issue: we had an outage on the DigitalOcean network, particularly on requests coming from or to the us-east-1 area of AWS, including repo.jenkins-ci.org and our platform, towards the DigitalOcean network in Europe, where we have some of our agents. No builds were harmed, but as a matter of safety we disabled DigitalOcean from ci.jenkins.io for one full day, until the outage was documented on the DigitalOcean status pages. What they documented is in the same area as what our monitoring showed, so we were right to disable it.
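For illustration only, the kind of cron entry we ended up with could look like this; the schedule and log path are assumptions rather than the exact Puppet-managed values:

    # /etc/cron.d/certbot-renew -- illustrative, the real entry is managed by Puppet
    # Use the absolute path (certbot lives in /usr/local/bin, which is not on
    # cron's default PATH) and keep a log of every renewal attempt.
    17 3 * * * root /usr/local/bin/certbot renew -q >> /var/log/certbot-renew.log 2>&1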
No impact from that DigitalOcean outage, because ci.jenkins.io is also able to schedule workloads on AWS, so it was quite transparent for the users of ci.jenkins.io. That was also an opportunity for a Kubernetes cluster upgrade; I will tell you more about this one later.

Finally, the fourth task completely finished: we had a user having permission issues with their plugin. We gave them instructions, and after three weeks we haven't heard back from them, but we did everything we could, everything is green and they should be okay to proceed, so we closed the issue. I hope they will reopen it if they see any more problems. The main problem was permissions on the GitHub repository of the plugin: they didn't have the proper permission everywhere, and the team helped a lot by pointing to the proper resources for the repository permission update. Any questions or remarks on these four tasks? Okay.

We had an interesting amount of account issues, most of the time people who seem to be trying their own Jenkins and end up on our help desk. I'm not sure why. I haven't checked the indexing status of accounts.jenkins.io; could it be that with the new IPs we are now indexed on Google? I think that's worth checking. But there might be a reason why they all end up there, I don't know. Why do we have the repo carouselabs? I looked, it was already done last week. We can't hear you, Hervé; I saw that you were talking, can you repeat, please? Yes: no result on the first page of results for accounts.jenkins.io. Okay, so there is another weird reason here. The carouselabs repository we already discussed last week, as far as I can tell. I added the milestone because I removed the exception in the settings for the artifact caching proxy, which was left over after the resolution of the testing on Maven. Cool, thanks for thinking about that, we had forgotten about this. Is there any other task on this one that you had in the pipe, or is it complete? No, it's complete, that's why I didn't reopen it. We might now need to think about other repositories like this one. Not really like this one, but that's the exception discussion for later; we don't have a problem with this and we can add more exceptions if needed. Let's finish the Artifactory bandwidth first. Thanks, Hervé.

Let's get on to the work in progress. First one, Kubernetes 1.25, because I mentioned it earlier. Since we disabled DigitalOcean for at least 24 or 30 hours, we knew no one was using the DigitalOcean clusters: one hosts the ACP, which is used by the second cluster, which runs the CI agents, so disabling DigitalOcean from CI means no one uses either cluster. It was a great opportunity for us to start the upgrade to Kubernetes 1.25: even if we broke something or something went wrong, that's not a problem. No impact, so a good exercise to get started; I can't say the same for the other clusters in the coming weeks. That was also the opportunity to use the credits that DigitalOcean grants us for using the highly available Kubernetes control plane. A great opportunity to break the service, because switching to HA shuts down the control plane for a few minutes; so we had a planned outage, but no one was using it. By the way, that's a funny one: the credits for using HA are an incentive they give to users. Stéphane and I discovered that it is because it forces users to migrate the control plane to the new network architecture, which means they have tons of users using the low-cost, single, non-HA control plane on the old network, and they want to get rid of that old network.
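As an aside on that DigitalOcean upgrade, a rough sketch of how such an upgrade can be driven from the CLI; the cluster name and version slug below are placeholders, not our actual setup, which is managed as code:

    # List the Kubernetes versions this managed cluster can be upgraded to
    doctl kubernetes cluster get-upgrades <cluster-name>
    # Trigger the upgrade to a 1.25 DigitalOcean version slug (example value)
    doctl kubernetes cluster upgrade <cluster-name> --version 1.25.8-do.0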
So the incentive is that they give credits for people to use the new HA system; that's a nice incentive for migrating legacy networks. Yeah, and to accept the downtime one more time. Yes, true, exactly: it's easier to accept the downtime because afterwards you won't have any more, thanks to HA. So everything went fine on the upgrade.

That means the next step I'm proposing, and it's challengeable as usual, will be the EKS clusters. That one might have impact: we need to announce it to ci.jenkins.io users, because during the upgrade of this cluster we won't be able to run any BOM builds. The other builds will run on DigitalOcean, which is the opposite of what we did last week, but no BOM builds. That's not a problem. Not a problem: we commonly only run full-scale BOM builds when we're evaluating a pull request. So not a problem, we just have to announce it. Okay, needs an announcement. Is there any question on the Kubernetes 1.25 topic? Right now it's hard to evaluate the downtime and our availability, so I don't know. Stéphane, you are off starting Friday, correct? So definitely Thursday. Okay, you will be working Thursday? Yes. Okay. But anyway, you won't be upgrading Kubernetes 1.25 while on the road, I guess; I can have you on the phone in the car going to Paris, but that will not be very useful. Hervé, I know you are off Fridays, is that correct? I will be off Monday. When are you back, Hervé? Friday to Monday, and I'd like to ask for Tuesday too. Okay. So I guess either I do it alone Friday morning or we move it to next week; that will be the risk. So I'm on for Friday morning, and I will ask you, Hervé, for a review of my plan on Thursday, if that's okay for you. Does it sound good, Mark, given we have the LTS tomorrow that might generate BOM builds afterwards? It might, but it's quite harmless, I'd say: the BOM builds it will generate will be very low impact, they won't be the full test suite. The ones that would hit hard are the full tests, and there we just tell people we'll be offline while we do this upgrade. We need to do these upgrades. I'll be able to review it on Friday, on the road, during a pause or something.

Also, this topic is a little bit less hard now, since DigitalOcean was the first one to switch to 1.25 and the others come a little bit later. But we also have AKS, and AKS is required in order to finish the upgrade from Ubuntu Bionic to Jammy: we need Kubernetes 1.25 because that unlocks workers using the new Ubuntu, and that's why I started working on this. And the thing is that we need to migrate DigitalOcean and AWS first, before Azure, because they are less prone to problems and it goes in increasing order of complexity. That's the reason why I prioritized it this way. Otherwise you are correct: if we didn't have the AKS with Ubuntu 22 challenge, I would have delayed that one by a few weeks. Good point, really good point. Is there any question? Nope. Okay.

What do we have next? Proposals. So, cluster migration, your turn; that is the most important one for now. Can you share the status for this one before closing it? Yes. So I've done a lot of cleanup and follow-up after the migration of every service. Almost all of them are done; we have some additional tasks which can be done later. We've deleted almost everything on the prodpublick8s cluster. One workload is still running on it; Damien made an announcement about it last week. And we've planned to destroy this cluster today, at the end of the day.
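As an aside on that EKS step, a rough sketch of what the upgrade involves, control plane first, then node groups; the cluster and node group names are placeholders, since the real clusters are managed as code:

    # Upgrade the EKS control plane one minor version
    eksctl upgrade cluster --name <cluster-name> --version 1.25 --approve
    # Then roll each managed node group to the matching Kubernetes version
    eksctl upgrade nodegroup --cluster <cluster-name> --name <nodegroup-name> --kubernetes-version 1.25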
Almost three years of uptime for this cluster. Deletion to be done today. So based on that, if we delete it today, or even if we are late and do it tomorrow, that task will be closable for the upcoming milestone, is that correct? I would have wanted to do it today; I'm sorry for that, Hervé, but yeah, that was a bit short. So congrats, that's a huge win.

Which gives me the perfect transition to the next issue, which, after the meeting, I will most probably merge into an existing one. The next major step we have with that new cluster is to ensure that IPv6 is enabled for get.jenkins.io. Two reasons for that. First, we don't have the network overlap anymore and everything has been set up, so it's better to do it as soon as possible; even with that we might have missed an element, but we can fix it. And a new issue has been opened, so thank you, and thanks, Padek. That came from a user email: since we have migrated get.jenkins.io to the new cluster, and since we have IPv6 records, RFC-compliant clients such as curl or some Java clients try IPv6 first, but it doesn't answer. So depending on how their proxies are configured, some users see the service as broken. And they've had a lot of delay on the fallback too. Yes, absolutely. So we need to fix that IPv6 problem. I checked a bit; we don't have a lot of elements to solve that issue at first sight. So that's why I propose we work on this one as soon as possible, once we have destroyed the old cluster. Is that okay for everyone? So the priority is solving the problem with get.jenkins.io. We have other services that could benefit from running with public IPv6, such as accounts.jenkins.io or the other public services, but the priority here is get.jenkins.io, to solve the issue that has been described by Hervé. Is that okay? It's absolutely challengeable, but I propose that we scope to this one, maybe keeping that issue open, and then we will follow up on the existing issue about IPv6 everywhere. Yes. As a reminder, IPv6 has been the default outbound setup in India since September 2022, which means with this one we will help a lot of our friendly Indian colleagues. Any other question on IPv6? Okay. Yeah, in that case, that's the person from Deutsche Telekom. But yeah, makes sense, that's their specialty.

Artifact caching proxy is unreliable. So yesterday I was able to finish, after some fixups, and fixups of fixups, and fixups of fixups of fixups: now the Azure virtual machine agents of ci.jenkins.io are running on the same network as the Azure ACP. They changed region and they changed the way they communicate, and they have way less latency because they are two subnets of the same virtual network now; there is no cross-virtual-network exchange. So we are ready to retry enabling the ACP on the ATH. I'm not sure, Mark, if we should bother Basil or start again from the last work he did, because the pull request where he tried has been merged without the ACP enabled for most of the ATH builds: only the first quick step that generates a JAR file uses it today, the big builds do not. So that's worth trying again, and if it works without issue, we can close the issue. Otherwise we can start diagnosing why it is failing again, but most of the issues were TCP connection errors, which is typical of the network overlap, and we don't have the network overlap anymore.
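On the get.jenkins.io point above, a quick manual check that reproduces what those RFC-compliant clients see; the hostname is the real one, the rest is just an illustration:

    # The AAAA record is published...
    dig +short AAAA get.jenkins.io
    # ...but forcing IPv6 shows whether the service actually answers on it
    curl -6 -sS -o /dev/null -w '%{http_code}\n' https://get.jenkins.io/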
So, for the ACP on the ATH, I'd say let's explore it ourselves: we can submit the pull request to propose it and watch how it behaves, now that the agents are on the same network as the ACP. Let's evaluate it ourselves first. Is there any question on that topic?

Just a note on this one: after a bad copy and paste, I published the Datadog key in clear text. I realized it and rotated the key less than two minutes later, and I didn't see any metrics sent with that key during those two minutes. That key was only used by ci.jenkins.io agents. It appeared that some of us have GitGuardian or a similar service that detected the key, which is a really good thing. I'm sorry about it, because I didn't communicate to the whole team on this one, so I will try to do better next time. Yeah, sorry. But the key has been rotated, so no worries; the worst-case risk was that someone could send metrics to Datadog. And I'm so pleased to see that you can make mistakes too. I'm not a machine. Thank you. That's what a machine would say. Sorry.

On that topic, ci.jenkins.io: I now have everything ready to prepare the migration of ci.jenkins.io to a new virtual machine, in a new network, with a new generation of CPU, etc. So I'm working on bootstrapping that virtual machine again now, and the next step will be migrating the data filesystem and planning a date for that, most probably not this week. So I assume next week, Tuesday, eventually Wednesday; but I need to work on that topic for that milestone. Is there any question on that new ci.jenkins.io machine? Okay.

The Jenkins mirrors are missing some necessary metadata files. Important topic here: a user complained that it's really hard to use the project behind Fastly in China. There is no Fastly node inside the Chinese network, which affects the jenkins.io websites, the plugins website, but also pkg.jenkins.io. And that's the problem here: pkg.jenkins.io is the official way to install Jenkins packages on Red Hat or Debian, so that's quite the issue for them. They realized that we have the mirrors and they tried the mirrors, but the mirrors are missing some indexes, because the package index is expected to be at pkg.jenkins.io. So we have a problem that we will have to solve here, which is close to another issue that we had a few months ago and weren't able to solve. The index proposes to download files from get.jenkins.io/something/*.deb, *.rpm, whatever, which redirects to a mirror; but some of the mirrors don't keep unlimited history, so most of the time it's not a problem, except when someone wants to mirror everything from the beginning, and in that case they start running into issues. So we might have to think about pkg.jenkins.io availability for Chinese users. I don't have an easy solution right now; it's a complex topic. But the good thing is that discussing with that user helped us discover that it looks like we already have mirrors available in China that we didn't know about: the user mentioned them on the issue. So at least the short-term takeaway will be adding these other China mirrors. For me, that's really good news. But yeah, there is the whole problem of a possible man-in-the-middle attack: how do we manage all of this? That might be complicated. The problem is, yes, we have a mirror in India, but there is the Great Firewall; it's a matter of network latency more than location. But that's the default, and even with a mirror already inside China, it's hard for our Chinese users to download.
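To illustrate the missing-metadata point above, a rough check, assuming the flat Debian repository layout that pkg.jenkins.io serves; the exact mirror paths may differ:

    # The official apt source points at pkg.jenkins.io, which serves the indexes:
    #   deb https://pkg.jenkins.io/debian-stable binary/
    curl -fsI https://pkg.jenkins.io/debian-stable/binary/Packages.gz
    # The download mirrors carry the .deb/.rpm artifacts themselves, but not these
    # apt/yum index files, so pointing a package manager at a mirror fails at the
    # metadata step.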
Mark, I think it's worth bringing that topic outside the Jenkins infra team, especially in the context of us working on removing the Chinese Jenkins website. We might need to start searching for a solution equivalent to Fastly that could allow us to serve local files, or a local CDN, in mainland China. Is it worth it? Do we have users? But given that the user here is taking the time, and it's not the first time, I guess we have to raise that question more broadly, yeah, outside the borders of the Jenkins infra team. Agreed: if we're going to a different form of mirroring, that needs a broader discussion than just infra, absolutely. Hervé, do you happen to know if Cloudflare has CDN nodes in China? I don't know, but I'm on a page saying that they have one, the Cloudflare China network. So it could be interesting to have both Fastly and Cloudflare in that discussion outside the Jenkins infra team. So if it's okay, I'll comment asking: does it make sense to ask the user to help us bring that subject forward? Because we need a kind of sponsor who is able to drive it from China. They won't be left alone, we will work with them, but we need someone there to help us. As far as I remember, this question was raised a few years ago; we started with a contributor, but then the contributor stopped being there, so it was hard to have a sponsor locally. Is this a place where Cloudflare is also a candidate, at least for consideration as one alternative? Alternative or complementary. Complementary, okay, got it. Both are okay; it's just a few more lines when we deploy, but that can be an idea. Also, we have the same problem with the update center, but again, that's a topic I don't want to spoil right now. Okay, thanks. Thanks, Stéphane. Also, it's worth contacting Fastly, if it's okay for you. I don't mind contacting them and asking: we have that problem, we want to serve content in China, are you open to ideas, a POC? Because that's important, that's a legit use case, and we're not alone with it. Any other question here? Okay. We'll comment on the issue. That issue will go to the backlog because I don't see any direct action for us; alas, we cannot help the user as is, unless someone has an idea, and in that case I'm happy to help. No? Okay.

Okay, just to avoid spoiling too much, the next topic I propose is the Datadog integration on ci.jenkins.io to observe the instance and the builds. Hervé, can you give us a heads-up on that topic? So, the issue of communication between the Datadog agent and the CI controller running inside the Docker container has been resolved. From that, the plugin has been configured and is working correctly. But now we have two sets of logs: the one we previously had, which probably comes from the container logs collector, while the new one is collected through the Datadog agent. One is correctly recognized as a Jenkins integration and has everything, every event, every tag; the other has nothing. I've disabled my custom parser, and currently those logs appear as errors, and they don't have any global tags on them. I tried to add a global tag in the plugin configuration, saying ci.jenkins.io, and when the data came back I saw the tag in Datadog.
But anyway, since the other source does not add any tag, I think we have to disable that collection. I don't know if there are other containers collected on this instance. I don't think so; no, we don't have others. So it makes sense to disable it on the instance. That's okay, I'll do that. And I've added some points to the issue, like the infrastructure integration, and a last point, but I think I won't need it for closing this issue. Okay. I'm not sure I understand what the infrastructure integration is; let me open the issue quickly to see what I meant. Okay. So, is it okay if during the upcoming days you try to describe what you want to do with this one? You are having network issues, right? Am I the only one? No, it's okay. Hervé, can you hear us? Okay, no problem. So then the next step for you will be to send a message. Okay, cool, thanks for this one.

There is a nice CI Visibility section on Datadog that is now connected to ci.jenkins.io and shows traces. So we can now, through Datadog, see for each build of ci.jenkins.io how much time we spent on the checkout, on the sh steps, on allocating agents; there are pipeline errors, durations, a lot of information already available now. So for everyone, if you want to diagnose BOM build slowness, that's the next step; it works very well. I don't know if we can open it publicly, or we might need to create a restricted account, but that could be useful for everyone. Now we have full observability thanks to that work, so thanks for that, it's really useful. I know how to expose a dashboard publicly, but I don't know if we can restrict access to it. If we have a dashboard with only ci.jenkins.io, that's okay; we just have to be careful not to expose infra.ci.jenkins.io publicly. Just about these logs: we also have the logs from cert.ci and trusted.ci, but they are ingested like the previous ones, as container logs. And those are container logs, not build logs, which is very different: you don't have credentials in the Jenkins container logs, but you might have credentials in the build logs. The issue with them is that they aren't recognized as Jenkins integration logs. Absolutely, and in order to do that, it's a Puppet setup of the Datadog agent that collects the logs; you have to do it that way. That's the alternative to the plugin, as far as I can tell. Is there any question on that topic? So I consider we'll work on this next milestone.

Matomo. To get started, I've tried to draft a plan and a summary of what is needed to host our own Matomo instance, based on all the information Gavin shared. So the next step will be to start the work. The first step will be ensuring that the Docker image, customized to avoid persistent storage and to bake some plugins and setup into the image, is working and published. Then we will have to set up the MySQL instance in Terraform and the Helm chart, and we should be able to have a first installation. So we only have technical tasks to do now, unless I missed something. I propose we keep that issue on the upcoming milestone and proceed by tiny steps. I don't think we will be able to finish it for this milestone, but at least we can get started on real tasks. Any objection on this one? Okay, let's go: MySQL instance plus Docker image. Yeah, in fact Gavin already gave us a lot of information, so we don't have a lot to think about and more to act on. Thanks, Gavin, for that.
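A minimal sketch of that Matomo installation step, assuming the Bitnami Matomo chart with the bundled database disabled in favour of the Terraform-managed MySQL; the chart, namespace, and hostname here are assumptions, not the final setup:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    # Disable the bundled MariaDB and point the chart at the managed MySQL instance
    helm install matomo bitnami/matomo \
      --namespace matomo --create-namespace \
      --set mariadb.enabled=false \
      --set externalDatabase.host=matomo-mysql.example.internal \
      --set externalDatabase.user=matomo \
      --set externalDatabase.database=matomo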
Stéphane, can you give us a heads-up on the ARM64 migration on the public cluster? What's the status? Did we lose him? Oh, just a message from Stéphane: he lost his internet connection. Okay, I'm moving on in the task list; my turn to share.

Ubuntu 22 upgrade campaign. We did upgrade the old VPN that allows us to access cert.ci and ci.jenkins.io, so the VPN is done, and the CPU Z machine as well. On the VPN we had issues that have been solved. The summary is: be careful, sometimes default settings change between distributions. The default FORWARD policy for iptables was ACCEPT on Bionic, DROP on Focal, and is again ACCEPT on Jammy. And of course they have a nice migration assistant that says: hey, if you have a setup, I will keep it. So once it had been set to DROP during the first upgrade step, it was kept as DROP, even though the migration assistant was saying "no change". Yeah, it took me quite some time to understand why the VPN was working but I wasn't able to reach ci.jenkins.io or cert.ci.jenkins.io from the VPN network. Fixed and documented. So yeah, two hours of VPN outage, sorry for this one. The next steps are ci.jenkins.io, which will be the virtual machine migration, and AKS: so we have the ci.jenkins.io VM migration and AKS Kubernetes 1.25. And the last one will be updates.jenkins.io; that's the most complicated, and I don't have a plan right now.

About that, Hervé, you had a nice proposal that we haven't mentioned yet, and it's worth mentioning because it could be the solution for the Ubuntu upgrade of updates.jenkins.io, at least for the part in charge of the update center, not pkg. Are you okay to share your idea? Yes. Since Cloudflare R2, their S3-like object storage product, has zero egress fees, I think we could try to use it to distribute the update center files, which consume a lot of bandwidth with everyone. In detail, we seem to have what we need to get S3-compliant storage with it. They also propose an open source sponsorship, so it could be nice to submit an application for that. I took a look at the existing issue and setup, and I have some points to discuss. For example, if I understood correctly, we have updates.jenkins.io serving updates over HTTPS, but we also have updates.jenkins-ci.org, which serves these updates without any TLS, for Java 8 era clients. So I was wondering if we have to keep that and if it's still the case today. Yeah, and the answer to that kind of question is: let's talk to Daniel Beck. He has forgotten more about the update center and its complications than I will ever know. Absolutely, and it's worth having that topic in parallel, because if the answer is that we don't have to maintain the no-TLS endpoint for whatever old version, since we now require JDK 11, then we communicate once with a blog post or to the community, and we add a redirection: updates.jenkins-ci.org becomes a permanent HTTP redirect to updates.jenkins.io over HTTPS, like we do for the jenkins.io website. That would help. But good point, I had forgotten about this one, so nice catch. And the other part of the study is that we have to check on trusted what exactly the pipeline is running, since as I understood it, those jobs aren't stored as code. And until today I didn't have a proper installation and configuration on my new machine, so I couldn't access CI or trusted. Yep. So if you need it, because the non-infra-as-code jobs on trusted.ci are in fact just a bit of glue: there are just a few environment variables and it runs a shell script.
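For reference on that FORWARD policy gotcha, the kind of manual check and fix involved; illustrative commands only, the actual change lives in Puppet and the documented runbook:

    # Show the chain's default policy (e.g. "-P FORWARD DROP" after the Focal step)
    iptables -S FORWARD | head -1
    # Restore the permissive default that the VPN routing relied on
    iptables -P FORWARD ACCEPT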
And that shell script is available on GitHub, so if you need it, I can take screenshots of the rest. Yep. I saw the generator and publish parts, but if it's only glue, I'll see when I look at it: the update center updates. But the idea is that AWS S3 commands would be added to that shell script. At first we can absolutely start with the R2 object store in parallel: the command would rsync the generated JSON to the current updates VM and also copy it with "aws s3 cp" to the new one, and then we can start testing on that new system. As I was saying, the zero egress fees are really appealing for us, but it also means high availability for the update center: no need to maintain a virtual machine with an Apache server, and we get Cloudflare as a CDN network everywhere, including China. So that could be a double win here. It's really a good idea, and even if we have to pay, because we'll have to pay a bit, the costs for the update center use case are fine. That's a really nice catch by Hervé, because we don't have a lot of storage; it's expensive if you have a lot of storage, for instance pkg.jenkins.io with gigabytes of packages. I don't know, but at first sight it's more expensive than AWS S3. In the case of the update center, that will be, worst case, 40 bucks per month, compared to the six to seven thousand per month of bandwidth, not mentioning how much we spend on the virtual machine and the storage. So clearly that will be a win. Good catch. There is an issue about the update center migration to another cloud; if it's okay, we will add this one to the milestone and I will add the link there, because the topic is only the update center part of the virtual machine. We will sync on that part.

The second part, though, is pkg, and for that one there is still some work to do for the Ubuntu upgrade that will require help, I think, from core maintainers. Yeah. Isn't pkg the one that builds the installers for us? That's the one that has a strong coupling right now to the Jenkins core installer, and the change from Ubuntu 18 to 22 is a dramatic change in that area. My understanding is that the Ubuntu 22 support has been done, so we can use it, but it has never been exercised in real life. So we need a way to test it without triggering a release; that will be safer for everyone. I think that's the main thing, because maybe it could just work out of the box. If it's okay, I propose we finish the Kubernetes and IPv6 work, and then with Hervé, whenever we are able to start working on that part, that's a subject I'm volunteering to take, unless someone else is interested. Good for you.

So, Stéphane was back and his internet is dead again, so I don't know the status of this one; but given he will be off, we should either hand this one over, if he has a direct item that we can do, most of the time that may be migrating services, or otherwise it will go back to the backlog. Okay for you? Yes. Backlog, or handover from Stéphane.

Also, that one is on the milestone, I added it so we can mention it. That was related to the need to remove the IP restriction to access trusted.ci through the SSH bastion, the bounce machine. The work here needs to be outlined, because we need to plan what has to be done in order to use the VPN as a way to restrict access to the bastion, or find another solution. I propose this one goes back to the backlog for now, because it involves a lot of network topics and we have to finish the public network cleanups and IPv6 before going at this one.
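Since R2 exposes an S3-compatible API, that extra publishing step could look roughly like this; the bucket name and endpoint are placeholders, and credentials would come from the existing credentials store:

    # Keep the current rsync to the updates VM and, in parallel, copy the generated
    # update center files to the R2 bucket through its S3-compatible endpoint.
    aws s3 sync ./generated/ s3://updates-jenkins-io/ \
      --endpoint-url "https://<account-id>.r2.cloudflarestorage.com"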
Most probably, that bastion topic will come back to us really quickly, because it might need to be done for cert.ci. Today, cert.ci is inside the subnet of the old overlapping network and is available only through the legacy VPN. So we will have to migrate that instance to a new network, and we might or might not need, that has to be discussed with security in a threat model session, to create a dedicated virtual network for cert.ci, which would be the same pattern as trusted, meaning we need the VPN to reach these separated networks. Or can we move cert.ci to a subnet of the private network, like infra.ci or release.ci, or do we need another solution? Most probably the pattern will be the same as trusted, eventually in the same network as trusted. But in any case, we will need to connect the VPN and create a network peering, and that requires some work and planning. So that's why I propose back to the backlog for now; we don't have a blocker here. Is that okay for everyone? Yes.

Last item for today is the Artifactory bandwidth reduction effort. So Mark, can you remind us, just to help my brain focus on that topic and context-switch, what's the status for this one? So we've got a pull request proposal and the beginning of a discussion about the changes to the plugin POM and the Jenkins core POM, to switch to generating an incremental that we can use to test-drive the switch from publishing everything to publishing only the components we build and deliver, and there is lots of work still to do there. We have to prepare the parent POM builds. Oh, okay, now my brain is back in it. Yeah, so we know we need at least two, and we expect lots of conversation about this, and there are a number of places where, hey, the infra team, including me, are not experts in Maven configuration and consumption. So we'll look for coaching and help from core maintainers and others. Thanks, Mark. So on our side, on the infrastructure as administrators, I would need to create an alternative repository tree with one or multiple private repositories, and that alternative tree should be used as the target by the parent POMs. The goal is to apply the proposal we made. I've started the document that Mark and Hervé have started to review; I want to review it a bit more with a small core of people, including maintainers, and as soon as we start to have a consensus on this one, I will take care of adding it to the issue so we can make it public. We also need validation from JFrog. Right, yeah, to review before adding it to the issue. So I'm keeping this one on the current milestone. Do you have other topics while I'm looking for the triage issues? I do not, and I think, other than the complication of the bandwidth reduction project, that's the big one and it's going to be a lot more work. Okay.

I don't remember which one we discussed on Monday when reviewing the cleanup of the public cluster. It was IPv6. You're lucky, a user did it for you. Victory. Is it bad or is it good? I don't know. It's good for you, it's more work for us, but yeah. A good point about Windows Server 2022: I saw questions in different locations, public and private, about Windows Server 2022 support for the images. So everything is ready; we are just waiting for someone to spend time updating the existing pull request on the agents' Docker images to start using the new Windows 2022 images. We need someone to spend time on that, that's what is missing. But we have both Windows 2019 and 2022 available, with Docker CE and everything ready.
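For anyone picking up the Windows 2022 work, consuming the new agents from a pipeline is essentially a label change; this is a sketch only, and the label name below is illustrative, the exact ones are in the agents documentation:

    pipeline {
      // Illustrative label; use the documented label for the Windows Server 2022 agents
      agent { label 'windows-2022' }
      stages {
        stage('smoke') {
          steps {
            // Docker CE is preinstalled on both the 2019 and 2022 images
            bat 'docker version'
          }
        }
      }
    }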
They have the same content, so they can be used, we provide the labels, and they are documented. So it should be just a matter of spending time on shaping the pipeline; anyone interested is welcome. I haven't had that time and I don't think I will for now, but if it helps users, they can contribute there. Let me add this one to the notes.

New issues. I can't remember if we already had one; I didn't find it. So we had one, and for the Jenkins infra it's closed, because we did our work and the agents are available and documented. Now someone has to use them, which is out of the scope of the Jenkins infra. But that's a good question to raise, because no one has worked on this one yet and it's ready to be started. But nothing to do on infra, except support. So there is the pull request from the user from March. Exactly, but that pull request was done before we provided the images, as far as I remember, unless I missed another one. But yeah, if there are infrastructure issues, of course they can open a help desk ticket and we will support them. Oh no, it was not from this March, it was from March last year, okay. Thanks for checking and sharing that concern.

Oh, Stéphane, just in time. Can you hear us? Can you hear me? Yes. Okay, so can you describe for us the status of the ARM64 migration of some of the services on the public cluster? Yes, I added two lines in the list about what is not moving for now. I have some questions about, let me check, cert-manager: there are three components, as you can see, cert-manager, cainjector and webhook. I think they are ARM-compatible, but I would like to double-check with someone. I think we can start kind of easily with the migration of reports, because it's just nginx and it's already compatible with ARM64. But in fact the first step is to scale down the javadoc service, which is already ARM64-compatible, so it can be rescheduled, and to change the settings of the node pool, because we need to change the taint, if I remember the word correctly, and then go back to the migration again. Okay, thanks for the reminder. True, this looks good; so is it okay for you to take this as your mission before the holidays, in the two days that are left? If you give me a few minutes of your time to kind of finish, not really finish, but pause the parallelism work on the Packer images, so that my brain is ready to move to that task, that would be awesome. Yep, but that's just for my brain. Yep, we'll do a bit later today and a bit tomorrow then. Perfect. Sorry to have missed most of the meeting. There is a recording and notes, so no worry; thanks for coming back.

I think that's all for me, we don't have any more triage issues. I don't see messages from you about new topics; do you have other topics to bring, or can we close? I think we can close. One, two, three, we can close. So for everyone watching the recording, see you next week.