Hello everyone, welcome to the Jenkins infrastructure team SIG weekly meeting. First of all, happy new year. At least for me it's the first time this year; I know that some of you already saw each other last week. I'm happy to start a new year and I wish everyone the best for the upcoming months. We are the 10th of January, 2023. Today, around the table, we have myself, Damien Duportal, Hervé Le Meur, Mark Waite, Stéphane Merle, Bruno Verachten, and Kevin Martins. We are six. Anything to say before we get started? Nope, okay. Let's start with the announcements. First of all, the release 2.386, if I'm correct; I haven't checked the status of the weekly release. It's done, but we're waiting for the Docker image. Oh, cool. Okay, so it's done: packages published, next step the Docker image, and last, checking the remaining items. Anything else about the weekly release? It includes an important change, but you don't have to put anything in the notes. There's a valuable change in there for users running the WMI Windows Agents plugin: they'll now be able to uninstall it without having to remove their other ancient plugins. Nice one. Thanks for that info and for the hidden work behind it, Mark. Do you have other announcements? None from me. None here. Just a big thanks to Stéphane and Mark for running the weekly meeting last week, by the way. I'll take care of publishing the notes and the recording for this one later today or tomorrow. Let's have a look at the upcoming calendar. Next week we will have the next weekly release, like every week: that will be 2.387, on Tuesday, 17th of January. Important note to everyone: tomorrow is LTS release day, so the Jenkins LTS version 2.375.2 will be released and published tomorrow. I understand that Kris Stern is the release lead this time. Our infrastructure on-call duty persons to help Kris during the release will be Hervé Le Meur and Stéphane Merle.
Both of them will be there to support Kris if anything happens. We should not have an issue, but we never know. I assume it will start early in the day, since Kris is based in Asia. Correct. And Alex Brandes has also agreed to assist. Alex actually has permissions to launch the build, and he confirmed in the governance meeting on Monday that he's willing to launch it as well, so that should make it simple for the infra team. If Alex launches the build, he tends to launch it relatively early in the day. By all means check with him to see if he needs help, and if by nine or ten your time you have not seen a build start, it's worth asking, because you're right, Kris is in Asia, so his time zone is quite different from mine, for instance. Cool, good to know. I won't really be available on my side tomorrow except asynchronously, so that will be a nice test for everyone; but in any case call me and I should be able to park somewhere and open the computer. Next security release: it was announced on the mailing list, and I've added the link. The two next major events that I know of will be FOSDEM, 4 and 5 February in Brussels, and Devoxx France, 12 to 14 April 2023 in Paris. Well, yes, 12 to 14, thanks. I know that at least one, eventually two persons in this room will submit a Birds of a Feather, which is a kind of informal discussion on a topic that usually lasts one hour, at the end of the day, I think the 13th or 14th. It's on Thursday, yeah, Thursday night. It's a kind of gathering of people; it's not, let's say, a formal presentation, it's more a round table. One will be around mobile. Is that correct, Bruno? Yeah, mobile and Jenkins. The other one will be about building for embedded hardware and/or building on exotic hardware with Jenkins. Nice. And Hervé is working on a simple Jenkins Birds of a Feather, so that will be a discussion open to anyone interested.
I will assist Hervé on submitting that topic; Arnaud Héritier confirmed that we can submit one. Any major event or calendar elements that you have in mind, folks? No? Okay, let's proceed. A few issues were done during the past week; nice job on these, especially since we were still on holidays or coming back. First of all, general availability of Windows Server 2022, so thanks Stéphane for the work involved here. That allows developers on ci.jenkins.io to use a new label, windows-2022, if they want to build on such a machine or use specific containers tested manually by Stéphane. We have pinged the person who requested such an image, I think it was for the Jenkins controller Docker image, to see if they want to continue, or any maintainer can take care of it; but on the infrastructure side, it's closed. We are going to improve the support of Windows 2022 with the next Packer image version, but we already have some agents available on ci.jenkins.io, so we consider that issue closed. Any question? We will add agents on AWS too, but for now it's only on Azure, yeah. Thanks for the detail, good point. Duplicate profile: I don't know which issue number it was. Okay, so thanks, Hervé and Mark, I remember: someone had an issue with their accounts, so thanks for solving it. Is there something to say on this one, Mark or Hervé? I believe we have one more step, or at least I haven't deleted the second account the user created, and I think that would probably be a healthy thing to do unless someone objects. You're right. We can't combine the profiles, but as far as I can tell, the user's original profile had one or two issues reported on issues.jenkins.io, and the new profile seemed to have no activity there. So I don't think that by deleting the new account we would lose any content for the user. And they are not a plugin maintainer, so they're not uploading anything to repo.jenkins-ci.org.
Okay, are you at ease with doing that operation, Mark? I am; if others don't mind, I'll just do it right now. Okay, so I'm adding it to the next milestone and... no, sorry, let's keep it on the current milestone, it should be closed. Can I let you close that issue once it's finished? Yes. Cool, thanks. Okay, next one: renew the DigitalOcean Personal Access Token. Thanks everyone, Stéphane, for managing this one. To summarize: we have a technical Personal Access Token tied to a technical account on DigitalOcean, because they don't have the concept of an application like a GitHub App. Since we need to generate this token, we need a technical account, one which doesn't have access to billing or administrative areas. That token is renewed regularly; it expired last Monday. We had an alerting system that worked very well, so we were able to renew it, and it was documented on the issue. The only problem we have here is that I'm the only one able to log into that account, because the MFA is on my app and we cannot add multiple MFA devices like we would on GitHub. So I'm not really sure which road to take from there: removing the MFA, or eventually asking DigitalOcean if we could use an email-based code, a bit like what Zoom is doing, where each time someone logs in, a numeric code is sent to the email; that would also count as MFA. So if it's okay for you, I will ask DigitalOcean support, explaining our use case, because maybe they have something to solve that problem. Right now I've added the password of the account to 1Password, a good request from Hervé, but I'm still the only one holding the MFA. I've documented on the issue, and on 1Password as well, the fact that I'm the only one. Any question? Archive the Jira component: I assume someone with Jira admin access was able to archive this component. Yes, Tim did it, thanks Tim.
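Since the MFA bottleneck makes token rotation a one-person job, a small health check would at least let anyone on the team verify the current token before and after a renewal. This is an illustrative sketch, not part of the runbook: it uses the real `GET /v2/account` endpoint of the DigitalOcean API, while the `days_until_expiry` helper and the idea of recording the expiry date next to the 1Password entry are hypothetical conventions.

```python
import datetime as dt
import urllib.error
import urllib.request

DO_ACCOUNT_URL = "https://api.digitalocean.com/v2/account"


def token_is_valid(token):
    """True when the DigitalOcean API accepts the Personal Access Token."""
    req = urllib.request.Request(
        DO_ACCOUNT_URL, headers={"Authorization": f"Bearer {token}"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 401 response means the token is revoked or expired.
        return False


def days_until_expiry(expiry_iso, today=None):
    """Days left before the expiry date recorded at token creation time
    (negative means the token is already expired)."""
    today = today or dt.date.today()
    return (dt.date.fromisoformat(expiry_iso) - today).days
```

A scheduled job calling `days_until_expiry` against the recorded date would give the same early warning the alerting system provided, without anyone needing to log in through the MFA-protected UI.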
Okay, over to Hervé: are you the person who added the "closed as not planned" status? Yes, I added it to get the action of generating the meeting notes; with this category set on the issue, we can just list them and move on. Yep, good to check that we don't have false negatives there. Good point, thanks. Okay, so account issues as usual. Okay, I'm going to switch to the work in progress: for each issue we have to decide if we are able to work on it during the next iteration or if we should move it back to the backlog. Don't forget that we will have new items. First one: account.jenkins.io admin access for Stéphane Merle. Thanks for opening the issue, Stéphane. I haven't checked yet, but the goal is to track the fact that if we need to add you to the Jenkins admins and grant you way more powers, it has to be validated by a collegial discussion. So I will take care of this one. I just want to check how we can administrate both Keycloak and account.jenkins.io; it sounds like it's not the same mechanism, so that's interesting to check. In any case, I'm okay with giving you these accesses. It sounds like Mark and Tim validated, so I will ask a few more members, but yeah, that's the idea. Yes, to me at least it seems obvious that if you can access Keycloak, you clearly should have enough permissions to also be an account administrator on account.jenkins.io. Keycloak is behind the firewall, right? If I remember right, it's inside the VPN and therefore already privileged. Yes; the thing is, if my memory is not betraying me, it's the opposite problem: in order to be recognized as an administrator on account.jenkins.io, you must be in some specific LDAP groups, and these groups give you other powers. Okay, so your concern is... that's very sensible. And I don't have any concern about Stéphane being given these powers; I just want us to audit these powers so it's clear to everyone which kind of powers it gives to Stéphane.
And once everyone acknowledges, including Stéphane, then we can add him. I might be wrong. That's great. I took a look at the runbook entry for accounts to check the groups, and I saw the table listing all of them with some notes. We might want to do something about it, since for a lot of them I'm not sure. I don't know. So yeah. Okay, so I will look at the runbooks and check the pattern. Cool. Any question? No, we're good. Okay, next issue: old inbound agent images published as latest. So yeah, we still have that recurring issue on trusted.ci.jenkins.io where some of the jobs, but not all, are not behaving as expected. It's a weird bug that we can't understand: the jobs are configured to ignore tags that are older than three or six days, and yet they build tags that are two years old, and those builds keep overriding the latest tag and the images on Docker Hub. I took the time earlier today to explain the battle plan I have in mind. The problem was fixed by the new release of the images we did yesterday, but the next time there is a release, a new tag is built and latest is overridden again. I would say most of the time someone using the latest tag is really living on the edge; let's say friends don't let friends use latest. However, it's still a problem. So my proposal is to first archive the current jobs that we have on trusted.ci: back them up directly on the file system of the virtual machine and keep an archive somewhere, like in the big encrypted archive bucket we created last month. Then we disable, meaning we delete, the jobs once they are backed up and the backup is checked, and we create three multi-branch jobs instead of a GitHub organization scan, with a list of three jobs. I mean, three configurations are acceptable, even manually managed.
The goal is to be sure that we start from a clean state and to demonstrate the fix: during the initial scan we should see that only the tags released yesterday get rebuilt and overridden on Docker Hub. That would demonstrate that the issue came from these old jobs, with a bunch of config XML files that were upgraded over the years; I think they were created in 2015. So let's say my gut feeling is that something in the XML might be wrong, or it's a weird bug in the GitHub organization scanning. The goal is to have fewer moving pieces, with each of these three jobs under our control. The backup is so that if anything goes wrong, we don't lose the build history. That's really important, because this is the audit log for deployments of images; we need that build history anyway. Finally, in the long term, and we have an issue that I will talk about later, we should move these jobs from trusted.ci to release.ci, at least the job mentioned on 2845 which builds the official Jenkins Docker controller image. We have to discuss whether the three jobs mentioned on that issue for the Docker agent images should be moved to release.ci as well: are they official Jenkins release assets, or are they something else? That discussion should happen. The benefit of doing that is that it will enable us to manage the job configuration as code, like we do on infra.ci. But that's long term. Is there anyone volunteering to do that, with me assisting? Or if no one is interested, I take it, I don't mind. I'm up for being a rubber duck and seeing how you handle it, if you agree. Yep. Okay. There isn't any voice against considering that as really important; not an emergency, but really important, so we should do it during the next milestone. Yeah, I certainly find it very distracting to have the latest image go back in time. I agree with you, friends don't let friends use latest, but there are use cases where latest sometimes helps and we don't want it broken. In any case, we do use latest.
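The backup step of that battle plan boils down to deciding which files under the controller's jobs tree must survive. Below is a minimal sketch of that selection, assuming the standard $JENKINS_HOME/jobs layout (a config.xml per job, build records under builds/, scratch data under workspace/); the exact skip list is an assumption to adapt on the real machine, not the team's actual procedure.

```python
from pathlib import PurePosixPath

# Directories we never need in the archive: workspaces are scratch space,
# caches are reproducible. Hypothetical list - adjust to the real layout.
SKIP_PARTS = {"workspace", "workspace@tmp", "caches"}


def keep_in_backup(path):
    """Decide whether a file under $JENKINS_HOME/jobs/ belongs in the archive.

    We keep the job configuration (config.xml) and the build history
    (everything under a builds/ directory), since the build records are
    the audit log of past image deployments.
    """
    parts = PurePosixPath(path).parts
    if any(p in SKIP_PARTS for p in parts):
        return False
    return path.endswith("config.xml") or "builds" in parts
```

Feeding such a predicate to a tar or rsync wrapper keeps the archive small while preserving exactly the audit trail mentioned above.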
Not for the agents; for the controller. So sometimes latest is good. We are using LTS for the controllers... oh, sorry, yes, that's the latest LTS pattern, but it's not exactly the latest tag. Well, and I hope even there we're calling out an explicit version number, but if we aren't, we will be in the future. Not on ci.jenkins.io, because the security advisory process requires us to be quick, and right now we are stuck with latest; there is an issue open about that. All right. Okay, next issue: ci.jenkins.io, move the remaining ACI workloads to Kubernetes. I'm not sure why it's here. Yeah, it was an old issue without any milestone, so I figured we'd add it to the current one. Okay, so the scope of that issue would be to create a new node pool on at least one of the Kubernetes clusters, or a new cluster, with Windows agents for ci.jenkins.io, migrate the ACI templates to pod templates there, and finally remove the ACI usage. I propose to put that back in the backlog or remove the milestone; I don't see a reason to do it right now, but please challenge me. No problem with putting it in the next milestone or removing the milestone. Yeah. Is it okay for you to put it in the backlog? Yes. Given your intent was to not lose that issue somewhere in a black hole... though somehow this is a new black hole. Thanks, Hervé. Exclude non-numeric plugin versions from the update center. Oh, no. So, okay, it's a good thing that this issue was opened on the helpdesk, because it's tracked properly, but I'm not sure the infra team should be responsible for that part unless no one at all takes care of it; I don't feel like I have the skills for handling that one, right? I'm wondering if these are recent releases or old ones staying in the existing list. My question is not what the problem is; my question is whether that is a problem we should solve. I'm not really sure.
I know that Daniel Beck will accept pull requests to the update center repository, and that seems like the place where this change would have to be made. I don't know that we're the ones who have to make it. I think that was what you were asking, Damien, right? Because this seems like a change in a pattern that Daniel may simply reject and say: no, I'm sorry, plugins are allowed to have whatever version string they wish. What do you think about moving that issue to the update center repository itself, to help scope it and put it on Daniel's radar? Yeah, but if Daniel rejects it, it will be an issue for the plugin site. So either we move this issue from repo to repo, or we ping Daniel on it explaining why. The update-center2 repository doesn't have GitHub issues enabled, and I'm pretty sure that if Daniel wanted them enabled, he would enable them. So I don't think we can move the helpdesk issue; we just need to talk with him about it. And if Daniel says no, then it may be that Gavin Mogan wants to consider putting something on the plugin site to remove these odd version numbers. Yeah, a non-numeric release version is not a particularly interesting version, right? Both of you are making good points. Hervé or Mark, can one of you take the lead on this one and ping both Daniel and Gavin? Sure, let me start the comments on that helpdesk issue. Take it that way, because it's really not something that we as an infra team should fix; rather, we're just coordinating a conversation between the two groups that might fix it. Yep; as stated by Hervé, we still have the umbrella of the helpdesk as a multi-repository tracker, so we still have a foot in that area and a voice on the matter. So let's keep it there. Both of you are making good arguments. Thanks Mark. And that was 3317. Thank you, okay, got it. I've assigned it to you, Mark.
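For reference, the kind of filter being discussed could look like the sketch below; whether update-center2 or the plugin site should apply it, and what exactly counts as "numeric", is precisely the question for Daniel and Gavin. The pattern here is only an illustration, not the actual update center rule.

```python
import re

# Accept only dotted numeric versions such as "2.375.2" or "1.0".
# A real rule might also allow qualifiers like "2.7-beta-1"; this strict
# pattern is just an illustration of the idea under discussion.
NUMERIC_VERSION = re.compile(r"\d+(\.\d+)*")


def is_numeric_version(version):
    """True when the version string is purely dotted-numeric."""
    return NUMERIC_VERSION.fullmatch(version) is not None


def filter_releases(versions):
    """Keep only versions that sort meaningfully as numbers."""
    return [v for v in versions if is_numeric_version(v)]
```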
If you don't have the time, we will take care of it next week. It's still a bit too early, with my brain just back from the holidays, to fully process the intent of this one; that's why I'm asking for other leads. So I've added that to the next milestone, thanks. Blocked from creating a new account by the anti-spam system: I think this one is waiting for an answer from the user. Oh, no. No, I didn't find this email in the admin interface. So it makes sense that for this email there is no account associated, because when the anti-spam system is triggered, I'm not sure: there are code paths where the account is created and some where it is not, and both code paths lead to that message. So we have to check the logs of the application. It was last week, so I guess they might be lost already. So we should retry creating an account with this name while watching the account.jenkins.io logs to see if there is anything that could help. I don't mind taking that one unless someone is interested. Okay, it's on the next milestone. Next issue: "recuperar la contraseña" ("recover the password"). Sí, hablo español. And you sound actually quite authentic. So that one is the one waiting for feedback from the user, is that correct? Correct, there was never a response to my email, and without a response there's no evidence that the person who submitted this is in fact in control of that account. So I think it would be very dangerous for us to act on this request when they haven't confirmed through the one path we know they must have control of: the email. Yep. Since it was one week ago, is there any voice against closing as not planned, with a message telling the user that we keep it closed unless they answer the email? I think that's good. Okay: discussed during the weekly meeting, closing as no answer from the user; feel free to reopen by answering the mention. So that one moves to closed as not planned.
Am I understanding it correctly, Hervé? Yep. Next issue, listed since before Christmas: we had changes on the Terraform AWS EKS module and had to recreate the cluster. That issue tracks all the changes that had to be done to recreate the EKS instance behind repo.aws.jenkins.io. The last step is that the Let's Encrypt certificate is not generated, because Let's Encrypt receives an authentication error when trying to retrieve the HTTP challenge. I'm not sure why it worked during the first attempt. My guess is that there is something weird in the NGINX ingress. So the first solution would be to switch to the DNS challenge; that should be easier. There are, let's say, security concerns in that area, not issues. But yeah, we have to put that back. I propose that anyone interested can work on it; it's not a priority, because right now we have to finish the private network and migrate everything, unless someone objects. The reason is that the only service concerned, repo.aws.jenkins.io, is not used right now, because of a whole chain of events whose root cause is the Azure routes network. So I propose to move this one to the backlog, and anyone interested can look at the ingress and see what is happening. Any voice against that, anyone interested to focus on it, or something I could have forgotten? Nope, okay. Next one: Netlify site for jenkins.io components. That issue was opened by Gavin right before Christmas; sorry Gavin, we weren't able to handle this one. It's a request to track a Netlify site, and there are some additional steps to be done. Is there anyone interested to assist Gavin on that part, or should I take it? One, two, three... no volunteer? I'll take it, yeah, I can take it, but I'll ask you questions. Okay, I can assist if needed. Okay: Netlify and GitHub. Now I have removed the triage label and it's for the next milestone. Cool, thanks everybody. Some Jenkins mirrors are using the wrong media type.
I didn't have time to work on this media-type one. It's a Puppet change on the Apache configuration. It is not an emergency; anyone interested can help, but it's not a priority. I propose to move it back to the backlog, and anyone, including myself, can work on it later. Okay for you? The reason why I'm cautious is that there might be a conflict with the Puppet Apache module, which we haven't updated for at least a quarter because it breaks tests and other elements. So we should be careful updating this one, because it could expose sensitive areas of the virtual machines, especially pkg.jenkins.io; we don't want these services to be broken. Okay, mirror stats showing wrong results. Same, I'm not sure why it's on the current milestone; we keep moving it from milestone to milestone. The goal is to update the status of some mirror results. Okay. Hervé, is it okay if I take it, if it's only documentation? I'd move it to the next milestone. Actually, I had assumed it was more than documentation, because the status page itself reports servers we don't expect; but on the other side, I don't know that this is valuable enough for us to spend any time on. We've got other things that are much higher priority, and if we were to just drop it and say we're not going to do it for now, I wouldn't feel any guilt at all. I think it's perfectly okay to say we're just going to accept this is reality and not put it in our current work; we've got other things that are higher priority. Makes sense. Yeah, I would have tried to enable them from the command line, just from the command line; even without going anywhere, without doing anything more, just to see if there are errors or not. Yeah, well, we know that ftp.belnet.be is offline and we know that Cerberion is offline; those two we know are offline. Alleyoon.com is actually online and they had wanted to engage with us, but we didn't at the time, and then they stopped responding.
So, I mean, no objections if we do it; for me it's just not as urgent, not as pressing, as some of the other things on our list. Okay. The Cerberion one is worrying me. I wouldn't experiment on this one, because the last time we tried to put a mirror back online, we ended up with quite some issues from users: people were trying to download files and it was breaking their downloads. Especially with an LTS coming tomorrow, I would advise against playing around with the mirrors during the one or two days around the LTS. I like that kind of conservative approach, thank you. But I have no problem planning it for the next iteration then, if it's okay for you. It can wait. Okay. Proposal, counter-proposal: we contact the persons in charge of the four mirrors. So first, we dig into the history associated with the four mirrors, what we can get from the mailing list and the issues; we check the contacts, if we have some, from the runbooks; and we contact the administrators to ask them if they are still willing to provide these mirrors, especially the Belnet and Cerberion ones. So, they've already responded to us and said no, they're not. Oh, Belnet too? Yes; when you look in the history, I think you'll find both of them have said no, we're no longer able to host. I know Cerberion did, and I thought the others had also said no, they aren't willing to host, and our status page is incorrectly showing unwilling participants in the list as not contributing. Okay, so that means we just have to double check, and add links on the issue pointing to that history; that will be an argument to remove these two from the list. Is that okay for you, or for me to take this one? Okay. Don't forget what I said about the LTS, because if you remember correctly, last time we cleaned up and removed a mirror, there were a few minutes of unavailability and HTTP 500 errors. So removing these has to be planned and announced.
It's not a problem that there are errors, but we need to announce it. Is it okay for you to keep track of the next steps, Hervé? I can assist if you don't feel like doing it alone. No problem. AKS: add a cluster for private agents. Right before Christmas, Hervé successfully finished spinning up the private cluster, deploying an empty shell for the future infra.ci, if I'm correct. Three weeks ago I quickly put together that set of task lists, and now it has to be worked on. Are you okay to take that as your main issue? Or are you bored with that topic and do you want to start mentoring Stéphane on the public cluster instead? No, it's... I should be able to finish this. Okay. Both of your choices are okay, again; you have the right to be bored by long-running tasks. Speaking of long-running tasks: realign repo.jenkins-ci.org permissions. I have to put my hands on that topic again. The latest news is that the repository repo.jenkins-ci.org was migrated successfully in December; it's faster than it was before, so thanks a lot to JFrog. We need to do a post-migration meeting with JFrog, but due to personal scheduling items it has been moved to next week, sorry for that. I was able to confirm that the issue I had when setting up repositories in the UI looks like it's gone; I have to confirm, but I tried to change repositories and it was instantly taken into account. So that is my task, and I have to go back to documenting it with your help, Mark. I think we need to set up a few pairing sessions, both of us, to continue the work where we left it in the middle of December. Yes. Is there any question on that topic? Nope, okay. One point: I did ask for the report of bandwidth utilization, and JFrog support has acknowledged that they have received the request and are working on it. They've told us they can only give us data since December 18th, and my answer was that's perfectly fine; in fact, that's preferred.
We would like to know what the current usage is and see if we can identify any guilty parties, by IP address or by domain name, that we could ban to save bandwidth quickly. We understand that won't be the long-term solution, but if there are easy wins, we want to know about them. Absolutely. The plan for me on my side is to validate some assumptions around how the plugins are built; the goal will be to enable authentication. One of the main infrastructure tasks I have to work on right now, along with planning and describing tests, is to find out if there is a technical solution for LDAP replication, so that any time we restart the LDAP, it stays available; otherwise developers will be impacted by our monolithic LDAP, as is the case today. I found solutions that work in the context of virtual machines and huge setups; I have to validate them on Kubernetes. Kubernetes will help as a framework for availability, but I have to dig more into that topic. Anyone interested can try on their own; I will run experiments on a local Kubernetes cluster first, and will then test on a duplicate copy for some time before going further. Time for the new items. We had a bunch of issues opened recently. One from Yaroslav, from the security team: due to HSTS, they are having issues with Chrome but not with Firefox. It sounds like there are subtle differences in the way HSTS is handled by web browsers for trusted.ci and cert.ci. Right now, these two controllers are serving self-signed certificates; most of us ignore and accept the exception, but due to HSTS that's not possible in Chrome. As pointed out by Tim, there is already a helpdesk issue open for that, about a valid certificate for trusted.ci and cert.ci. Why don't we have that right now? Because Let's Encrypt on these virtual machines is configured by default for the HTTP challenge, which requires Let's Encrypt to directly access the service, while these services are hidden behind the VPN or SSH tunnels.
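For context, the DNS challenge mentioned here sidesteps the reachability problem because the proof is published in DNS rather than served over HTTP. Per RFC 8555 section 8.4, the ACME client publishes a TXT record at _acme-challenge.<domain> whose value is the unpadded base64url SHA-256 digest of the key authorization. certbot computes all of this itself when given DNS credentials, so the sketch below is purely illustrative of what happens under the hood; the sample inputs are made up.

```python
import base64
import hashlib


def dns01_record_name(domain):
    """The record is published at _acme-challenge.<domain> (RFC 8555, 8.4)."""
    return f"_acme-challenge.{domain}"


def dns01_txt_value(key_authorization):
    """TXT record value for an ACME dns-01 challenge: the base64url-encoded
    SHA-256 digest of the key authorization, with '=' padding stripped."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Because only the DNS API needs to be reachable, the controllers can stay behind the VPN and still obtain browser-trusted certificates.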
The good news is that Let's Encrypt on these setups, with the certbot process rather than Kubernetes, is absolutely able to handle DNS validation, which means that if we provide the correct DNS API credentials, certbot should be able to validate. Worst case, we can also do it manually, but that would have to be renewed every three months, so quite the pain. So the goal will be to find a way to configure the Let's Encrypt module with the correct credentials, using limited accounts. As Tim and I discovered a few months ago, we should be able to create technical Azure accounts that are restricted to a single DNS zone, so that should help avoid issues. Though I don't mind having full DNS manipulation from these machines, because, I mean, if an unexpected person or process accesses these machines, they already reach things that are way more sensitive than the DNS. So we could get started with a full account and then restrict access; that would solve Yaroslav's issue, because the certificates presented by these domains would be valid and recognized by every browser. So is there anyone willing to work on that? I can assist, or I can take it, I don't mind. Will it be okay given your bandwidth, because you have quite a big topic around the private cluster? I don't know. Is it okay if I set Stéphane as the volunteer on that one? I think we'll push him onto this issue. Yeah, you can push me, but I will need a lot of your time on that. That's why I need to ask for it. Yes, push me, yeah. You're falling off the cliff. Okay, I've added both issues. Maven 3.8.7: that one is perfectly fine for you, Stéphane, to own. Let's see; now I will have everything over the cliff. So, Maven 3.8.7 is generally available. It has been merged on the main branch of our Packer images, including the Windows container templates. So we now have to release the Packer images and let the updatecli pull requests open and do their job.
And once done, we have to communicate to developers: hey everyone, Maven 3.8.7 is now the default version. Just in case something breaks somewhere, developers won't waste time thinking: oh, my build is failing with a weird error message. Maven 3.8.7 has been verified on Jenkins core and on many of the plugins I use, and I've been using it for a week or more, so I think your plan is exactly right and has a good chance of success. Of course, Stéphane, I'm pushing you over the cliff, but you know that we are here to assist you. I can fly, as you can see from my name. I'm looking at the new issues. So: renew the code signing certificates. That one is quite important. As I understand, Stéphane and Mark, during the last weekly there was a message saying that in March, the code signing certificate for Jenkins will expire. Mark, I might need help on this one; is it okay if we set up a session? Yes, you and I should work on it. And then we may actually have to engage Olivier Vernin, just because the last time he and I did it was two or three years ago. For anything we do that infrequently, I think there is a runbook entry, but even with the runbook entry, it's such old knowledge that let's pair together to do it. Absolutely. I don't think I was there last time; I think you are the only one, Mark. You are correct, you were not; that really was Olivier and me. And I think we involved others as well. Okay. Because if I remember correctly, code signing certificates are very, very special. Yep. I think that requires specific access to the Azure vaults, specific access to a root certificate, and specific access to the private key, which is in the private repository. I think all the GitHub organization admins of jenkins-infra should be able to access the private repo, and all the Azure admins should have access too.
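Since the March deadline is exactly the kind of date that slips, a small check of the certificate's notAfter date could back up the runbook. The sketch below assumes the default text output of `openssl x509 -enddate -noout` run against the signing certificate; the monitoring wiring around it, and the certificate file itself, are hypothetical.

```python
import datetime as dt


def parse_openssl_enddate(line):
    """Parse the 'notAfter=...' line printed by `openssl x509 -enddate -noout`.

    Assumes openssl's default C-locale text format,
    e.g. 'notAfter=Mar  1 12:00:00 2023 GMT'.
    """
    value = line.split("=", 1)[1].strip()
    return dt.datetime.strptime(value, "%b %d %H:%M:%S %Y %Z")


def days_left(not_after, now=None):
    """Days remaining before the certificate expires (negative = expired)."""
    now = now or dt.datetime.utcnow()
    return (not_after - now).days
```

A cron job alerting when `days_left` drops below, say, 60 would leave enough lead time to schedule the pairing session with Olivier.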
So I guess Stéphane and Hervé could theoretically have access to the information, but better to check first and follow the runbook. Any question? I think that's all. Hervé and Stéphane, we discussed yesterday and mentioned eventually having Hervé mentor Stéphane to start the work on creating the new public network and then the new public cluster; I think the public network is almost done, and then we create the new Kubernetes cluster. Given all the tasks we just added, do you think you can start working on that next iteration, or do you want to wait one more iteration? I think it can be done in parallel. I'm not quite sure how much time the tasks I have right now will take, so we will see. We can still report in one week, next Tuesday; we will see if it's okay for you to have that stay in the milestone. Okay. I propose we don't add it to the milestone: no pressure, there are enough tasks back from the holidays. If you have some time, both of you, I'll let you meet, discuss and start planning, and then you can start working if you feel like it. I feel like it's too much for one milestone, so let's see what we have for the other tasks. I don't want to put pressure on this; it's something really sensitive, so I prefer we take time for planning. Hervé, if you feel like it, when you have some time you can start planning and writing down the steps on the issue. Is that okay, based on your experience with the previous cluster? What else do we have? The org permissions one: we keep it open as a permanent recurring item. Good point. And that one, thank you very much; Adrien needs it. Yeah. Hervé, should we take it, both of us, as the first one? Good catch, thanks. We have other planning items: update the supported documentation for the JDK end of life; we still have until the end of the month. New repo for the Jenkins board, but that one is not in our area. No, and I had the topic; I missed bringing it to the board meeting on Monday.
I mentioned it there, but only mentioned that I had made the mistake of not bringing it to the board. Sorry about that. So, off the list. It's kept without a milestone because it's outside the usual planning. Okay. Plugin site is behind on Jenkins release tweets. I'm not sure; yeah, I don't see what we can really do. I don't think it was us. Okay, let's keep it there. Okay, no new issues, so I think we are done here. Are there other issues or things you want to plan for the next milestone, or can we get started on this one? Cool. Okay, so then I will update and publish the notes as usual. Thanks; see each other next week or during the week. I'm going to stop the recording and then you will be able to speak freely. For everyone listening to us: see you next week. Bye bye. Bye bye.