Started, and here is the shared screen. Awesome, so welcome everybody. The first topic on the agenda is the Azure file storage outage we had last week. Basically, what happened is that the service recovered by itself last Thursday. I'm still waiting for information from Microsoft to know what happened. On our side, the service was relying on Azure File Storage to distribute files and to generate checksums for the files. Now the Azure file storage is usable again: we can mount it, we can use the files, we can do all the things that we need to do. So technically the official location is running again. We are still running on the fallback right now, because I'm waiting to know if the official location is reliable enough. At the moment we are waiting for Microsoft to confirm, or at least to tell us, what the issue was. They asked for seven to ten days to investigate in order to understand what went wrong. So that's the current state; nothing has moved there. Since it's working correctly, I would be interested in moving the regular traffic from the fallback back to the official location. I'll keep the fallback in case something goes wrong again, but I would prefer to send the regular traffic to our Kubernetes cluster, which is more appropriate for handling the load than the single machine that we are using as a fallback. There is a dashboard in the notes where you can see that using the fallback service increased the load on that machine. So that's why I would like to revert that change. Any questions, Mark, on this? So — do you have a timeline? I would love for us to get back to using the official production setup rather than a fallback. Are you envisioning doing that over the next few days? We've got a good window right now: Thursday and Friday are US holidays.
So the load will probably be lighter over those two national holidays, here in the US at least. Or are you just trying to find the time now — is there a date? I think it would be nice to do it on Thursday. I don't want to do it now: we just released a new weekly, so I would not change anything today or tomorrow. But Thursday sounds like a really good time. There is no technical risk, because it's just a DNS record that we need to update, and that's it. So I can definitely switch the DNS on Thursday. That's really elegant, because it means that when new builds and new releases are published, they are already referring to two separate names, get.jenkins.io and pkg.jenkins.io. Right now the DNS records for those are the same, and what you'll do is put them back to having two different IP addresses. Yes, exactly. OK. So get.jenkins.io will be redirected to the Kubernetes cluster, while pkg.jenkins.io will rely on the machine that we have been running historically. And then we get the benefit that next week we will do a weekly release on Tuesday and the LTS on Wednesday. So by the next weekly release, if there's a problem, we can flag it and say, oh, we have to do this or that in order to ensure that the LTS arrives OK. Yes, exactly. OK, then Wednesday next week, Jenkins 2.263.1. Oh, speaking of which, I just need to remind myself — I'm going to put a separate action item here, a separate topic here, for the LTS. OK, back to you. The next topic is about monitoring the packages. This is something that I've been talking about for a while. The idea was to monitor the Maven repository, repo.jenkins-ci.org, to know what is the latest Jenkins version that you can download. From that Maven repository, we have a way to know what's the latest LTS release and what's the latest weekly.
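To picture the DNS switch described above: both names currently resolve to the same host, and the change just repoints one record back at the cluster. A rough zone-file sketch of the before/after states — the IP addresses here are placeholders, not the real ones:

```text
; Before the switch: both names point at the fallback machine
get.jenkins.io.   300  IN  A  192.0.2.10
pkg.jenkins.io.   300  IN  A  192.0.2.10

; After the switch: get.jenkins.io goes back to the Kubernetes cluster,
; pkg.jenkins.io stays on the historical machine
get.jenkins.io.   300  IN  A  192.0.2.20
pkg.jenkins.io.   300  IN  A  192.0.2.10
```

Because only the `get.jenkins.io` record changes, reverting is equally cheap if the official location misbehaves again.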
The idea was to create a Datadog custom check that fetches that information from the Maven repository, and then monitors get.jenkins.io to know whether we published the appropriate distribution packages. I'm happy to say that it's now working; I've also put a dashboard link here. Because of the way we release the weekly, the LTS, and so on, we first do a Maven release — which means that we update the remote Maven repository — and then we create the packages and publish them. Now that we know there is a gap of around 20 minutes between the moment we publish the release and the moment we create the packages, the idea would be to close that gap, to be sure that when we release, everything is ready, and we don't have to wait 20 minutes to have everything. So your checks will not be harmed by the acceleration you'll do in bringing those things closer together — they'll still be okay? What do you mean by "my check will not be"? Well, today "latest package available" here shows the war, for instance. I think there's a way I can change this view, isn't there? We could actually see this thing on its time scale. Oh, here we go — if I adjust this instead of... oh no, it's not adjustable. No, this is the public URL, so you cannot adjust it. If you want to put that on a different time frame, feel free to ask; this is just the URL that we let other people use. Personally, I look directly at the dashboard inside Datadog. Yeah, I will increase the window. In this case, because we released the weekly today, we saw a few hours ago that there was a gap for the weekly release in that dashboard. And I think you were right: it was a 15 to 30 minute gap between war availability and a package — for instance the MSI installer, the worst case.
I just want to clarify here that the war we are monitoring is not the one from the Maven repository; it's the one from get.jenkins.io. I wrote a Python script that reads the Maven repository metadata file to know what the latest version is, and then I do some queries on get.jenkins.io. Based on suggestions from Garrett, I'm going to switch from get.jenkins.io to archives.jenkins.io. The reason is that get.jenkins.io has stats about how many times each package is downloaded, and I'm not sure how that query is counted on the service. What I fear is that we would inflate the download count for a specific version. So instead I'm going to query archives.jenkins.io directly, which we automatically update each time we do a release. Great, excellent. Any questions on this? None from me. Thanks very much for implementing it. The next topic is the Windows Docker image. If I understand correctly, the current situation is that the Docker image is ready to be built and published. The problem is that it takes a significant time to build and publish the images, which affects security releases. So now Garrett is working to identify how to reduce the time of that process. Garrett? Yes. A PR went up to remove what were felt to be the unused Windows Docker images that were created and tested. So it's currently only doing Java 11 and HotSpot, and it's very quick now. The pull request there just adds an extra one. Even so, it's down from four images to two, so it's significantly faster than it was. I think that PR is good to go. Longer term, I think we want to work out how to better parallelize those Windows builds; we can definitely improve those. And the complaint is that Docker image building is too slow? Yes — having to wait an hour during core security releases. And I think this is exactly trying to address the concern that Daniel expressed, right?
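The check described above can be sketched roughly as follows: read the Maven repository's `maven-metadata.xml` to find the latest version, then derive the distribution-package URLs to probe. This is an illustrative sketch, not the production check — the embedded XML sample and the exact URL path patterns are assumptions:

```python
import xml.etree.ElementTree as ET

# Illustrative sample of the maven-metadata.xml served by the Maven
# repository (shape only; version numbers are examples).
SAMPLE_METADATA = """<?xml version="1.0" encoding="UTF-8"?>
<metadata>
  <groupId>org.jenkins-ci.main</groupId>
  <artifactId>jenkins-war</artifactId>
  <versioning>
    <latest>2.268</latest>
    <release>2.268</release>
    <versions>
      <version>2.263.1</version>
      <version>2.267</version>
      <version>2.268</version>
    </versions>
  </versioning>
</metadata>
"""


def latest_version(metadata_xml: str) -> str:
    """Extract the <latest> version from Maven repository metadata."""
    root = ET.fromstring(metadata_xml)
    return root.findtext("./versioning/latest")


def package_urls(version: str, base: str = "https://get.jenkins.io") -> list:
    """Build the distribution-package URLs the check would probe.

    The path patterns here are hypothetical placeholders; a real check
    would use the site's actual layout (and archives.jenkins.io to avoid
    skewing download statistics, as discussed above).
    """
    return [
        f"{base}/war/{version}/jenkins.war",
        f"{base}/debian/jenkins_{version}_all.deb",
        f"{base}/windows/{version}/jenkins.msi",
    ]


version = latest_version(SAMPLE_METADATA)
urls = package_urls(version)
```

A real check would then issue HEAD requests against each URL and report a metric per package, which is what makes the 15–30 minute publication gap visible on the dashboard.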
That hey, a one-hour build for the total set of Docker images is too long. And so, Gareth, your reduction from four images to two — these were built in series, right? It was building each image one after the other, not even running them in parallel on the same machine. It's actually the testing that takes the time, or it seems to be; the building is relatively quick, I think, but it builds them all and then tests them all one after the other, and that's the bit that takes the time. Daniel's issue is just so many builds, and he has to run them three times. And the three times are for three different lines? It's for current LTS, previous LTS, and weekly, I think. Oh, I see — OK, right: multiple executions of the scripts. Sure — weekly and LTS, plus LTS minus one; that already doubles it compared to just weekly and LTS. Yeah, normally it's about an hour for him during a security release. OK, all right. We were talking about whether staging support would help — if there could be a way of staging them into a different artifact repo before publishing, his issue would go away. Can we dig into that question? That's a really good one, and I'm not sure. We should be able to stage them in Artifactory, I think. It might mean some script changes, probably, or at least some pipeline changes for security releases. Yeah — I thought until today that the Docker image generation was distinct and separate from the release build process, that it just consumes a release and detects it on a clock. So I assume Daniel was launching those Docker image builds once he had the security war built; he has to launch them interactively to reduce the time as much as he can. The problem is he has to wait for the war to be public, and he's doing it before he can publish the advisory. So they could build from a private war, publish to a private repo, and then have a separate script for just promoting it.
Yeah, I think that's what you're saying — a private registry, and then publish. Now, Olivier, I'm assuming that your concepts around staging haven't touched this yet; it was mostly about staging the war, the Debian file, the RPM, the MSI. Did I understand correctly, or did you also consider how to stage Docker images? So, I never worked on the current staging environments. We already publish a private war file — the private war is already available. The thing is, right now we have different Maven repositories for security releases. What we would have to do here is create a private Docker registry, and probably we would have to work on the scripts for doing that as well. So I think we are quite far from staging Docker images. Yeah — that would be a good enhancement there if anyone wants to work on it. Is there anything else on the Windows Docker image? So we've got a pull request ready; it needs review by people who are expert in that area. The question that I have is: should we merge this PR before next week's weekly and stable releases, or do we wait until after those two releases? Good question. What I particularly fear is that, with the US holidays on Thursday and Friday, we may not have a lot of bandwidth if we did that before the next releases. Yeah. Gareth, your opinion — would you be OK if we delayed until, let's say, next Thursday, nine days from now, before this was merged? Yeah, sure, no problem with that. And I guess one of the gaps there is that I thought it was in November that Microsoft shipped its last security fix for 1809. So even if that was the last security fix, we're not really exposed until the next security fix comes out from Microsoft. Sure, I can wait a week. Great. But worst case scenario, we can rebuild the Docker image if we have to.
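The staging idea discussed above — build from the private war, push to a private registry, then promote publicly once the advisory is out — could look roughly like the sketch below. The registry name, image name, and build argument are hypothetical placeholders; the docker command lines are only assembled here, not executed:

```python
from typing import List

PRIVATE_REGISTRY = "private-registry.example.com"  # hypothetical staging registry
PUBLIC_IMAGE = "jenkins/jenkins"                   # public image name


def build_and_stage(version: str) -> List[List[str]]:
    """Commands to build from the private war and push to the private registry.

    Run before the advisory is published; nothing is publicly visible yet.
    """
    staged = f"{PRIVATE_REGISTRY}/jenkins:{version}"
    return [
        # JENKINS_VERSION as a build arg is an assumption for illustration.
        ["docker", "build", "--build-arg", f"JENKINS_VERSION={version}",
         "-t", staged, "."],
        ["docker", "push", staged],
    ]


def promote(version: str) -> List[List[str]]:
    """Commands run once the advisory goes out: retag and push publicly."""
    staged = f"{PRIVATE_REGISTRY}/jenkins:{version}"
    public = f"{PUBLIC_IMAGE}:{version}"
    return [
        ["docker", "pull", staged],
        ["docker", "tag", staged, public],
        ["docker", "push", public],
    ]
```

The point of splitting the two phases is that the hour-long build-and-test work happens ahead of time against the private war, and publication day only costs a pull/tag/push.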
Right. OK, great. Thank you. On the Docker image, there's a plugin installation manager CLI pull request that's open, and it would be good to get it merged. Oh, so this is for 2.102? Or is it even newer? At least that — yeah, I think it's that one. Great. We had a conversation yesterday in the docs channel, Tim; there was some concern raised by Vlad Silverman asking if it's OK that we're using the plugin installation manager CLI everywhere in the documentation and continuing to use it even more. I'm assuming it is OK and that it's our strong preference. Does that make sense to you, or are there places where we should not use it? No, definitely use it — the old install-plugins script is going to get deprecated. I'm using it constantly for reproducing issues. Excellent, thank you. That's good; that was my answer in the doc yesterday, and you've confirmed it. Thanks very much. Yeah, I only need to review the documentation, and possibly the install-plugins stuff, soon. Yes — I think that plugin installation manager change went in. Oh, good. Let me check. Yeah... no, it didn't get merged. Very good. OK, so then that means 2.268 will have it — or already has it. Anything else on the Windows Docker topic? Sounds like we can move on. So this was a topic for me, just to remind me. Olivier, I believe what I need to do is create a profile for stable-2.263. Do you hear me? We do, yes — you're very audible. OK, perfect, thanks. Everything is explained in the jenkins-infra documentation, which is key to the story. There is a bunch of instructions there for what needs to be done. I think we have to create a branch on the jenkins-infra release repository; inside that specific branch we have to update a bunch of environment variables, and then we are ready to trigger the stable release.
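For reference, a typical Plugin Installation Manager CLI invocation — the kind the documentation recommends — looks like the command assembled below. The jar path, war path, download directory, and plugin names are illustrative placeholders; the flags shown (`--war`, `--plugin-download-directory`, `--plugins`) are real options of the tool:

```python
# Assemble a typical Plugin Installation Manager CLI command line.
# All paths and plugin names here are placeholders for illustration.
jar = "jenkins-plugin-manager.jar"

cmd = [
    "java", "-jar", jar,
    "--war", "/usr/share/jenkins/jenkins.war",               # resolve versions against this war
    "--plugin-download-directory", "/usr/share/jenkins/ref/plugins",
    "--plugins", "git", "configuration-as-code",
]

# To actually run it, something like: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

This is also why it works well for reproducing plugin-related issues: the same one-liner pins an exact war and plugin set on any machine.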
But bring any questions to me. OK — let me take that action. I was the one who missed doing it for 2.249.1, and I would love to cement it mentally by doing it myself. Is that OK with you if I do it? Yes. That's all that I had. Oh, and — changelog and upgrade guide drafts are coming very soon. Sorry that I'm late. Perfect. It sounds like we are good for today's meeting. Any last thing that you want to bring up here? I was going to ask one thing we've thought about. Yeah, sorry — go ahead. Now that we've moved over and are using Keycloak more widely, what's the plan? Currently we're still in the same situation where we're reliant on LDAP, and we lost... Sorry, I had to look — yeah, continue. I was just going to say, we lost that part of the setup. OK, so basically, once we are in the Linux Foundation, we have to change the user directory. Right now we are using the generic user directory, and we have to configure the OpenLDAP one. In that case we would still be using OpenLDAP, but we could directly use Keycloak as well; we just have to configure JIRA. This is something that we should do in the coming weeks — I was playing with that yesterday. OK. It would be nice to use Keycloak, because then we can turn on things like GitHub and Twitter logins and whatnot, so people don't have to sign up for accounts. Yeah, I totally agree with you. We just have to add one new connector. Yeah, I agree — and I was playing with that yesterday. Now, is that something that we can change ourselves, or do we need to involve the Linux Foundation people to make that level of change to JIRA? This is something that we can change: we still have access to JIRA and to the admin accounts, so we just have to do one additional configuration.
So in the first iteration we will have two connectors: the current one that we have at the moment, and a second one using OAuth with our token. In the back end it would be exactly the same users, because in both cases our users come from the same source. Once we are able to connect using Keycloak, we can remove the old connector. We just have to confirm with the Linux Foundation that we won't overwrite any configuration on their side, because I noticed that some configuration was done directly in the Java properties. So we may just have to double-check with the Linux Foundation folks that we can simply implement this change, but it should be OK. Yeah, it would be nice to do it. And I don't know if you already did the experiment, Tim, with configuring JIRA and an OAuth token. I think I managed to get it working, but it was a bit difficult because I was running it in Docker, and getting the redirect URLs mapped properly was a bit of a pain. I think I managed to get it working; I just don't think I got all the networking right. Yeah, I don't expect that to be a lot of work; it was just that the logging was quite bad, from what I saw. But I think I got in and found that it was talking to localhost when it should have been talking to host.docker.internal — was that the sort of issue? It was all because I was running it in Docker; otherwise I'm sure it was fine to set up. There was just the networking stuff I was having trouble with. Yeah, I didn't want to spend too much time on it before we migrate the instance to the Linux Foundation. But I had a look yesterday because I was investigating some user issues — people who had trouble connecting. By the way, while we are talking about it, did you figure out the email issues? No, not yet.
OK, it's something really weird, because I checked SendGrid, and SendGrid says that you received the email — but you were not the only person complaining about that. So I'm still looking into it. I get some emails; at least in my case I'm receiving them. Do you get notifications from the Jenkins project in JIRA? Yes, I do — I get notifications for INFRA. Then the problem is maybe elsewhere. I'm not receiving notifications from the JENKINS project, but I'm not actively following that part, so maybe that's why. Maybe there is something about that configuration. I'm sending you something. So, Tim, did I get that correct — you get notifications for INFRA tickets, but not for JENKINS tickets? Yes. Strange. So this means it's a configuration on the JENKINS project side. Well, all right — we are running out of time, so I propose to continue the discussion on IRC. Sure. Yeah. So it's INFRA project tickets, and I get notification subscription emails as well — so if I configure a subscription, that works. But I don't get notified when anyone pings me, or creates issues and assigns them to me, or anything like that; I don't think I've been notified of anything there. I'll just check — maybe I've got a JIRA configuration setting somewhere. OK. OK, so let's finish this meeting and then investigate this issue later on. Thanks, everybody. Have a good day, and see you on IRC. Bye-bye. Bye.