Let me share my screen. Welcome, everyone, to the Jenkins infrastructure team meeting. Thanks for being here. A reminder that this is being recorded and that we're governed by the Jenkins code of conduct, so be nice to each other. Topics I've got on the agenda for today include: a new Jenkins infrastructure contributor, the latest status on the Azure storage outage, GitHub publishing of the release war files from Tim if he's available, weekly release status, LTS release prep, Windows Docker image status from Gareth (if you're okay with that, Gareth), and then Jira migration topics. Are there any other items that should be put on our agenda for today? All right, great, then let's go ahead.

I wanted to briefly come back on GitHub releases, but that needs Tim as well, and I hope Tim will join us, so what if we shift that one later in the agenda? For now we'll go through the rest and hope that Tim arrives so we can have that conversation. Okay, great.

First item: we've got a new Jenkins infrastructure contributor, Damien Duportal. Damien, why don't you introduce yourself? Hello there. So, I'm Damien, a former freelancer; I worked for CloudBees in the past, as a training engineer as well. I've been a Jenkins user for almost a decade now, so I'm happy to be part of the adventure with you folks. Super, thank you, Damien; delighted to have you here. Damien knows lots about Terraform, he's been through Traefik, he's been through all sorts of interesting cloud things. Looking forward to his help.

All right, the next topic was the Azure storage outage status. We are still running the workaround, and it works. get.jenkins.io is on the same host as other services right now; that's not a long-term arrangement for us, but it's operational. The plan is to switch back to the Azure file storage.
Next week, that is, after Olivier's back from vacation and after we've finished the 2.263.1 LTS release. There's no answer from Microsoft yet on what went wrong and why it's now working again. They told us they needed seven to ten days; we're now about seven or eight days into it, so I think we're still within their window. But it's naturally worrisome that we don't know why it's working again, and that leaves us nervous. Any questions there?

Would it be possible to create automatic failover infrastructure, since we have the old setup and the one we want to use? That's an interesting idea; I don't know. Let's bring that question to next week's session, or maybe bring it up as a mailing list topic, because I think it's a good idea. We know that we switched, and we know that the switch has been operational now for one or two weeks, so maybe we could have automatic failover if we had this problem again. I'm not sure whether it would work or what the investment is, but assuming it's not a big deal to set up, maybe we should, because until we get clarification from Microsoft on what exactly went wrong, trust in the main service isn't much improved. Right, very good; I think that's an excellent question.

Anything else on the Azure storage outage? Cost-wise, what does the failover infrastructure cost, approximately? My understanding is that it's more expensive for us, though of course I might be completely wrong. That's a good question. I thought that the previous way of doing it, not the workaround, was marginally more expensive because it's using Azure file storage, whereas the workaround is plain storage with no mirrors in practice. No, we still have mirrors. Right, but still. And I don't think we need to take it into account, because I'm quite confident that we won't go beyond this year's budget for sponsorship by the CDF; but in the longer term... Right, right, it's a valid point.
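The automatic-failover idea floated above could, at its simplest, be a health check that routes to whichever storage backend currently responds. The sketch below is purely illustrative: the backend names and the health-probe input are hypothetical, not how the Jenkins infrastructure is actually configured.

```python
# Hypothetical sketch of the automatic-failover idea discussed in the meeting.
# Backend names and health results are illustrative only.

PRIMARY = "azure-file-storage"
FALLBACK = "workaround-host"

def choose_backend(health):
    """Given a {backend_name: is_healthy} map, prefer the primary backend
    and fall back to the workaround; return None when nothing is healthy
    so a caller can page a human instead of serving errors."""
    if health.get(PRIMARY):
        return PRIMARY
    if health.get(FALLBACK):
        return FALLBACK
    return None

# During the outage the primary was down, so traffic would go to the workaround:
print(choose_backend({PRIMARY: False, FALLBACK: True}))  # prints "workaround-host"
```

In practice the hard part is the probe itself and the DNS or proxy switch, which this sketch leaves out; the point is only that the decision logic is small once the probes exist.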
I guess one question that we should be able to ask is what the cost impact of the current situation is; we've been running this way for a week. If it has substantially reduced our costs, that would be an interesting data point. I wouldn't rely on that too much, though, because the bandwidth still needs to be there and the statistics are really estimates. Right, the cost scales with downloads or something like that. Absolutely, download costs. Good point; I don't know the bandwidth portion of that. That's a good observation, especially given that last week, with the release, so many people were downloading releases. Anything else on the storage outage?

Okay, the next topic was the weekly release. 2.269 has been delivered; the build completed successfully and packaged successfully. And as a new addition from Tim Jacomb, the war, the deb file, the RPM file, and the MSI were uploaded to GitHub, so they are now visible there as part of the GitHub Actions-based release, that is, the GitHub release. A similar pattern is used in other projects, like the plugin installation manager. It has some advantages: first, it's a backup download source for the files should our primary hosting go down. It doesn't solve all cases, but it's there if you need it. Right, and one of the questions that was asked about that we'll get to in the GitHub publishing topic, if Tim's available; there were some questions asked that I think are worth discussing.

Okay. So in terms of that 2.269 release, the changelog has been merged and is confirmed visible. I haven't yet confirmed that the Datadog checks are passing, but I have confirmed that the installation acceptance tests for Debian and Red Hat are passing. So as far as I can tell, the weekly release was successful and all parts of it ran as expected.

The next topic was LTS release prep, and there we are releasing 2.263.1 tomorrow.
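As a concrete illustration of the "backup download source" point above: if the primary mirrors were down, a client could fall back to the conventional GitHub release-asset URL. This is a sketch only; the repository, tag, and asset names follow common GitHub conventions and are assumptions here, not a documented Jenkins contract.

```python
# Sketch: falling back to a GitHub release asset as a backup download source.
# Repo, tag, and asset names below are illustrative assumptions.

def github_release_asset_url(repo, tag, asset):
    """Build the conventional GitHub download URL for a release asset."""
    return f"https://github.com/{repo}/releases/download/{tag}/{asset}"

# A fallback downloader could construct something like:
url = github_release_asset_url("jenkinsci/jenkins", "jenkins-2.269", "jenkins.war")
print(url)
```

A real fallback would try the primary mirror first and only fetch this URL on failure; the sketch just shows where the backup copy would live.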
And that will be, if I recall correctly, started interactively, launched interactively, not on a clock, and we then watch the release plan to confirm that it is correct. Because the release candidate was slightly delayed, we didn't get as much testing feedback. Yep; I'm not too concerned about shipping the release anyway, and I want to test it tonight.

It's a good question. I just assumed that we would release as scheduled tomorrow, but I didn't ask Olivier if he wanted to change the release date. It was raised on the mailing list, but yes, there was no response, so I think we should just assume that we release tomorrow. Okay, all right. Anyway, the process is manual, so we just need to ensure that someone clicks the build button. Right, right. My assumption, given Olivier's current status, is that he's on holiday, sort of, so I assumed that I, Mark, would push the launch button early tomorrow, my time. Yeah, that's fine; or somebody on your team could start it earlier, so that by the time you wake up you have a large buffer before promotion. That would be fine as well, if you're willing to look at the plan and be sure there are no surprises; you or Tim would be great. Yeah, it just gives us a bit more time, though the US time zone shouldn't be too much different. Okay, all right. So, Oleg, you'll go ahead and launch the build earlier in your day? Yeah. Okay, great. I'll launch it at approximately that time, UTC; that's enough time for it to complete. Great, that's wonderful.

The release profile is ready; Damien and I reviewed it today and merged it. The changelog and upgrade guide PR is in review, but I believe it's ready to merge. If you've got time, Oleg, I would love one more check that I addressed all of your concerns.
I marked most of them as resolved because I covered them; there were one or two I left open for further thought from you. Thank you for all the comments. Yeah, I'll take a look. Great, thanks.

Actually, one of the important things was to get my protocol-removal note into the upgrade guide, and Daniel reviewed that and said he was okay with the way I had phrased it. Okay, that's great. To be clear, it's the removal of some protocols; we still have SSHD? Yes, you're right: it's the SSHD module upgrade, which removes several SSH protocols; more precisely, it removes an HMAC algorithm and some key exchange algorithms. One of the reasons I mention that is the upcoming removal: there is a JEP submitted by JC about converting the SSHD module to a plugin. One of the consequences of that is that, by default, when running Jenkins you'll have no SSH CLI mode; in order to get SSH CLI mode, you will need to install a plugin. It's loaded as a detached plugin, so my understanding is that it will still be there, like in the very beginning, but eventually it might impact some use cases in configuration management tools which try to communicate over the CLI; I'm not sure whether any of them uses SSH. Yeah, that's something we need to keep on our radar in the core team, but it shouldn't be a problem for now. Great.

Anything else on LTS release prep for tomorrow? Okay. Windows Docker images, Gareth? There's not much to update on this since last week; the PRs for the Jenkins Docker builds are waiting to be merged, but I think we were going to hold off until after this LTS release has gone out. Great. That's about all I have to update on that one for the moment. Okay, thank you. Hopefully later on this week, then.

On the next topic, we'd wanted to host it with Tim, and Tim isn't here. Oleg, would you like to defer this till next week, or do you want to just go ahead and make some notes on it?
Is it helpful without Tim here, or best that we wait till he's here? We can continue. Okay, all right. So yeah, I can summarize how it works: it's basically just a GitHub Action which triggers on the release and publishes the release assets every time you cut a release on GitHub Releases. We have Release Drafter, which prepares the changelog draft, and once you cut the release using this changelog draft, another GitHub Action goes and automatically finds the release assets and attaches them.

Is that hinting that I made a mistake in publishing the release? I went ahead and published the GitHub release today, just as a matter of routine; was that okay, Oleg, or should I have delayed? So, when you publish the release, you need to be sure that the packaging stage has finished, so that you have the artifacts in place in Artifactory. Well, actually you need the publishing stage for that, because it also publishes the installer artifacts. So once the entire pipeline is completed, then you can just cut the release and everything gets attached automatically. What this means is that we still have an action item on the backlog for automatic changelog publishing; right now we have tools for that, but we haven't really finished it. Once we finish that stage, you basically have the entire pipeline in place. To be honest, I'm not sure that we should really be using GitHub Actions to upload artifacts in the final implementation; we should move it to Jenkins.

That makes sense to me, because, to your note, I did ensure that the packaging stage had completed, but the GitHub Action, if it were automated, might not be able to ensure that the packaging stage is completed, because it's completely asynchronous. Right; since our release infrastructure is behind a VPN, there is no practical way for GitHub Actions to even access it.
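The ordering constraint described above, only cut the GitHub release once the packaging and publishing stages have produced every artifact, boils down to a simple presence check before publishing. A minimal sketch, with illustrative artifact names standing in for whatever the real pipeline produces:

```python
# Sketch of the "is the pipeline done?" check discussed above.
# The expected artifact names are illustrative, not the pipeline's actual output.

EXPECTED_ASSETS = {"jenkins.war", "jenkins.deb", "jenkins.rpm", "jenkins.msi"}

def missing_assets(available):
    """Return the set of expected artifacts not yet produced; an empty set
    means the pipeline has finished and the release can safely be cut."""
    return EXPECTED_ASSETS - set(available)

# Mid-pipeline, two artifacts are still missing:
print(sorted(missing_assets(["jenkins.war", "jenkins.deb"])))
```

Running such a check from Jenkins itself, after the publishing stage, sidesteps the asynchrony problem: the pipeline knows its own stages finished, whereas an independent GitHub Action would have to poll from outside the VPN.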
Yeah, in a GitHub Action we could go to the Maven repo, check for the existence of the artifacts, and then publish them. Or we can do it the other way around: on the Jenkins side, using the old GitHub command line, you can do the draft publication remotely from Jenkins, as well as chaining and triggering, so you can do everything synchronously on the Jenkins side, while it's harder the other way around. And there's already a pipeline library which allows for releases; I'm not sure how good it is, I have never tried it. But I think that, since we are the Jenkins project, our pipeline should be on Jenkins; we should integrate it once we have the communication in place. Good, thank you.

Now, there was some discussion about how to disable distribution of a release. As far as I could tell, the mailing list seemed to be satisfied with that disable process. Any objections, or exceptions to that, or places where people are worried that we'd need a way to hide a release sometime after it has been delivered? I haven't followed this discussion. Okay, all right.

Damien had pointed out that there might be a concern about storage limits, but he did the research, and the GitHub limits don't look like they'll affect us at all. They won't let us upload a file larger than two gigabytes, which is not a problem for us; there's apparently no limit on bandwidth, and as far as I can tell none on the number of files, so I think we're fine.

One question, probably best for Tim and Oleg, the two of you together: could techniques like this be used for plugins, and would it be useful? Firstly, yes, it could; we already have a plugin delivering its files like this, the plugin installation manager, and it's well run. But again, if we think about the release process, we should deliver the release artifacts from Jenkins pipelines. We now have continuous delivery infrastructure for some plugins, from the recent JEP by Daniel and Jason.
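On the storage-limit point above: since GitHub rejects release assets larger than two gigabytes, a publishing script could verify sizes before uploading. A small sketch, with an illustrative file size (the exact byte accounting here is an assumption of the sketch, not GitHub documentation quoted verbatim):

```python
# Sketch of a pre-upload check against the 2 GiB per-asset limit discussed above.
# File names and sizes are illustrative.

LIMIT_BYTES = 2 * 1024 ** 3  # 2 GiB per release asset

def oversized_assets(sizes):
    """Given {filename: size_in_bytes}, list the names exceeding the limit."""
    return [name for name, size in sizes.items() if size > LIMIT_BYTES]

# A Jenkins war of roughly 70 MB is far below the limit:
print(oversized_assets({"jenkins.war": 70_000_000}))  # prints "[]"
```

An empty list means every asset fits, which matches the conclusion in the meeting that the limits don't affect the Jenkins release artifacts.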
And we could probably integrate it there. But if you just want to have the HPI file published on GitHub alongside the changelog, you can just follow the same pattern. Thanks.

Okay, I had one more topic on the Jira migration: we had an issue with email notifications from the Jira system. Tim reported it was resolved, but I've not seen indications in my own experience that it's resolved. So, are other people receiving the notifications and subscriptions from Jira that they might expect? I can say I'm receiving notifications, so something is definitely working. Great, okay. You're receiving subscriptions, you said, and posts? Great. Okay, all right; those are the things I'm not sure I'm seeing, but I may just have a configuration error. Thank you. Any other comments? Yeah, I just want to say that the email configuration in Jira is quite complicated, because there's also per-project configuration; unless you go and check everything, you can't be 100% sure.

Any other topics we should discuss here today? All right, I think we can call the session done then. I'll post the recording about an hour or an hour and a half from now. Thanks, everybody. Thank you.