Hi everybody, thanks for joining this meeting. The first topic on the agenda today is about Jenkins infrastructure Docker images. I'm not sure if it was Daniel or Damian, or Garrett, who added that topic to the agenda, but basically the idea is to build a specific Terraform Docker image that we could use inside the Jenkins infrastructure project. It sounds like it was Damian. I don't think there is anything important to add here; this is something I discussed with Damian last week. The idea is that if we need a specific Docker image, that's fine: we can publish it on the Jenkins infrastructure Docker Hub organization, as long as we don't put credentials in it, which is something we should avoid anyway. Since Damian is not here to elaborate on the topic, I propose that we continue to the next one.

Something I've been thinking about over the last week is the current status of Confluence. We are still hosting the wiki, even though I don't think anybody is still using its content. What I would like to do is keep the Apache service running in front of Confluence, but stop the container.

I would lobby strongly against that, if only because I've got data showing that it is still very actively read, both by Google's search engines and by human beings. Gavin Mogan created a metrics collector that extracts data from the logs, and I've been collecting it for the last six or eight months, so I can give you some numbers. I could put them together for our next meeting, or send them by email, to show which pages are still being accessed; many pages are still being hit hundreds of times a day.

Basically, my fear is that it's becoming a snowflake. Nobody is updating Confluence. It's an unmaintained service running in our infrastructure, connected to our Apache service.
The idea is that if we are planning to keep it inside our infrastructure and maintain it, then we should find some time to properly update it. The good thing is that I don't think we have any major blockers to updating the service. We haven't worked on Confluence over the past year because we knew we wanted to transfer JIRA to the Linux Foundation; that is maybe something we could also do for Confluence, if we know that we still rely on it but don't want to maintain it anymore. But anyway, as a short-term solution, I think we could just update the image and find some time to work on it.

Is your concern with... because the service is read-only for 99% or more of the users, I wasn't overly concerned about even doing an update. We've been actively moving content out, using hack events like Hacktoberfest and the UI meetup to do that, but we've still got a lot of content that we need to move out of that thing. I guess we could conceivably extract it and make it just an Apache server serving static pages; I just don't know how to do that with a Confluence server without spending a lot of energy.

The reason why I would be interested in either keeping Confluence maintained or stopping the service is that from time to time there are security vulnerabilities. For instance, last year, at the beginning of September, there was one where it was possible to read the content of every file on the machine, including files containing passwords and that kind of thing. While I'm not aware of such a vulnerability at the moment, I also know that if we don't update that machine on a regular basis, we are exposed to these kinds of issues. So even though the service is now in read-only mode, I don't trust it if we're not updating it regularly, which we haven't been doing for Confluence. So either we decide to keep maintaining it, or we just stop the service.
The reason why I was suggesting stopping the service is that I know we still have many URLs targeting that endpoint, but we redirect those requests, for instance to the plugin site. So what I was suggesting is that we could keep the Apache service as a front-end, and we would still allow people to access Confluence, but, let's say, through an SSH gateway or from the VPN. The idea was to still provide the service, but to stop having it reachable from the internet. That's why I was suggesting it.

What's that? Okay. So it feels like what you're saying is we really do need a plan to get off of Confluence. Yep. And if we can't get off of Confluence, then we need a plan for how we maintain it. But yeah, we can't just keep running it the current way. Right. Okay.

So, up to now, the primary value of that site, at least for the Docs SIG effort, has been that it includes documentation that exists nowhere else. We've migrated some of that to jenkins.io, but the migration process is non-trivial because much of the content is outdated and we can't just copy it verbatim. Okay. That's why I put the Docker and Apache item, which you can remove from the notes. Okay, got it. I just wanted to keep that in mind. Okay. You were saying?

I was going to offer: would it be okay if, just like we did with the JIRA project, we do a Confluence migration project? Let's bring a proposal to the community and discuss what it should be. Sure. Because I know we don't want to make it read-write again; that is a time sink we just don't want. A plan like the JIRA plan. Yeah. There are many things that we learned about using Confluence at the scale of the Jenkins project, and we definitely don't want to make it writable again. Is it okay, since I chair the Docs SIG, if I take it as my responsibility to propose a Confluence migration plan? Sure.
Here I just wanted to bring the topic to the table so we can identify what needs to be done. So feel free to discuss it there. Excellent.

The next topic, unless someone has questions on this one in particular, is about the Azure storage issues we had almost a month ago. What we did after the outage was to temporarily run get.jenkins.io on different machines, and what I did over the last weekend was to redirect the DNS traffic back to the Kubernetes cluster. So the Azure File storage issue has been solved. The only thing is that right now we don't know the cause: Azure support wasn't able to tell us why we got that issue or how to prevent it from happening again in the future. Basically, they just explained the root cause, which is that we are limited to a maximum of 2,000 open files at the same time, so we were probably affected by a file leak somewhere. What I find strange is that this service has been running since March last year, and I still don't understand what changed one month ago. But yeah, at least for the time being, we reverted the service, so the traffic is going through the community cluster again, and we'll try to monitor it.

At least the good thing from this outage is that we put a few things in place to catch these kinds of issues in the future. We now monitor whether we can download packages: we have an alert that is triggered if the latest package has not been available for more than 30 minutes. That's one of the things we put in place. We also deployed a status page, status.jenkins.io. The reason for that was that we realized during the outage that it was quite difficult to communicate about the ongoing incident. Now we have a status page where we publish any information related to an outage or a maintenance window. Right now it contains most of the information, so feel free to look at it and provide feedback. Any questions? No.
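The "can we download the latest package?" alert described above could be sketched roughly as follows. This is a minimal sketch, not the production monitoring code; the exact Maven repository path and the WAR download URL on get.jenkins.io are assumptions based on the discussion.

```python
# Minimal sketch of the "is the latest release downloadable?" check.
# The repository URL and WAR path are assumptions, not the real alert code.
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

# Assumed location of the Jenkins core maven-metadata.xml
METADATA_URL = ("https://repo.jenkins-ci.org/releases/"
                "org/jenkins-ci/main/jenkins-war/maven-metadata.xml")

def parse_latest(metadata_xml: str) -> str:
    """Extract the <latest> version from a maven-metadata.xml document."""
    root = ET.fromstring(metadata_xml)
    return root.findtext("./versioning/latest")

def war_url(version: str) -> str:
    """Build the expected download URL on get.jenkins.io for that version."""
    return f"https://get.jenkins.io/war/{version}/jenkins.war"

def latest_is_downloadable() -> bool:
    """Return True if the latest released WAR is served by the mirror."""
    with urllib.request.urlopen(METADATA_URL, timeout=30) as resp:
        version = parse_latest(resp.read().decode("utf-8"))
    req = urllib.request.Request(war_url(version), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

An alerting system would then run this on a schedule and fire only after the check has failed for more than the 30-minute publication gap mentioned above.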
Then let's talk about the next topic, which is the Git repository used to manage the community cluster. We identified a few issues, specifically over the past week. The first one is that we are having a hard time identifying when the release of an update is failing. This is displayed in the job, but we are still trying to work out how we can detect it automatically. A similar approach exists for the Puppet code: when Puppet does an update on one of our machines, we receive a notification in IRC saying that the machine was updated. This is not something we have right now; we have to connect to a specific service to fetch that information. So that's one of the issues.

The second issue is about testing. Until today, we relied on contributors manually testing their changes. That does not work anymore because we have more and more people contributing to the Git repositories, so we are thinking about ways to test the different changes; if you have any suggestions here, they are more than welcome. The first test that we put in place was to run some linting on the YAML files. This already caught some errors, though obviously not all of them. But if you have suggestions here, that would be nice.

The third element where we had issues over the past few weeks is the way we manage the cluster: we basically run a Jenkinsfile from a Jenkins job every 30 minutes, and we activated the multi-branch pipeline. What that did was run the same command from all the Git branches of that repository, and we have some old branches that we still have to clean up. We are using a tool called updatecli to update the different versions, and so we were updating based on old configuration, based on the Helm v2 configuration.
So that's why we saw many PRs opened in the Git repository that, instead of upgrading to the new version, tried to upgrade to an old Helm chart version. This is something we are working on. Basically, what we would like to do is to only build the master branch and the pull request branches; we are looking at that right now. So the result of all this is: please carefully review PRs right now. If you notice anything wrong, that's the right time to report it; just pay attention, basically. Any questions so far?

So Olivier, I've been exploring Oracle Cloud and was wondering: would it be reasonable if I were to attempt to construct a prototype version of the Jenkins Kubernetes cluster on a completely different cloud? Is that safe, or is it really not safe and I should not even consider it? I was thinking of it just as an experiment, not a production thing. Is it a safe thing to do, or should I avoid doing that kind of thing even as a test?

So basically, what I fear specifically for Oracle Cloud is that we don't really have credit cards: either we have sponsorship, or we need to ask for money from the Linux Foundation, which means we need a process with the Linux Foundation to pay. Yeah, and I'm in their trial period right now, so it's free, and it's on my card, so I don't mind the financial side of it; that I'm not at all concerned about. I was more concerned about any threat to safety or reliability if I were to attempt to stand up my own Kubernetes cluster on a completely different cloud using our definitions. No, no. As long as you make sure those are correctly configured. Maybe we can review the cluster together once it's in place, before we deploy the resources. But otherwise, I don't have any major concern.
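As a toy illustration of the kind of pre-merge YAML linting mentioned above: the real setup would likely use a dedicated linter such as yamllint, but even a small stdlib-only check can catch a few common mistakes. Everything here is a hypothetical sketch, not the project's actual tooling.

```python
# Toy YAML sanity check: a stand-in for a real linter such as yamllint.
# It only flags tab characters, trailing whitespace, and duplicated
# top-level keys; a real linter checks much more.
import re

def lint_yaml(text: str) -> list[str]:
    problems = []
    seen_top_level = set()
    for lineno, line in enumerate(text.splitlines(), start=1):
        if "\t" in line:
            problems.append(f"line {lineno}: tab character (use spaces)")
        if line != line.rstrip():
            problems.append(f"line {lineno}: trailing whitespace")
        # crude detection of a top-level "key:" (no leading indentation)
        match = re.match(r"^([A-Za-z0-9_-]+):", line)
        if match:
            key = match.group(1)
            if key in seen_top_level:
                problems.append(
                    f"line {lineno}: duplicate top-level key '{key}'")
            seen_top_level.add(key)
    return problems
```

A CI job would run such a check on every changed file in a pull request and fail the build when the returned list is non-empty.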
My biggest concern at the moment is, if we decide to switch to Oracle, what the billing workflow would be. Yeah, and I'm not anywhere near ready to go further with that yet; I'm still just curious whether it will even work. So yeah, I agree that this is more of an exploratory prototype with something someone has offered, money that we can use freely during this trial period and then throw away.

I'm doing something very similar at the moment. I've taken what configuration I could find from ci.jenkins.io and, was it Tim or Alex's PR, to convert it into a Kubernetes-based cluster? Yeah. I've had to uplift some of that since it was originally created, but it seems to be deploying. I think the biggest issue I'm having at the moment is around secrets: trying not to take any secrets from the current system or push anything up. But it is allowing me to test out the JCasC and Job DSL config to correctly apply all the traits that we need to do that master branch and pull request stuff. It's quite a good test bed at the moment. So if you want me to show you what I've done, Mark, I can do that at any point. Sure, I would be interested. Yeah, that would be great. So thanks, Gareth. Yes, I would love to have that discussion; maybe we record it and talk through the topics. Excellent. Thanks. Back to you, Olivier. Thank you, and excuse the side trip.

The next topic is about RC stable releases, but since Tim is not here, I will just briefly explain what I'm working on and then wait for Tim. Basically, I would like to build a release candidate from release.ci. The question that I have right now is which Maven repository to use. Apparently I'm supposed to use the same snapshots Maven repository on repo.jenkins-ci.org.
Something that annoys me is that it uses a slightly different version naming pattern. Also, I'm wondering if we should build and publish distribution packages for the release candidate versions. Right now, when you go to get.jenkins.io, you can see that we publish those for the WAR file, but we don't publish them for the Debian and Red Hat distributions. What I'm wondering is: is it because the release candidate was triggered by Oliver Gondža and he didn't have the correct credentials to sign the Red Hat and Debian packages, or is there another reason? My guess is that it's just because of credentials, the GPG key and code-signing certificates, and that we could easily build and publish them from release.ci, but I'm just gathering information at the moment. I created a JIRA ticket, so I'll probably post it in the Jenkins release channel.

As far as I know, it is only because he did not have permissions. But I don't know if there's anything compelling about making the release candidates available as packages, because practically speaking, the number of people who actually test the release candidates is dismayingly small, based on feedback from the mailing list, and they all use the WAR file. I see no harm in doing it, but I would not worry about it or make it a critical piece of the delivery, at least based on what I've seen of the responses to Oliver's past calls for testing of the release candidates.

My only fear is that I don't want a majorly different procedure for a release candidate than we have for a stable or a weekly release; if we can reuse the same process, that would be perfect. In that case, then it sounds much better to say yes, let's deliver the deb, the RPM and the MSI. I assume we would ignore the Docker images because they're outside of that workflow, but that makes perfect sense to me then. Yeah, absolutely. Okay, perfect.
Then let's move to the last topic of today's agenda, which is the Jenkins release. Today we did another weekly release, and everything went well. Again, we have a dashboard, the Datadog dashboard, that we can use to track that information; I'll put the link in the notes after the meeting. Basically, what I'm monitoring there is this: I fetch the maven-metadata.xml from the Maven repository and look at what the latest version is, let's say version 2.271. Then, based on that version, I try to download the Debian, Red Hat and Windows packages from get.jenkins.io. If I can download that version from get.jenkins.io, then it means they are available. There is a gap of around 20 to 30 minutes between the moment we release to the Maven repository and the moment we publish those packages, so right now the alarm is only triggered after 30 minutes. If something is still wrong after 30 minutes, then we get notified that the latest release hasn't been pushed. On the dashboard, the value is just one or zero: most of the time it should be one, and if the latest version cannot be downloaded, then it's zero. I don't know if you have the link? I put the link in IRC. I do, I'll paste it in later; I don't want to disturb my windows lest I pop something on screen that's sensitive. Okay, perfect.

Otherwise, I think we covered everything for today. Yeah, but the most important thing: I propose to not have this meeting next week or the week after. Objections from anyone? I mean, it doesn't mean that nobody will be on the chat; it's just that we won't break major services, or at least we'll try to avoid that, and we'll respect that not everybody will be responsive over the next two weeks. So that's the thing. Any other questions? I'm going to count to three. One, two, three. Thanks for your time and see you on IRC. Bye-bye.