I just started the recording. How's that, better? Yep, better. So, it's recording now. Hi everybody. Just a few things before we start; I have several announcements. First, I announced the maintenance window for next Monday on the mailing list. Basically, it will affect jenkins.io, the account app, and the telemetry service. Normally everything should be fine, but who knows what will happen when we test the services. The second thing: almost everybody has completed the poll regarding the meeting time slots. I think only Oleg is missing.

I voted, yeah, just one hour ago.

Okay. So now I can show you the results. I think we will just keep Tuesday for now. I made the poll public so you can see everyone's answers, but we'll probably keep it on Tuesday; it seems to be easier. So that's it regarding the poll. I was also wondering if it would be a good idea to hold this meeting every two weeks instead of every week. I don't know if you have any input on this.

For me, it helps to meet weekly. I'm not up to speed enough yet on infra, and last night's SSL outage on issues.jenkins.io reminded me that I'm not watching the monitoring systems. There are all sorts of things that could take us by surprise, so I'd be happy to keep it on a weekly basis. I assume for you, Olivier, it's maybe 30 minutes of overhead, but for me it's 30 minutes of keeping myself intensely focused.

Okay, perfect. Then let's keep it every week, that's fine. So, let's continue with the other topics. The main task I'm working on right now is migrating the whole community cluster to the new one. As I said, there is a maintenance window next week. As part of that migration, we are also dropping evergreen.jenkins.io; we will not migrate that service. That's the main thing I'm working on next week. Otherwise, everything should be fine.
The other big initiative we are working on with Timja and Alkay was deploying the refactored image for plugins.jenkins.io. We first tried to deploy the service in the morning, then we discovered some issues which were specific to that environment. So we deployed a new service called beta.plugins.jenkins.io in order to test everything there before we upgrade plugins.jenkins.io. I think everything is fine now; at least the error that we found is not related to the refactoring work. So after the meeting, I will probably upgrade that service as well. That was the second initiative. And finally, the third topic: now that I have access to the Azure account...

So, plugins.jenkins.io will switch to a statically generated site as the primary site shortly?

Yes, that's the proposal, that's the goal. If you want to have a look, you can just go to beta.plugins.jenkins.io.

It looks great. It feels very responsive. How's the generation process?

It's great. The only downside that I've noticed, which is not related to the static site: it seems like sometimes the documentation is not rendered on the front end, so you just see a link to the GitHub README. I think it's related to a GitHub rate limit or something like that, but there is no way to tell from the logs; we don't have that information from the application. So we should probably export it. There is a task somewhere to add the GitHub REST API limits to monitoring.

So, if I understand correctly, switching to the static plugin site puts additional stress on that, because each time you generate the site, you do a lot of GitHub requests during the run?

That's one part of it. The second thing is that we have multiple containers running on the Kubernetes cluster, so each time we query, for example, the plugin-site API service, the request can go to a different container than the previous one.
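On the monitoring task mentioned above: GitHub's REST API has a dedicated `GET /rate_limit` endpoint, and querying it does not itself consume quota. A minimal sketch of what an exporter could read (the endpoint and response shape are real; the function names and any metric wiring are illustrative assumptions):

```python
import json
import urllib.request

GITHUB_RATE_LIMIT_URL = "https://api.github.com/rate_limit"

def core_quota(payload):
    """Extract (remaining, limit) for the core REST API bucket
    from a /rate_limit response body."""
    core = payload["resources"]["core"]
    return core["remaining"], core["limit"]

def fetch_rate_limit(token=None):
    """Query GitHub's /rate_limit endpoint; this request itself does not
    count against the primary rate limit."""
    req = urllib.request.Request(GITHUB_RATE_LIMIT_URL)
    if token:
        # Authenticated requests get a much higher quota than anonymous ones.
        req.add_header("Authorization", f"token {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (makes a network call):
#   remaining, limit = core_quota(fetch_rate_limit())
```

Feeding `remaining`/`limit` into the existing monitoring stack would make it obvious when the plugin-site generation run is exhausting the quota.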
So, we have multiple containers running at the same time, and they all do basically the same queries. Right now, it's really not efficient. It would be interesting to find a way to generate those requests only once and cache the results, and to make sure the cached result is available for all the containers.

Okay. So, if you want to submit some patches to the plugin site, what is the current process? Just submitting pull requests towards the master branch so that it lands in beta? Or is it something different?

So, right now... the beta will be disabled once we merge everything. I will not keep the beta website: once we upgrade plugins.jenkins.io, we won't have beta anymore, because deploying it is a manual procedure, at least for now; we never put an updated process in place for that. So you just work on the Git repository and the plugin-site API, publish the container image, and then update the Helm charts that deploy that image.

Okay, that's good. I will update the documentation at the end.

Yeah, and if I understand correctly, the mobile layout was also fixed by Gavin Mogan? I see some differences in layout.

Yeah, mobile is usable now. Mobile is a lot better. The look and feel of the desktop view is a bit different, but I wouldn't say it's a blocker. At least now you can search for information on mobile, and that's a really huge improvement, because in the past, when you were looking for, let's say, the Kubernetes plugin, you had that huge menu and it wasn't possible to access the plugin information behind it; you just had to type letters that were going invisible. That's a real improvement to the plugin site.

Yeah. So, I'll submit a couple of bugs. What I noticed is that basically there is no link back to the plugin search or landing page from the plugin pages. But, yeah, it's a tricky thing to fix.
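One possible shape for the "generate once, cache for all containers" idea discussed above is a small TTL cache on storage shared by every container, for example a Kubernetes volume mounted into each pod. This is only a sketch; the cache path, environment variable, and TTL are assumptions, not the plugin site's actual implementation:

```python
import json
import os
import time
from pathlib import Path

# Assumed shared location, e.g. a volume mounted into every pod.
CACHE_DIR = Path(os.environ.get("PLUGIN_CACHE_DIR", "/tmp/plugin-cache"))
TTL_SECONDS = 3600  # illustrative: refresh upstream data at most hourly

def cached_fetch(key, fetch):
    """Return the cached JSON value for `key` if it is still fresh;
    otherwise call `fetch()` once and store the result where every
    container can read it."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / (key + ".json")
    if path.exists() and time.time() - path.stat().st_mtime < TTL_SECONDS:
        return json.loads(path.read_text())
    data = fetch()
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(data))
    tmp.replace(path)  # atomic rename: readers never see a partial file
    return data
```

With something like this in front of the GitHub calls, only the first container to miss the cache pays the API cost; the others read the shared file until the TTL expires, which also eases the rate-limit pressure mentioned earlier.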
I think that was intentional.

Really?

I think so. There was some discussion; the pull request has a lot of discussion, if you want to take a look.

Yeah, I'll take a look. Since it gets embedded in Jenkins, I can understand that. So, I'll take a look.

Another topic that I worked on over the week was granting access to the Azure account to multiple people. I granted Global Administrator to Oleg, as a board member, and to Alex Earl. I granted access to Oleg and Alex because they are involved in the infrastructure, and it is useful to have them there. I also granted access to Jesse, because he was investigating some issues with the incrementals Azure function. But at least for now, that's all. The main reason is that we cannot put fine-grained control in place over who has access to what. Either you have access to everything, or you have read access to everything; you cannot say, okay, you only have access to these specific services, for example, for now. So there are two possibilities: either we stick to the current workflow, where only a few people have access to the Azure account, or we decide to put some rules in place, and then we have to upgrade the Azure Active Directory subscription to a more advanced tier with more features. That means paying either $6 or $9 per month per user. And obviously, putting those rules in place and adding more people is also time-consuming. So, at least for now, we'll just keep working as we are. But it means that if anything goes wrong with any of the services running in Azure right now, we can bring in Alex or Oleg.

Yeah. There are some runbooks about Azure. I have access to them, but I'm not sure about Alex Earl; maybe it's something to discuss with him.

For the people who don't know about the runbooks: we maintain some documentation for the people working on the infrastructure projects.
Basically, the runbooks were put in place to help solve any issue, like something reported by PagerDuty, whatever. They were mainly focused on the Puppet infrastructure, so most of them explain what to do when one of the services is down. But we should definitely update them with all the new services that are now running on Kubernetes. So, I will check whether Alex has access to them, and I will also check whether we can restructure and update the documentation there.

So, if you are interested in being on call with PagerDuty, feel free to ask. The main advantage of being on call is, of course, that you will see what kinds of issues come up, but it also means that I will have to grant you different accesses: to the runbooks, the infrastructure, and so on. There are two kinds of access. Either you have access to the Azure account, or you also have access to the machines themselves. So, let's say, for example, you SSH onto the ci.jenkins.io machine and look at what's wrong with the Let's Encrypt client that will not generate a new certificate. Those are the kinds of issues that we get from PagerDuty alarms, and we try to fix them when they occur. And if we cannot fix them properly, we just update the runbook to say: okay, if this happens, do this.

And one last thing about PagerDuty. Something that we put in place was a follow-the-sun policy: people are only on call during their day-to-day working hours, like 9 to 5, and we try to spread the load across the different time zones. But the main challenge we have right now is that most of the people who used to help on the Jenkins infrastructure project no longer have the time to work on it.
So, if you are interested in helping with those tasks, just ask, and we can definitely organize a small meeting to onboard you and explain what to do when something is wrong.

So, in order to be inserted into the PagerDuty rotation, I just need to send you an email, or send it to the infra list?

Yeah, sure, just send an email to the infra list. There are no strict rules in this case, because if you are on PagerDuty and have to deal with some issues, I also have to grant you privileged access, like admin on the machines and so on. There is nothing like, okay, if you contribute for three years, you get access to the infrastructure. So I prefer an email on the list rather than privately: if someone wants to raise a concern about the person, they can say so, and otherwise we just grant access. I think it's better and more transparent.

Great.

Otherwise, that's pretty much all from me. I don't know if you have anything else; we can do a round table if you want to speak about something specific.

Is there any news on the certificate for the release process?

You mean the code signing certificate?

Yes.

So, I had a discussion with Dan Lopez last week. DigiCert and the CDF are still in contact to move this forward, but nothing yet at the moment. They opened a new issue, so I can just share that with you. I'm sharing it now; this is the issue. I told Dan Lopez that if it does not move forward, I mean, if it does not work out, maybe it's time to look at a different provider. I have no visibility into what the issues are, just legal ones, I don't know; I don't have much information there. But it's definitely blocking the work on the automated release process. Any other questions? Otherwise, I think we can stop here. Stopping the recording in one, two, three. Thanks for your time, everybody. Thanks. Bye.