Let's start this meeting. Hi, everybody. So welcome to this new Jenkins meeting. The main topic will be to discuss the automated Jenkins releases and how we can automate them. We were blocked because we were missing a code-signing certificate, which was provided by the CDF last week. So we are now looking at all the next steps. We started a new document that contains all the notes related to this. The document is not public to everyone; if you're interested in participating, just send a message and you will be added to it. The main reason I don't want it to be public by default is that I want to be able to discuss security or private matters related to this project. But once the automated release is shipped, that document can become public again. So if you want to participate, feel free to ask for permission. I can just share the document for now. Can you see my screen? Yes. Our release automation discussion. The idea here is to list all the missing steps we need before going live.

The first thing I did is a few reminders about the different repositories that we have. The first one is docker-packaging, which contains the Docker image used to run all the packaging scripts; that's quite a big image. The second repository is jenkinsci/packaging, which contains all the scripts to build the Debian, Red Hat, SUSE, and Windows packages. We have jenkins-infra/release, which contains the bash scripts used to release, the Jenkinsfiles used to trigger the release and packaging jobs, and the pod template definitions that we use to run everything on Kubernetes; that's the biggest part. Then we have jenkins-infra's private secrets repository, which contains encrypted secrets used in the release environment.
Not all the secrets are defined there; we try to split the secrets across multiple locations. Some are in that repository, others are hosted in Azure Key Vault. And finally, we have jenkins-infra/charts, which contains the definitions to deploy the release environment and all services related to it. So if you have some time, or you are interested in a specific part of this project, feel free to look at one of those repositories, review it, and verify that everything is fine. And if you have any suggestions, feel free to document them here so we can discuss them.

We started listing a few to-dos: things we have to do on those repositories, or minor bugs we have to fix. The main thing we have to do now is switch certificates: we were using self-signed certificates, a temporary GPG key, and so on, and now we have to switch to the official ones. So I updated the code-signing certificate to use the real one, and I generated an official GPG key. Now we are running the release process and the packaging process to be as close as possible to what they will look like once we go into production. Depending on the outputs, and on the different people who will review this project, we will be able to go live either at the end of the week, or in two weeks, whatever; we still have a lot of uncertainties before we go into production. Do you have any questions so far? Nope, so I can continue.

As I was saying, the idea now is to identify what the first release looks like. The releases are defined in a directory called profile.d, and inside we have three different release profiles right now. Right now I'm focusing on experimental, but once the experimental release is validated we can move forward; the file is the same for the two other releases.
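The profile.d layout described above can be sketched as a small shell environment file. This is an illustrative sketch only: the variable names and values below are assumptions, not the actual names used in jenkins-infra/release.

```shell
#!/bin/sh
# Hypothetical release profile, e.g. profile.d/experimental
# (variable names are illustrative, not the real jenkins-infra/release ones).

# Git repository used to build the release (a fork while testing).
export RELEASE_GIT_REPOSITORY="https://github.com/example-fork/jenkins.git"

# Identifier of the GPG key used to sign packages and repository metadata.
export RELEASE_GPG_KEYNAME="0123456789ABCDEF"   # placeholder key id

# Maven repository receiving the built artifacts.
export MAVEN_REPOSITORY_URL="https://repo.example.org/experimental"

# Release line: "weekly", "stable", or the new "experimental" line.
export RELEASE_LINE="experimental"
```

The release scripts would then source the profile matching the requested release line to pick up these settings before building.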
The most important settings we have here are the Git repository that we are using to build a release (right now I'm using my fork, but that's just for examples), the GPG key that we are using, the Maven repository where we are pushing the artifacts, and finally the release line. The release line is the name of the release; right now we only have stable and weekly releases. In this case, I'm creating a release line called experimental, so we can officially push, for example, to pkg.jenkins.io to validate that everything is working fine. Once we are ready to go into production, we just have to make sure that the right settings are defined here and that the user used for releasing has the right permissions. But yeah, right now we are still doing some tests.

This is what the release environment looks like. The service is only available from the VPN. Right now it only has two jobs: one to trigger a release and one to trigger packaging. The release job is now working... where is this? So it's now working, and it's using the official code-signing certificate and a real GPG key, so we can test artifacts, if you want to have a look. Basically, this instance is only available from the VPN, but it's public to everybody who is on the VPN. For now, we decided to be as open as possible. If for some reason we realize that the instance is at risk, we can put it in a more secure place, but for now we want to be sure that people can look at the build outputs. So if you want to trigger a new release, it's as simple as this: you just run a build, and there is a parameter that will ask you which release you want to run. Is it experimental?
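Triggering the parameterized job from outside the UI can be sketched with Jenkins' standard `buildWithParameters` endpoint. The host, job name, and parameter name below are assumptions for illustration, not the real instance's values.

```shell
#!/bin/sh
# Sketch: trigger the release job with a release-profile parameter.
# JENKINS_URL, job name, and parameter name are hypothetical.

build_url() {
    # $1 = job name, $2 = release profile (experimental, stable, weekly)
    printf '%s/job/%s/buildWithParameters?RELEASE_PROFILE=%s\n' \
        "${JENKINS_URL:-https://release.example.org}" "$1" "$2"
}

# Only perform the actual call when credentials are provided.
if [ -n "${JENKINS_USER:-}" ] && [ -n "${JENKINS_TOKEN:-}" ]; then
    curl -s -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
        "$(build_url release experimental)"
fi
```

`buildWithParameters` is the stock Jenkins remote API for parameterized jobs; the same trigger works from the UI's Build with Parameters page.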
Right now, only experimental is enabled, to be sure that we do not conflict with the stable or the weekly releases; once we're ready, we can just add those parameters for the stable and the weekly. Regarding the packaging job, right now I'm having some issues with the GPG key. Basically, I was using the default GPG key provided with the scripts, and now that I've defined a different GPG key, I'm having some issues that I'm currently fixing to be sure that we are using the right one. The job looks like this right now; it's loading.

So what are the different missing steps here? As you can see, the builds are broken right now; this issue is related to the GPG key, we are using the wrong one. The next step after that is that I will have to be sure that we are publishing to the right location. Something I started working on while we did not have the code-signing certificate was to redesign pkg.jenkins.io, so we could split the different services into different locations on Azure. I did not have time to finish that work, so basically what I'm going to do is just change the publishing script so we copy all the files to pkg.jenkins.io, and we can move forward. I wasn't able to do all the work I wanted before going live, so now we are looking at different shortcuts and seeing what really needs to be done now and what can be delayed to later steps.

Is there any question regarding the automated release for now? Alec, we can't hear you. Sorry, no, it's fine. So we have some action items on our side to proceed. Bosmy and Mark will be spending some time to help. But yeah, I guess any external contributions would be great, especially testing the builds once they are available and providing feedback, because all the flow is open source. So any audit, any comments would be much appreciated.
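The GPG problem described above usually comes down to which key `gpg` picks by default once a second key is in the keyring; passing `--local-user` makes the choice explicit. A minimal sketch, with a placeholder key id and file name:

```shell
#!/bin/sh
# Sketch: build the signing command explicitly so the packaging scripts
# cannot fall back to the default key in the keyring.
# The key id and file name are placeholders.

sign_cmd() {
    # $1 = GPG key id or fingerprint, $2 = file to sign
    printf 'gpg --batch --local-user %s --armor --detach-sign %s\n' "$1" "$2"
}

# To check which secret keys are actually present in the build container:
#   gpg --list-secret-keys --keyid-format long
```

Printing the command before running it also makes the job log show which key each build actually used, which helps when debugging a broken signature.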
So yeah, right now we are really at a point where we try to audit and test that everything is working fine. So if you have any experience with Kubernetes or Java, whatever, this is probably a good time to jump in. Another thing that I really want to identify and make sure of is that at least two people know and really understand each of the different parts of this project: the infrastructure parts, the environment, the services, the process, whatever. We also need to be sure that several people can change passwords and understand how the credentials work. So if you have any questions or concerns, this is really the right time to ask, and we will be able to address them. This is currently my priority; I would like to finish this work as soon as possible, so this is my focus for the coming days.

If there are no other questions, I propose to move on to the two other topics that I put on the agenda. The first one is about Rackspace. You may have seen the message on the mailing list: Rackspace has been a sponsor of the Jenkins project for the last 10 years or so, and in November they announced that they would stop sponsoring open source projects. They came back to us last week proposing a new sponsorship: the idea would be to no longer be sponsored by Rackspace directly, but by Spinup, which is a different offering from Rackspace. We still have to see what the conditions would be, but basically they would provide some machines for the coming years. This would allow us to not have to migrate archives.jenkins-ci.org, and it would save us some time, which is more than welcome at the moment. We don't have to do anything other than that, so that's great.

And regarding CI: we are still in the process, too.
So we are now using Amazon EC2 for the Linux machines and for the Windows machines, using the EC2 plugin. We are having a really weird bug at the moment: for some reason, the machines stop working after a while, and we just have to relaunch the agents. The machines themselves are working fine, no disk space issue; the machines are totally correct, it's just that they get disconnected after a while. So we also have to work on this to understand what's happening in our case. It may be related to some latency issues between Amazon and Azure, but yeah, that's definitely a weird issue, and with the automated release project it's kind of difficult. So if you have any experience using Amazon and Azure together, you're also more than welcome to help. We are facing some bandwidth issues right now on the different projects. So that's pretty much all for the past week. Any questions?

No? So I'm assuming I'm going to continue trying to investigate those EC2 agents being unreliable, after I get my initial checks done on the current prototype builds of the release automation outputs. I'm going to try to write some tests to assert that those signing setups are correct, and watch the tests fail with the current build outputs. Those are on my plate. And I'll keep watching those EC2 instances and restarting them during my hours. Thank you. Or reconnecting them; they're not even restarting.

Olivier, I think I understood that those EC2 instances are sometimes actually recycled: that they are destroyed and then recreated as machines, that they periodically go away and a new machine comes online. So basically the plugin is configured to request a new instance when it's needed. And what's happening here is that the machine is correctly provisioned, it's correctly attached to the master, it's correctly used, and after a while the machine is disconnected. But the machine is still there; it's still running on Amazon.
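The watch-and-reconnect routine mentioned above can be partly scripted against Jenkins' standard remote API. The instance URL and credentials below are assumptions; `launchSlaveAgent` is the stock endpoint for relaunching a node's agent, and the sketch requires curl and jq.

```shell
#!/bin/sh
# Sketch: list offline agents and ask Jenkins to relaunch them.
# URL and credentials are hypothetical; requires curl and jq.

JENKINS="${JENKINS_URL:-https://ci.example.org}"
AUTH="${JENKINS_USER:-user}:${JENKINS_TOKEN:-token}"

offline_agents() {
    # -g stops curl from glob-interpreting the [] in the tree filter.
    curl -sg -u "$AUTH" \
        "$JENKINS/computer/api/json?tree=computer[displayName,offline]" |
        jq -r '.computer[] | select(.offline) | .displayName'
}

relaunch_url() {
    # URL used to ask Jenkins to relaunch a given agent's connection.
    printf '%s/computer/%s/launchSlaveAgent\n' "$JENKINS" "$1"
}

# Uncomment to actually reconnect every offline agent:
# for agent in $(offline_agents); do
#     curl -s -X POST -u "$AUTH" "$(relaunch_url "$agent")"
# done
```

Run from cron, this at least automates the manual reconnect loop while the underlying disconnection bug is being investigated.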
And it's not even deleted; sometimes the machines just keep running for hours and then get reused, and so on. I'm not sure if I configured a timeout to delete the machine after a while, but last time I checked it was not working. But yeah, it's not fresh in my head anymore. No problem, thank you. I can read the configuration and look at it myself, so that's not an issue. Thanks. You should have access to the Amazon account now. I do.

So one thing to keep in mind is that we basically have no single-shot agents, and currently, with the Cloud API for EC2, you cannot configure them, because EC2 implements the Cloud API without such hooks, and there is basically no way to do that. So you say we have no single-shot agents; that's what I think I heard you say, Olivier. For EC2, no. Okay, great, thank you. So agents might be recycled. Yeah, even if you set one executor and shut the agent down right after completion. Okay. And that is different from what we have with the ACI agents, right? Because if I understand correctly, the ACI agents that are running on Azure are single-shot; they are single use. Is that correct? So the ACI agents are containers, basically. ACI is just a container service. Right now the account was configured to use Azure virtual machines, which are the equivalent of EC2 instances on Amazon, and we are also using ACI on Azure. The next step would be to either move to Kubernetes or maybe use the Amazon alternative to ACI. Super, thank you. Thanks for the clarity, Olivier.

And yeah, I was relying on the EC2 instances being reused; my mental model was that they are static agents for at least the lifetime of the virtual machine. I just see that sometimes the count of machines connected is less than at other times, and I had assumed that was the EC2 plugin destroying and recreating them. The EC2 plugin destroys instances when you don't need them anymore. Got it, thank you.
The only thing that I noticed is that when the agent is attached to the master and used, then either the agent goes into a disconnected mode and is not cleaned up after a while, or it just idles with nothing to do, and then the EC2 plugin correctly deletes the agent. So that's why sometimes you see five, six, ten machines. Another thing that I also did was to reduce the number of Azure instances that you can deploy; the main reason for this is to better control our costs on the Azure accounts. Thank you. Thanks very much.

Okay, otherwise, if you don't have any other questions, I propose to stop the meeting here; we don't have to hold this meeting for 30 minutes if we don't need to. Yeah, one question about the security audit: Mark had to drop, but he might need some guidelines and some access to do that. So if we could summarize what exactly needs to be explored and how access will be provisioned, that would be great. I will contact him and discuss with him how we can help and what kind of access he needs. Okay, nice. Thanks, everybody. Thanks for your time. Have a good day.