Hi everybody, there are a few things we need to discuss today in the Jenkins Infra meeting. The first thing I want to announce is that we officially switched the mailing lists from Mailman to Google Groups. Posts sent to the old Mailman lists are still accepted, but you should receive a message saying that the mailing list is deprecated and inviting you to use the Google Groups instead. We'll keep working this way for the next few weeks, and then we will complete the switch.

One question: I'm not sure if it's been fixed, but when the group was first set up, clicking reply was set to reply privately, not to reply back to the group. Yeah, definitely. I changed that this morning based on the comments made by RK, so now replying goes back to the group.

One thing about the current state: does it still block messages from non-members? You mean on Mailman, on the old mailing list? No, on the Google Groups. As of one week ago, if you sent a mail from an address that is not a member of the Google Group, it bounced. Maybe we could allow emails from everyone? I'm fine with that. It's definitely an option; I can configure whether you need to be registered to post. I just applied the same configuration as Mailman by default, but I'm totally fine with not forcing people to register, so I'll change that.

Maybe a related question: what about the other Mailman lists, like the ticket notification lists and the JAM mailing lists? Do you have some time to handle those? I don't have time to work on that tomorrow, but if you file a Jira ticket, I can do it either by the end of the week or next week. The ticket was submitted a few weeks ago, that's why I'm asking. If you put the ticket link in the Google doc notes, I will find it, and I'll just work through the different points specified in the ticket.

The next thing is the mirrors. I have a working application for the mirrors, correctly up and running on Azure. So either we replace MirrorBrain with that new tool, mirrorbits, or we have another option, which is using Fastly as a CDN to distribute packages. I'm discussing with the team on the Fastly side, and they would be interested in sponsoring the Jenkins project; the idea would be a contract for the first year, which then renews month to month. We could use Fastly to distribute all packages and also improve the speed of our websites like jenkins.io and the plugins site, but I still have to plan a meeting with them. So that's the current state: either the Fastly solution works and I stop working on mirrors, or it doesn't, and then we replace MirrorBrain with, for example, mirrorbits, which seems to work very well and scales way better than the current solution. And if we switch to mirrorbits, we could, for example, enable HTTPS mirrors and drop HTTP by default. I think for users, though, the CDN would be a much nicer experience; it would be a lot faster.
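To make the mirror-versus-CDN trade-off concrete, the mirror model boils down to a redirector in front of a mirror list. Below is only a toy sketch of that idea, not mirrorbits itself: real tools add GeoIP-based selection, health checks, and checksum verification, and every hostname here is invented for illustration.

```python
# Toy sketch of the mirror-redirect model (what mirrorbits does at scale,
# with GeoIP and health checks). All mirror URLs are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical HTTPS-only mirror list; a real redirector would pick the
# closest healthy mirror per client, not simple round-robin.
MIRRORS = [
    "https://mirror-eu.example.org/jenkins",
    "https://mirror-us.example.org/jenkins",
]
_next = 0

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global _next
        mirror = MIRRORS[_next % len(MIRRORS)]
        _next += 1
        # 302 (not 301) so clients re-resolve against the current mirror list.
        self.send_response(302)
        self.send_header("Location", mirror + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

A CDN like Fastly removes this extra hop entirely by caching the files at its own edge nodes, which is why it tends to feel faster to end users.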
Fastly would definitely be the best solution, because right now we don't have a lot of mirrors, and the people who offered to provide mirrors in the past few years each said they would prefer we use a CDN and drop the mirrors approach. Now that we have to pay for the infrastructure ourselves, the mirrors approach is obviously way cheaper than a CDN, but I still have the feeling that if we can work with Fastly, then at least for the next year it will be a bit better. OK. That's it for the mirrors and Fastly updates.

Another thing we discussed several weeks ago is enabling third-party application access restrictions on the Jenkins and jenkins-infra GitHub organizations. Right now, any person who uses GitHub to integrate with a third party, Gitter or Sentry for example, by default grants that application access to the organizations. Because we never configured that restriction on the jenkins-infra side, every access is allowed by default. The idea is to put restrictions in place. But once we do that, all the tokens granted against the organization will be broken, which means we need to set aside enough time, a few days, to review all the applications and make sure we renew all the tokens (a sketch of how we might inventory the installed applications follows at the end of this section). This is something we'd like to work on next week or so. Do you have any experience with that? No? OK, so this should be coming in the next days.

The next topic is the Rackspace sponsorship. As you saw, Rackspace apparently stopped sponsoring OSS projects, so we now have to pay for one of the machines in our Rackspace account, okra, which hosts archives.jenkins-ci.org, the fallback service for our packages. That machine is entirely managed by Puppet, so it would be trivial to move it into our Azure account and configure it there. In the current state we would just stop paying Rackspace in order to pay Microsoft, because in the end we are paying Microsoft now anyway. But at least it will simplify the billing process, because right now KK has to be reimbursed through the SPI, so moving it simplifies the management from KK's point of view. I think it would be really trivial to move, so this is something we should work on in the coming days. Any question on this specific topic?

Otherwise, there is work happening on the Jenkins Azure repository; you've just spent some time on it, so would you like to explain the work you did recently? I've done a couple of bits recently. One was to upgrade the Terraform used in the Azure repository, just so we can clean it up a bit and fix some of the issues; it hadn't been upgraded in quite a while. The other is to start looking at Packer images so that the Azure VM agents can start up quicker and can be upgraded more easily as well. That's really nice. The only issue right now is that since we stopped applying the Terraform code from ci.jenkins.io, we have to trigger it manually and sit next to the job to be sure everything works fine.
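Coming back to the third-party access restriction mentioned earlier: before flipping the restriction on, it could help to inventory what is installed today. This is a minimal sketch, assuming an org-admin token in a GITHUB_TOKEN environment variable (the variable name and the requests dependency are our choices, not anything from the meeting). Note it lists GitHub App installations; classic OAuth app authorizations, the ones the restriction actually invalidates, are reviewed from the organization settings page.

```python
# Hypothetical inventory script: list GitHub App installations on an
# organization before enabling third-party access restrictions.
# Assumes an org-admin token in the GITHUB_TOKEN environment variable.
import os

import requests

ORG = "jenkins-infra"  # the same review would apply to the jenkinsci org

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/installations",
    headers={
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()
for inst in resp.json()["installations"]:
    # app_slug identifies the integration; repository_selection is
    # "all" or "selected", i.e. how broad its current access is.
    print(inst["app_slug"], "->", inst["repository_selection"])
```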
So I think we should split the Terraform code: on one side, non-critical resources that can be updated automatically and where a deletion by mistake is fine; on the other side, resources like the Kubernetes cluster that must not be deleted by mistake. Because right now it's not transparent; it's not easy to see what is there. There's too much stuff in one place. Yeah, so I think we should definitely try to split that repository into different repositories. For example, what I have in mind is that everything regarding the DNS configuration could easily be automated and generated, for example from ci.jenkins.io (see the sketch after this section).

So, in your mind, Olivier, would ci.jenkins.io then have the ability to mutate those resources in Azure? Either we could do that, or we could deploy a smaller Jenkins instance inside a VPN so we can delegate access to someone: all the people who have access to the VPN could change the infrastructure. But the thing is, for some resources, like the DNS I just mentioned, if a record is deleted, monitoring will complain and it is really easy to redeploy, which is definitely not the case for machines like the VPN or the Kubernetes clusters. So there are resources where we need validation before changing them.

Yeah, I agree with that point. I don't know, with the service principal that we created for Terraform, how granular the access we delegate could be, because right now Terraform uses an app and service principal that basically gives it root inside the Azure account. If we were able to create the service principal such that the Terraform pipeline could only access Azure DNS, then I think that would be fine. I agree with your point there; my only concern is if that credential can be used for mutating other resources.

Yeah, you raise a valid point, which is the way we manage access. Right now we have a really basic way to manage it: we just use a basic group in Azure, so either you have access or you have admin access. If we want better control of who has access to what, we should probably use a higher tier of Azure Active Directory, which will cost us a few bucks, but I think it would be better to delegate permissions, so we could say, for example, you only have access to this Kubernetes cluster. This is something I think we should work on. Yeah, it would be very good to be able to define fine-grained access control. Right now it's really: you have access to everything, or you have read-only access. I already gave read-only access to a few people, but those are really trusted people. So I really think we need fine-grained control of who has access to what. I think we have to create a ticket for that specific thing.

Yep. I now realize I totally forgot to mention something regarding the infrastructure and the way we are paying for it. Basically, the CDF agreed to pay the bill for our infrastructure for the next six months, but we need to go below 10K per month. They accepted to temporarily pay the 20K per month, but we definitely need to reduce the cost of our infrastructure. If we get some credits from Amazon, it would already be easier to manage, but we have to find ways to reduce the cost. And six months, of course, means May.
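As a sketch of the "DNS configuration generated from ci.jenkins.io" idea above: keep the records as data in a repository and have a CI job render them into zone entries, so a deleted record costs one pipeline run to restore rather than an investigation. All names and addresses below are made up for illustration.

```python
# Minimal sketch of DNS-as-data: records live declaratively in the repo
# and a CI job renders them into zone-file lines that can be re-applied
# at any time. Every name and address here is a hypothetical example.
RECORDS = [
    # (name, type, ttl, value)
    ("www",     "CNAME", 3600, "example-frontend.azureedge.net."),
    ("mirrors", "A",     3600, "203.0.113.10"),
]

def zone_lines(records):
    """Render (name, type, ttl, value) tuples as zone-file lines."""
    for name, rtype, ttl, value in records:
        yield f"{name}\t{ttl}\tIN\t{rtype}\t{value}"

if __name__ == "__main__":
    print("\n".join(zone_lines(RECORDS)))
```

Because the rendered output is idempotent, DNS is a good candidate for the "non-critical, auto-applied" repository, exactly as argued above; clusters and VPN machines belong in the repository that requires human validation.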
Yeah: December, January, February, March, April, May. So the deadline is May.

Olivier, you mentioned in the chat, and we've talked a little about this before, setting up AKS or having agents spin up on Kubernetes. Do you think that should be done before or after we convert ci.jenkins.io to configuration as code? I think it should be done after, because right now our Jenkinsfiles are designed to use Docker running directly on the machine. So I think it would be easier to first use EC2 to deploy agent machines, which will reduce the cost on our Azure account, and at the same time work on configuration as code, so that other people can refactor things, or use specific labels for specific repositories, and so on. But yeah, we should just switch from Azure VM agents to EC2 instances. My main concern with that architecture is that we saw in the past that we sometimes have latency between cloud providers. Basically, for years we had the master running in Amazon and the agents running in Azure, and now we have the master in Azure and we are moving the agents to Amazon. That's my concern, but it will just be easier to use EC2 to begin with. Otherwise, that's pretty much all from me. I don't know if you want to talk about anything specific.

Have you heard anything about the certificate for the automated release? That's a good question, and last time I checked it was no, not yet. So no news on that side. I should check again; usually I would just ping him directly in a case like this. Yeah, I saw you asked a couple of weeks ago with no response. Yeah, no response since the 6th of February. This is something I should bring up with him directly.

The reason we have to talk about the automated release now is the cluster: initially we deployed the automated release environment on the Kubernetes cluster that we now also use for all the public applications. So we have to deploy a new Kubernetes cluster in a private environment, used only for trusted applications; we have to redeploy the cluster and the environment in a more secure VPN, a more secure network. That's something we need to do, and we still have a few things to work on. The release part seems to be working; for the packaging part, we still have to work on the way we publish artifacts, and this will mainly be influenced by whether we get Fastly, because then we don't have to handle mirrors. So the way we will distribute packages is not really clear at the moment and should become clearer in the coming weeks.

In a similar vein, and Tim, thanks for mentioning the release certs because I had forgotten about that: I've been watching Kohsuke and JFrog go back and forth about Artifactory, and I had also seen an alert about jenkins-ci.org certs expiring. We've got two sets of manually created certs that we've made in the past: the wildcard jenkins-ci.org certificate and the Artifactory certificate. So Kohsuke has already created the repo.jenkins-ci.org certificate, correct? Yes, that one he already created, and normally it should already be configured as far as I know. The other certificate is for the LDAP service, and the reason is that when I refactored the LDAP container, I used the static configuration.
So if we want dynamic certificates, generated automatically (with Let's Encrypt, for example), that could work now: in the past we were using the HTTP-01 challenge to generate certificates, and now we switched to DNS-01, so we can have certificates even for private environments. So we could have a certificate for ldap.jenkins.io, but we need a way to inject that configuration into the LDAP container. Theoretically it's possible with the new LDAP configuration, but they totally changed the syntax and I don't know it very well.

So what I'm really getting at is: do we need to go buy more certificates? Because I think the expiry that I saw in an alert was the 15th. Normally there are two certificates that we need. I just have to check the LDAP one; I did not know that it was a wildcard, but I'll check. Okay. Will you let me know tomorrow or something? Then I can go buy the certificate and put it in the right place if we need it. When is the deadline? I think the alert I remember seeing said the 15th was the expiry, but I couldn't find that alert if I tried right now. Okay. So I guess we are good for now. If you don't have any other things you want to discuss, we can stop the meeting here.

Has anybody discussed with the developers moving away from this JFrog online service, or are we just going to leave it as is? Is there any reason why you would want to move away from that service? I mean, for me, it works. Didn't we get the certificate installed? It wasn't clear to me that it was. It has not been installed. Yes, I agree, Artifactory has actually worked very well, except when we have to ask them to do something. We went through the exact same problem the first time we installed the first certificate, so they clearly haven't gotten any better. Buy three-year certificates next time, not two-year ones. I thought that because we gave them both new certificates at the same time, instead of different ones, it was solved, but they didn't manage to figure it out: they couldn't update it without the key and wanted us to send them the key. Whoever is working in Artifactory and JFrog support clearly doesn't understand how certificates work. I have to review this. There's no emergency here, and I don't think we need to run away screaming from Artifactory, but the recent history shows that we need to start on renewals well in advance to be sure the work is done before the deadline.

Tim, you were just looking at that certificate; do you know when it expires? I'm just looking at the one from KK. I think it was March, quite early in the month. Just pulling it up... it's March 2nd. Which is not that far away. No, it's closer than I thought. Okay, I guess what it sounds like to me is: if somehow JFrog cannot get their shit together, we might want to send an email to the dev list to let them know that they may see an expired certificate and it's okay, don't freak out. Before it happens, I think we can escalate it within JFrog, because if needed, we have ways. There's always Twitter. We have better ways than Twitter. Are you sure? If I had to pick, Twitter would be the best way. I think the CloudBees executives have some direct lines to JFrog; they're all in the same WhatsApp group.

Okay, so I propose we stop the meeting here and go back to IRC. Please add any topic you want to discuss next week to the Google document; the video will remain available.
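On the expiry question discussed above, here is a quick way to read a certificate's notAfter date from a live endpoint, using only the Python standard library; the hostname is just an example of the kind of endpoint being checked.

```python
# Sketch: check when a server's TLS certificate expires, the kind of
# manual check discussed above. Pure standard library; host is an example.
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host: str, port: int = 443) -> datetime:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Mar  2 12:00:00 2020 GMT'
    ts = ssl.cert_time_to_seconds(cert["notAfter"])
    return datetime.fromtimestamp(ts, tz=timezone.utc)

if __name__ == "__main__":
    expiry = cert_expiry("repo.jenkins-ci.org")
    days_left = (expiry - datetime.now(timezone.utc)).days
    print(expiry.date(), "-", days_left, "days left")
```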
See you later.