Hello everyone, welcome to the Jenkins weekly infrastructure team meeting for the 19th of March 2024. Around the virtual table we have myself, Damien Duportal, along with Hervé Le Meur, Mark Waite, Stéphane Merle and Bruno Verachten. We are five, perfect. So let's start with the announcements. Weekly release: the WAR and Docker images are out, if I'm not mistaken, Mark. Correct, they're out, and the other packages are being synced to the mirrors. I believe the version is 2.450. It is, yes, that's correct. Is the changelog published to jenkins.io? There are always a few steps. It should be; I'll double-check now that it is. That's a good question. It is now published to jenkins.io, and the permalink version is also available, so yes, 2.450 is there. Perfect. The container images should likewise be published by now; I'll check just to be sure. You can continue and I'll let you know if there's a problem. Perfect, thanks Mark. On the announcements, a reminder that the OSUOSL mirrors are affected: one of the two is down, right? Yes, exactly. And the container images are available, and have been for two hours. Cool, so that means, Stéphane, you have the go for updating images. It might have an impact on tomorrow's LTS packaging phase: we'll have to wait an hour for propagation, is that correct? It certainly works that way, because with one OSUOSL instance not getting the release immediately, propagation to the other sites is slowed. Perfect. But thankfully it doesn't stop it, which for me was quite impressive; it just slows it. Last week we had the same problem, and six of seven mirrors got last week's release within 24 hours of the build, so it appears none of them are strictly dependent on exactly one of those OSUOSL instances. Yep, that's how I understand it. Cool. Is there any other announcement, folks? Nope. Okay, the calendar: we should have 2.451 next week, on the 26th of March 2024. The next LTS will be 2.442, and it's tomorrow.
So that will be the 20th of March 2024. Kris Stern is the release lead and he's on track. Okay. Mark, may I ask you, while I'm taking notes, to ping Kris tomorrow to ask around which hour he wants to start? When it's Alex or him, it's better if they start the release as soon as possible on their day, since it takes two to three hours; then, when it's morning in France for Stéphane, Hervé and me, we can start the day with the packaging to watch and the Docker images, instead of waiting until later. I will do that. Is that okay for you? Yes, thanks. Folks, is there anything else about the next weekly and next LTS? Okay. Let's have a quick look at the Jenkins security advisory announcements. We had one slot, but no announcement, so nothing here. And the next major event, if you want to meet the team members, should be Devoxx France; I'm not even sure we will have a presence at Devoxx France in Paris. cdCon is in April, 15 through 18 or 16 through 18, in Seattle. Oh, nice. And I'll be there, as will Dheeraj Singh Jodha, one of our former documentation contributors; he's presenting a talk at GitOpsCon. Nice. And you said Devoxx France? I can give you the dates for that: Devoxx Paris, 17 through 19 April. Do we have a presence? I'm not aware of one. Were you planning to go? I don't know if Hervé grabbed a ticket to Devoxx or not. I'm definitely not going to be there. Adrien, maybe? No. Okay, so nobody from the team at Devoxx; I propose that we just remove it. Right. I'm checking because I might have to travel that weekend... but no, that's the week after, so I won't be there either. Let's remove it then. Anything else on the calendar or announcements? Okay. So now let's look at the tasks we were able to complete this week; I'll take them in the order of the notes, that's easier. "Plugin site build is failing": Hervé or Stéphane, can one of you give us a summary of what happened?
Last week we switched the job building the plugin site, which deploys from a pod on the private Kubernetes cluster, to an Azure virtual machine agent. As this VM didn't have access to the file share, the job was failing. We authorized the VM's virtual network to access the production file share, and now it's working. Fixed and green. A word on the post-mortem: there has been discussion inside the team on why we didn't catch that problem earlier, and a proposal to improve that in the future. If I understand correctly, Hervé, in the area of the "monitoring private instances" task, you proposed that those kinds of builds export a small report with, for each build, the status and date-time, so we would have data; a monitor would then check it and, if the status is a failure or the data is older than 24 hours, send a page to the team. Is my understanding correct? Yes. As we don't have that many important builds to monitor, this is easier than my previous attempt at reporting private-instance jobs, which needed too many permissions to be a viable option. I like that. So it's like a mailbox, a folder where we drop job statuses from the private instance into a publicly readable location, and then Datadog just reads the public location. Yes. I like that; it's elegant, because if someone were to compromise the public location, all they're doing is compromising the data, they're not getting access inside our jobs. Nice. Do we need another post-mortem item for this one? Because the root cause is something that can happen. In that case it was a tricky one, honestly. Now we know, and we improved the Azure network restrictions by adding comments on why each network is allowed. I don't see something that could have been done differently.
Me checking on the main branch, that's part of the learning process; it's an area you had never worked with before. That's why I think the real actionable for the team, something we can really do instead of just saying "we should do better", is: let's have a monitor to protect us. Everyone will feel better, because if something breaks and we miss it for whatever human reason, we catch it early and we are proactive. Right, I agree. Well, that idea is almost a general-purpose thing: could we configure every private job to report its status into some tree on reports.jenkins.io? That's exactly what I did in my previous attempt, but the really big issue with it is that I was using a Jenkins token, so it's a no-go, because we can't manage its lifecycle without requiring high permissions. Got it. So that would be worth a plugin, and I believe you searched and found a plugin which is unmaintained. Alas, there was a plugin that would add a post-build step to every pipeline, which could have done the job, but it's an old plugin. Yeah, my suggestion here is to add a post step that reports, but it implies modifying pipelines, while my previous attempt used the Jenkins API without needing to modify every single pipeline. Okay, got it. Anyone watching who wants to help us take care of that plugin, to get an automatic after-build report, is more than welcome. Unless Mark or Bruno are really in a good mood. So the proposal is to start with the simple approach, because the number of jobs to be modified is quite low: we have a few on infra.ci and a few on trusted.ci, and that's something that doesn't change often. It won't cover the case of new jobs or job changes automatically, but at least we'll have something on important jobs like this one, so that will be a first-step improvement. Any other question on this one? Thanks for the work on this, folks.
Thanks Hervé, because you caught it this morning and you put in additional hours today on this one, so many thanks. Next: the SSL certificates on ci.jenkins.io and assets.ci.jenkins.io were not renewed, so thank you Mark for handling that. The renewal had lapsed, so that's my fault. Each time we create a new virtual machine, which requires migrating data, I always forget that Let's Encrypt creates a new account for certificate generation based on the virtual machine name, since that changed. The problem should be fixed in the Puppet Let's Encrypt module, but that requires Puppet 7 or 8, because the module needs to be able to select an existing account, which ours cannot do. So on each VM migration, our Puppet module for Let's Encrypt created a new account and renewal failed, because one of them did that. I fixed it manually, and I attached the logs and the files I looked at to the issue, so if anyone else hits the same problem they can search for the issue, find it, and see what I did on the machine: I removed the secondary account that had been created and re-validated the certificates, with details in the issue. A word on the post-mortem: I believe we are still relying on Mark-based monitoring here. It's really useful, and thank you for that, Mark, but once again we need machine-based monitoring and alerting for this kind of thing in the future. So I see the improvement here as HTTPS checks with machine-based monitoring, rather than waiting on Mark-based monitoring. Because if you lose electricity at your house, we would both be in trouble here. If I lose electricity at my house, I don't have just a serious problem: the Jenkins project has a less serious problem, but I have a very serious problem if I lose electricity at my house.
Any question, or need for more detail, on this issue? Thanks Mark for the summary. Next: there was a test project on Jira that needed to be removed, to avoid newcomers opening issues in the wrong place. Thanks Alex for mentioning it and for the summary; I removed the project and it's done. We had a permission issue, a plugin maintainer permission request, handled by the Jenkins CI admins if I'm not mistaken. A Jenkins password issue, of course, that one was fixed. Stéphane, Kubernetes 1.27, what's the status? It's closed. You know what, I forgot to attach the Kubernetes changelog. Okay, it's done. We finished this morning with the public K8s on Azure: we did one cluster yesterday on Azure, and today the public K8s upgrade went very well. We took advantage of this operation to bump the versions of Falco and ingress-nginx. Sorry, the charts. The charts, yes. Nginx. And Grafana, yes, I was looking at Grafana as part of the operation. So you should have the log on the issue. And we should think about Kubernetes 1.28: I propose we take a one-week cool-down, and next week we start creating and planning the issue. Is that okay for you, Stéphane? Yes, that's perfect. May I give you the responsibility of doing that next week? Yes, thanks. If one of you wants to take care of the next one, Kubernetes 1.28, I don't mind; I have no advice or opinions, though I will still check with you to make sure I have no advice or opinions. Okay, cool. Reminder created, it's done, and we'll see next week. New mirror in Singapore: Hervé, can you give us a quick status? I don't remember the provider's name... it's 3D, I think. Yes, 3D, that's it. They came back two years later to tell us they were ready to host the mirror.
And after adding their information for the mirror, it took some time, but the mirror became available, and it's running well. Cool, so another mirror for users in that region, thanks for that. And the story here is a nice one, because by GeoIP the two Singapore mirrors are only a few kilometers apart, so they will each be used by roughly 50% of the consumers in India and around; what we've effectively done is reduce the load on Servana by about 50%. So it's very kind of them to have done that, and it's a very, very good improvement. Quick question, Mark: is there a plan to add this mirror provider to the sponsors page? Yes. The mirror providers are a separate section, independent of the sponsorship tiers we proposed; there are several tiers, and then, in addition, there's a mirrors section, because it's a very different thing. Okay, so to add them we have to think about the sponsors page; is my impression correct? Yes, but we don't need to think about that ourselves: the governance board has an action item to review how sponsors are presented on the jenkins.io website entirely, because the model we present now is very simple: there's one big logo, or nothing, while there are different levels of Jenkins sponsorship and we should show that more clearly. JFrog is a very large sponsor; CloudBees is even larger than that. So we need to present these things accordingly. And Basil Crow has a prototype he's working on, shared with the governance team. Cool, thanks. Any other question on the 3D mirror in Singapore? Okay. We have a few more issues that were closed. There was one reporting that no plugins were being released, but that was because plugins were published in the update center yet not visible on the plugins.jenkins.io site. Is that correct, Hervé?
Can you repeat the question please? Sorry. Yeah, it was because the plugin site wasn't rebuilt. Okay. Next, a request for the latest plugin health scoring in the application logs: that was opened by Adrien, with the same root cause, the plugin site wasn't rebuilt. The plugin health scoring did its job and computed the scores; it's just that the publication wasn't rebuilt. And Adrien's notes remind us that there is another publicly visible location where you can read the plugin health score: plugins.jenkins.io is very convenient, but there's another public location where people who want to see it even faster can do so. Cool, a good thing to point developers to. Then three Jenkins account issues, with either no answer or expected to be handled on accounts.jenkins.io. And a whole issue was closed around bridging IRC channels on Matrix: the bridge doesn't exist anymore on the IRC side either for our channels, so there's nothing to do. Anything else on the closed or done issues? Okay. So let's go on to the work in progress; as usual, I'll try to follow the order. The top priority for us is the update center. Hervé, could you give us a quick update on the status of the new update center after the work you did last week? The pull request synchronizing the Azure file share and the two buckets in parallel with the current storage is merged, and I've configured the update center jobs on trusted.ci.jenkins.io to run this sync in parallel, and to run every 5 minutes instead of every 3, as the job takes a bit more time now. The storages are in sync with the currently used one; we can now do more validation and testing. Nice. Nice job, so that means we are now going to prepare the test phase. Just a question: what's the status of the crawler, is it updating at least once a week? Yes, for a few weeks or months already, updating all the tool installer metadata. Cool. So the next step, if I understand correctly, is to start testing the new mirror system to see how Jenkins instances will behave.
You did an initial test during the first phase. So now the idea is to eventually start consuming the new mirror system with our own controllers, run a performance benchmark of the new system, and work on the JEP. In particular, one point we need to address beyond the usual, something we started discussing earlier this morning with Hervé: what will the fallbacks be if we cannot pay for Cloudflare, or if Cloudflare doesn't want to sponsor us? We need exits if that happens, so we need to describe the different cases here. Which ones can we afford? Which ones require creating sponsorships? What would the eventual cost be if we used our current infrastructure? You mean adding a node not from them? For instance, but the question is what the fallback will be if we have an issue with Cloudflare; we need to cover this element as part of the JEP. I would have added that before. What do you mean? I would have added another node from another company to provide that fallback. That's an implementation detail; it's what we have to discuss, write down and propose in the JEP. Since neither you nor I did a full, proper review of what Hervé already wrote in the JEP, we have to do it. And we need to plan for a rollout: we need to start planning to use the new system during one hour at a given time and see what the effect is. Does it break user installations? Then, depending on the result, see if we can do a full-day rollout next. Did I miss something? Is there anything else, Hervé, that we should plan for that task? No, it looks good to me. Okay. Damien, just to be clear, the new mirror system is testable already, right? I can get to it and see it on Cloudflare for experimental purposes? Yes, though not directly on Cloudflare: you would point at the mirror system on Azure, which would redirect you to Cloudflare, because that's the only available mirror. Great. Okay, thank you.
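The redirect behavior described here (an Azure entry point redirecting clients to Cloudflare as the only registered mirror, with a possible fallback node from another company added later) can be sketched as follows; every hostname in this sketch is a placeholder, not a real endpoint.

```python
# Sketch of the mirror-selection logic discussed above. With only
# Cloudflare registered, every request is redirected there; a fallback
# node would simply be appended to the ordered list of mirrors.
MIRRORS = [
    "https://cloudflare.example/jenkins/",  # placeholder, not the real URL
    # "https://fallback.example/jenkins/",  # hypothetical exit if Cloudflare goes away
]

def pick_mirror(available: set[str], mirrors: list[str] = MIRRORS) -> str:
    """Return the redirect target: the first mirror currently available."""
    for mirror in mirrors:
        if mirror in available:
            return mirror
    raise RuntimeError("no mirror available; serve from origin instead")
```

The ordered list is what makes the JEP's "exit" question concrete: deciding the fallback means deciding what the second entry is and who pays for it.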
Hervé, I believe you will continue on this, or do you want to start handing over tasks, except the JEP parts that we need to review since you wrote them? I'm okay to continue on this. Cool. Next one, still Hervé: blobxfer and azcopy. What's the status on this one? I'm still working on the script update to add azcopy in parallel with blobxfer. It's not ready, but it should wait until after the LTS release anyway. Waiting for after the LTS release, then. When this is ready, azcopy will have replaced blobxfer, and the next steps will be a cleanup of old storage accounts and permissions, a big cleanup: removal of blobxfer and storage accounts cleanup. Is that right? Cool. Do you wish to continue working on this, do you need help, do you want to share the burden? I can continue on this; I might ask for help when I get the full list of cleanup tasks. Just a note for everyone: that task surfaces a lot of things we'd like to improve in the Puppet area. We try to treat those as distinct tasks; I believe we could open issues, what do you think, on the jenkins-infra Puppet repo, separately from the helpdesk, because these will be improvements such as moving the scripts back to Puppet, and having the whole mirrorbrain script machinery managed by Puppet again, until we do something about that. But my proposal is that for now we continue with the repository, where versioned scripts can have pull requests and are auditable; we make the switch on pkg because it's not a blocker, and we improve on the Puppet side later. Anything else on replacing blobxfer with azcopy? Congratulations. Yes, that's a huge piece of work, so thanks for this one. Stéphane, ARM64, what's the status?
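A transition phase that runs azcopy in parallel with blobxfer, as described above, could look roughly like this sketch. The command-line arguments are purely illustrative (neither tool's real flags are implied), and the runner is injectable so the comparison logic can be exercised without either CLI installed.

```python
import subprocess
from typing import Callable

def run(cmd: list[str]) -> int:
    """Default runner: execute the real CLI and return its exit code."""
    return subprocess.run(cmd).returncode

def sync_with_both(source: str, dest: str,
                   runner: Callable[[list[str]], int] = run) -> dict[str, int]:
    """Transition phase: keep blobxfer as the tool of record while also
    running azcopy, so the two results can be compared before switching."""
    return {
        "blobxfer": runner(["blobxfer", "upload", source, dest]),  # illustrative args
        "azcopy": runner(["azcopy", "copy", source, dest]),        # illustrative args
    }

def ready_to_switch(results: dict[str, int]) -> bool:
    """Only retire blobxfer once azcopy succeeds on the same sync."""
    return results["azcopy"] == 0
```

Running both tools side by side is what makes the later cleanup safe: the old path keeps working until the new one has demonstrably succeeded.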
I'd have to go back a little, but I did one of the images, the one shared between infra.ci and ci.jenkins.io, the Docker web builder. That's the one that ran into the wall of not being allowed on the plugin site file share, so I'll have to double-check the deployment steps to make sure the new agent, whether a pod or a VM, has all the rights everywhere. But I got a whole other Docker image switched to ARM. So it's work in progress. Okay. Then jenkins.io: additional work on labels, etc. Maybe we will have to match some labels between ci.jenkins.io and infra.ci, but we need to make sure to use the correct names, so we can have unified pipelines. Because the challenge here is to ensure that we have the same environment on both? We can even keep the same pipeline, because we can use a label expression with both names, the one from infra.ci and the one from ci.jenkins.io, but we want the exact same name and the exact same kind of agent, to be homogeneous. The goal is to avoid having two different environments between infra.ci and ci.jenkins.io for the same build. Homogeneous? You don't say "homogenous"? Yep, that is the correct word, thank you. That's what caused, for example, the plugin site issue, one of these changes; it's a set of complex changes with impacts, but it's for the better, as it lets us streamline the update of the toolings.
I think Hervé caught a hidden error, which isn't quite an error: in one pipeline we have missing tools, I don't remember exactly, on the plugin site... typos and checkstyle. Typos and checkstyle, yes, so it's just a matter of adding them to the packer image, delivering them, and seeing the results. Yep. One warning: there is no arm64 release of typos. Nice. So maybe we will have to search for, or do something about, that; we need lint anyway. Stéphane, you mentioned another image in the next steps, another Docker image? I think you covered all... we see you but we can't hear you, looks like you have a sound issue. Or is it only me? You got the best part of me, I mean, it's good. I forgot the name, so I'll have to click on the link. No worries, it's here: I'm working on the jnlp-linux agents image. Yes, it's the jnlp-linux agents one. That one should be easy; I don't think we should have any pipeline... I will never say that anymore. Also, I saw, and I might be wrong, but I'm sure I saw a docker-hashicorp-tools job on infra.ci, so that one needs to be cleaned up, because you already did the heavy work as far as I can tell. Should I add a note on the issue, or do it myself? Just a note, so it's written somewhere. Thank you, nice work. Anything else, any question or clarification on ARM64 for the agents? Almost there. docs.jenkins.io, Hervé: I believe you didn't have time to spend on this; do you think you'll be able to do it for the next milestone? Yes, I think I'll at least start on it. Okay, let's continue next milestone. I want to bring up that the service principal used by infra.ci to spawn Azure agents expires on Thursday, so that one is top priority here. Hervé, can you give us an explanation on this one? We have a service principal whose password expires on Thursday; it's used by the infra.ci controller to spawn agents in Azure. It's in the definition of this service principal, so we need to update this password, then update the credential in the charts private repository: new credential, update the secret, deploy, test, and
profit. So we will have to do it this week. Is there someone with a particular willingness to do it? I don't mind taking it; it's just that if someone was really interested in doing this, I don't mind doing it as a pair. But that one doesn't require pairing or a team sync as far as I can tell; if there is an issue, I'll ask one of you two to pair with me if needed, but most probably it will just be a pull request to review. Any question? Next, we had an issue about IPv6 and MTU; my proposal is to close it. We have a user with an unusual MTU value, and when they use that MTU, their packets get fragmented at the entrance of the Azure virtual networks, because those use 1500: the packet sizes don't match, so packets are cut. As I shared with them from the Azure documentation, we have no actionable here: we cannot control the MTU of Microsoft's Azure datacenters worldwide. So I'm really sorry for that user, but I don't see anything actionable for us to help them, especially because Microsoft recommends not tuning the MTU, otherwise their accelerated networking will drop packets, which is what happened in this case. They do have a workaround: they are using an IPv6-to-IPv4 tunnel system right now for testing. They pointed me at a nice website, Tunnelbroker, where I was able to reproduce their issue, but there is no reason to have an MTU of 1460, so I don't see what we can do. If anyone else has an idea, I don't mind learning something, but right now I don't see what we can do, other than buying a datacenter and the whole network stack, hiring network engineers and doing everything ourselves. We've already got everything in Mark's basement. Mark, are you ready to tune the MTU of your ISP?
No, I am not, but thank you for asking; my MTU is going to stay exactly as it is, although I like my IPv6 very much. So, my proposal: I've asked the user if they see another actionable, because it looks like they somewhat know what they are doing at the low level, so they might have an actionable for us; other than that, I propose that by the end of the week I close the issue as nothing-to-do, unless we collect something. Is there any objection or discussion on that one? Yeah, I learned things along the way on IPv6, but it's almost the same as IPv4 for network tuning. So: to be closed by end of week unless the user has an actionable for us; alas, we cannot tune the MTU on Microsoft's networks, and it looks like Fastly is also affected by their issue. I mean, we won't change either Fastly or Azure datacenters; it's already good that we got IPv6. The next issue on the list is someone with a wrong email registration. I've asked them to create a new account, because they messed up the email. We use it as a verification, so unless they can prove ownership, which they can't with the wrong email, we'll see: if they create a new account, we'll be able to remove this one, because the date and time of creation matches what they described. If they don't, we will close the issue next week as no answer from the user. Thanks everybody for taking care of this one. Any question?
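The fragmentation mechanics discussed here can be worked through with a small calculation. The numbers are illustrative: a 6in4 tunnel wraps each IPv6 packet in a 20-byte IPv4 header, which is one common reason tunnel users end up with an MTU below the usual 1500.

```python
ETHERNET_MTU = 1500  # what the Azure virtual networks use, per the discussion
IPV4_HEADER = 20     # encapsulation overhead of an IPv6-in-IPv4 (6in4) tunnel

def effective_tunnel_mtu(link_mtu: int, overhead: int = IPV4_HEADER) -> int:
    """MTU left for the inner IPv6 packet once tunnel encapsulation is added."""
    return link_mtu - overhead

def is_fragmented(packet_size: int, path_mtu: int) -> bool:
    """A packet larger than the path MTU must be fragmented, or dropped
    when the network, like Azure accelerated networking, refuses to."""
    return packet_size > path_mtu
```

Under these assumptions, a full 1500-byte packet entering a 1460-MTU path must be fragmented, and a path that refuses to fragment will drop it instead, matching the behavior described for the tunneled user.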
Wait for user response; close next week if none. We have a new Jenkins mirror in Romania, which is being brought up by Hostico. Hervé, can you give us a status on this one? We are waiting for their response. And we have another one, from RDC, also waiting for a response. Waiting for response, then. I remember one of them needs to enable HTTPS, and the other needs to enable rsync or FTP and put the Jenkins folder in place. Okay. Can I let you create an issue for the third mirror, so we can track them separately and have an audit log? Emails are really hard to read. I don't mind you using email to exchange with them, but can I ask you to keep track of this? Unless you are full and busy, in which case you can hand over the mirror work, but let us know; let us know if you are unable to create the issue and follow the topics. We have a whole set of topics around ACP, the artifact caching proxy; let me regroup them. We have different elements here: three issues related to ACP. The first one, opened by Basil, is a consequence of the repo.jenkins-ci.org cleanup we did a few weeks, a month ago. Builds are now slower, because ACP only caches elements that come from repo.jenkins-ci.org; since we removed the mirroring of Maven Central from that repo, because we don't want to be used as a free mirror by everyone, ACP, as a consequence, no longer caches elements that come from Central. Each Maven build now downloads everything from Maven Central every time. Basil's proposal is to slightly update our existing configuration so that ACP starts caching these elements again, even if they come from Central, which makes sense; that should accelerate some of the builds. So I propose that this one should be a quick one. I'm removing myself from it right now, because we need to discuss the others. Then we have an issue regarding incremental jars; I haven't had time to dig into this one, which was opened by Uli. I'm not sure I really understand it, and we need to dig. I thought that incrementals weren't cached, meaning that
there might be something wrong in the configuration of that job. I don't think we cache incrementals, and the thing is, it shouldn't fail, because if ACP doesn't cache, Maven directly contacts repo.jenkins-ci.org for the incrementals. So there is something wrong in that build that needs to be checked. We didn't have time, Hervé, Stéphane and I, but we need to analyze this one; it might be a job misconfiguration, but that's only a gut feeling. Finally, the last and most important one is related to delays when building small plugins on the build agents. That one has been sitting here for quite some time. We thought it was related to the DigitalOcean builds, so we upgraded the Kubernetes version and retried, but as Basil and others confirmed, even if it works on ci.jenkins.io, most of the builds are slow, and clearly slow during the dependency resolution phase. That means we need to improve the performance of ACP. I was a bit dismayed; I had a negative moment two weeks ago, thinking maybe we should get rid of ACP, but as Basil pointed out, maybe we can improve the performance instead: it works most of the time, but it needs performance improvements. We need to work on this one in order to accelerate builds, for a better developer experience on ci.jenkins.io; and builds that are faster also cost us less on the clusters. So we have different incentives here: a service for users that needs to be improved, stability and maintainability of the infrastructure, and costs. Now, that being said, most of the problem comes from running that service on Kubernetes: there are so many layers at the network level that it's really hard to diagnose. The initial proposal from Hervé, "why don't we use virtual machines here", could be a solution, because on paper, in theory, a machine with only an nginx proxy plus storage should be simpler to investigate for performance. That would remove the ingress layer, the virtual network layer, the container network layer, the container runtime layers, the I/O storage layer,
etc. Of course that would have a cost: we would remove ACP from Kubernetes and move it to virtual machines. The cost should be roughly the same; however, it would require some network architecture thinking. In particular, do we want to keep it highly available, in which case we need two virtual machines with a load balancer? How do we manage the certificates, do we terminate TLS at the load balancer level? There are choices that need to be thought through and defined as an architecture process for each provider. Exactly, each provider might have a different set of offers, and at the network level too; we eventually want these machines to be reachable only inside our private network. Maybe that would simplify the certificate part, but it's still something we need to think about. Either this, or we diagnose the performance issues in Kubernetes, in which case the first step would be Datadog and log analysis. In theory yes; in practice I'm not really willing to deal with performance issues on Kubernetes clusters that we don't manage, because I don't have access to the lower-level machines on most of these clusters, so diagnosing this would just be a pain. You would need direct observation of the system. Okay. Kubernetes might or might not be the right way, and since we have three different Kubernetes clusters with three different operating systems, what we gain in Kubernetes automation we lose in reproducibility of behavior. My initial proposal would be to start with a non-highly-available setup, with only one virtual machine per provider, and start comparing the performance with the existing ACP. That would be a first step, and then we'd see whether we need to scale horizontally, vertically, or both. But the thing is, the person who takes care of Basil's proposal for caching Maven Central artifacts should be the same person leading these topics. The rationale is the following: Basil's part is about how we understand the service we provide and how it is consumed. That's a request
from a user, and once we have loaded into our heads how users are using it, we understand the pros and cons; then we can start thinking about the architecture and performance improvements of the system. We cannot start like that, because you are asking more of ACP, more work and more data, increasing everything, and right now we are seeing that it's not in a good enough state yet. We don't have formal proof. We don't have formal proof that ACP has a performance issue; what we see is that builds are slow when it comes to resolving dependencies, and that's the only fact we have today. Additionally, we cannot know whether it's ACP or the community-provided clusters, but at the end of the road it's the caching that we set up, and that may be solved by using VMs. What I mean is that increasing what we ask ACP to deal with should come after stabilizing it. What you are missing is that we were caching these artifacts before the changes we made on repo.jenkins-ci.org, which means that for at least one year ACP was already under that load. And right now the balance is the time we would gain by caching artifacts from Maven Central, as we used to do: that's clearly an immediate gain we can get on build time. I understand; so doing both at the same time can be a win-win, because we can save time for the users. Exactly. For both: Hervé and you have the experience with Maven, and if you take that task, it makes sense for you to first think in terms of what the production setup is, what the service is, what the contract with our end users is. That task is a perfect fit for really getting your hands in before starting to think about implementation, because if we think about implementation without extensive knowledge of how it works in production, there is no way you could come up with a proper architecture: you need to understand how it is consumed before shaping something. Does that make sense? Is my explanation making sense, or does it seem like too much? I am not
completely convinced yet, but I think my English is not good enough for me to explain exactly what I mean. And I think I understand where you want to bring me, or us, in the comprehension of that Maven usage. I'm just saying this one is distinct from the other one. However, we need one of the three of us to take care of both issues, leading them; that doesn't mean the others cannot help, it's just that it's one whole task on the same subject, same area, and we need to avoid context switching. That means we first need a clear idea of how the service is consumed, and by what; then we can come up with an architecture for improving the performance of the service, if needed. That's why we should pause the other one, or at least slow it down, and first get an understanding of where it is consumed, so you can architect and decide. I mean, there is no need to start a virtual machine if you don't understand the service. Yeah, but you just said we first need to find why this one is slowing down, and before that you said there is no way to debug the setup we have. So if it comes down to ACP slowness, we need to demonstrate that the slowdown is caused by the ACP. Okay, I thought that had been done. I'm not sure I'm following. So, Damien, I had assumed from Basil's description here that we would allow the ACP to cache Maven Central artifacts as an independent step. Is that an independent step we can take while still thinking about the other things, or are they somehow dependent on each other? No, no, there is no dependency; these are two different things. So we could enable caching of Maven Central artifacts by the ACP and watch to see whether it improves things or not. Exactly. So then, if it improves and our problem is solved, we don't have to worry about anything else, right? Then we're largely done. Exactly, thanks; that's maybe a clearer way of putting it than what I said. Stéphane? Yes, because for me that was pushing too hard, but when you said we were caching that before, that means that
yes, we can try, at least for a few hours maybe. So if we enable caching of artifacts from Maven Central, is it just a change to the settings.xml file that we distribute, or is there something we must also do to the ACP to allow it to actually retain those artifacts? Theoretically we only have to do this. In practice, we will have to monitor and watch for more than one hour or one day, because as Basil wrote, the variance appears after a lot of builds. So we need to enable it for at least one week and run a bunch of builds to see if it changes anything. Okay, and I volunteer to run builds; I'm happy to run builds. And maybe we will have to increase the hard drive we set aside for the caching, though, if we add a lot. That's something to watch, a very good point. Because we used to cache these elements, the study that Hervé did in the past should still be valid today; we haven't increased the number of plugins or builds that much. But yes, that's true. I remember Hervé, and you, did monitoring of the load on this one. I think it was just Hervé, but yes, I think he did. So that's a good point, we should take care of this. I'm sorry to ask way too many questions, but it was not clear enough for me, that's why. That's a good thing: take the time to read the issues before the meeting and ask questions, like you are doing during the meeting, so we all agree on the goal; otherwise it stays only in my brain. So you did well. I had to read because I had to lead the meeting last week, and I did. That also proves my initial point: I believe the person in charge of the architecture should start with that issue, not only because it's distinct, but also because the person who will draft an architecture for the final system, which may well remain the current one, needs a clear view of how it is consumed. That's mandatory; otherwise I don't see the point of you or anyone else spending time on it. If it's not clear for you how it is consumed, I don't see the point; it
would just be a waste of time for everyone. I understand. So my proposal is that we give ourselves, the team, 24 hours for someone to volunteer on that ACP set of tasks. Is that okay for you? Yeah. So I'm taking notes here: let's digest this for 24 hours and decide who wants to lead. I'm opening it to everyone because it might be an interesting topic, but it's a big topic; that means whoever takes it will need time and effort. I'm saying that in relation to the context switching and the effort required here. That's why I don't mind multiple people on it, but we need someone to lead, to keep a clear view of the subject. I don't mind doing it, I don't mind someone else doing it, I don't have any opinion or preference; that's why I'm opening the topic globally. I propose everyone digests it, and if no one volunteers I'll take it by default, but if anyone is interested in leading the topic I don't mind someone else leading, and I'll be happy to help if needed. Okay. Is there any question on the ACP part? No? Okay. There is a request for a new job for plugin health scoring on ci.jenkins.io. I don't know the status; I think it's assigned to you, Hervé. I haven't started on it. It's about creating a new job to generate reports from the plugin health scoring so they can be consumed by the plugin site, instead of computing the plugin health scores on ci.jenkins.io. Okay, so that means a dedicated pipeline? We might need to ask Adrien what artifact is generated by that job, and of what kind. From what I suggested to him, it would be user-consumable data for the plugin site: currently the plugin site is consuming data from reports, not from the scoring API, so we requested a new pipeline in plugin health scoring to generate a report with the content from the API. Yes, but that means ci.jenkins.io should probably not be used for that. Yeah, this issue might need some editing. Okay, do you want to take that, or do you want me to take it and bring it to Adrien? Since you already have the docs.jenkins.io
and azcopy work, how do you feel about it? If you want, I can see that with Adrien, but I'm okay to keep it. Yeah, okay. So then: validate with the team, because I have a feeling that ci.jenkins.io might not be the right destination, especially if we need to publish content on reports.jenkins.io. Yes. So you are assigned; we keep you assigned. I propose you try this week and hand it over if you don't have time. Is that okay for you? Because that's a lot. Okay, I'm doing my best. We have the ci.jenkins.io AWS account, using the credits that AWS gave us. Right now it's on my side to bootstrap the admins on the new AWS account; the next step is to decide who drives the creation of the new ci.jenkins.io EKS cluster. So I propose, given the workload, that I take the issue of bootstrapping the account, and next week we decide who wants to drive the topic of the new cluster, which will be managed by Terraform with multiple accounts on AWS, like we did with the subscriptions on Azure. The goal would be to move from the existing cluster to the new one, to consume the credits on the new account. Is that clear, and does it make sense? Okay. I propose to put on the backlog the new private Kubernetes cluster in the newly sponsored Azure subscription; the target was infra.ci. I propose to move it temporarily, to focus on the new AWS cluster. Yes, that will give Stéphane some time to finish the virtual machine changes on ARM64, and then we can think about consuming these credits. Is that okay for everyone? Back to the backlog for now; AWS is the priority. Updatecli steps fail regularly when processing jenkins.io
pull requests. That one I propose to move to the backlog as well, because it's not a blocker. It's annoying, I understand, but the effort of splitting the GitHub App and the rate limits and so on is not something we can do right now. So unless someone objects, I propose to move this to the backlog: not a blocker, even if annoying. Finally, I added back the OpenVPN certificate revocation topic: we should be able to get a lot of certificates revoked. I haven't deployed it yet; I'll wait until after the LTS, but I found a solution, explained in the pull request on the issue. So once the new VPN version is deployed, we should have revoked the old certificates. Marc, be careful, because both you and I, and I think the four of us, have old certificates that we renewed, and I've revoked the old certificates based on the timestamps, so only the most recent one for your username should be valid. So if you see issues when connecting to the VPN, immediately report them on the Jenkins infra IRC channel with the ID of your certificate, and we can check whether it has been revoked or not. Great, so when was that deployed? Sometime in the last, say, four hours? It's not deployed yet; I want to wait until after the LTS, to not block the release. Okay, so once it's deployed, I should attempt to connect to the VPN from my machine; if it works I'm happy, if not I need help diagnosing. Got it, thank you. That's Damien's point: if not, it means I'm using an out-of-date certificate, and I shouldn't be. So if I'm blocked because I'm using a revoked certificate, that's actually really good: it means you revoked a certificate that we had agreed should be revoked. Very good. Okay, I'm sorry it's long today; we still have to check the new issues that are not triaged. "Quietly skipped incremental deployment after retry" from JC: okay, that one needs to be analyzed. If no one objects, I'm interested in looking into it. No one objects? Okay, let me add it to the new milestone; I will fix the title afterwards. Then the service principal, that's the one you created; I will have to
remove the triage label. There is one more I'm not sure what to do about: is it possible that you, or someone from the board, takes it? It looks like it's related to the meeting project on Jira, and I'm not sure. Yeah, I'm a Jira admin; you're welcome to assign it to me. I have no idea how to do this, but I'm happy to go learn how. Okay, if you need help I don't mind assisting you; doing it is one thing, but I think we should first confirm that the proposal from Alex is okay for everyone. Great. Any question on this one? I'm adding myself as well on the new milestone. Permissions for the Jenkins security scan repo: I believe that one should be... we need... oh no, it's for us, I have admin rights. I see the repositories are inside jenkins-infra, okay, so I'm taking it as admin. The "fix plugin bundles proprietary dependency" one should be on me as a board member; you can take the triage off of it. They're doing just fine, they're taking the right steps, and Daniel Beck has agreed that their steps are fast enough, so I think clearing the triage label and targeting the next milestone is good. Cool, thanks Marc. I don't see other new issues. Thanks, Hervé, for adding to the milestone. Okay, so there's a chance we can actually resolve that one on the Romanian mirror next week; they're responding, I think they were the first to respond, so it might take some time. A shame. No, actually, what a great operating system: delightful for 10 years. It's not just funny, Damien; it's not fair that OSU OSL used CentOS 7 for 10 years. Wow. Well, yes, I'm glad they're moving off it now. It's a good thing. Absolutely. That's all on the new issues. I see the sync procedures succeeded on retry, on ci; I think that's fine. It's interesting: they succeeded on the second attempt. Yes. Does that mean OSU OSL is back up? That's interesting. Or the URL that Cooct gave to rsync.
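One way to check whether the URL handed to rsync actually maps to more than one server is to resolve the hostname and list the distinct addresses it returns. A minimal, offline-safe sketch (localhost stands in for the real mirror hostname, which is not named here):

```python
import socket

def resolve_all(host: str) -> list[str]:
    """Return every distinct IP address DNS gives for `host`.

    If several A records come back (or repeated lookups rotate
    through different addresses), a single generic URL can land
    on different backend servers.
    """
    infos = socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Placeholder host; in practice, pass the mirror's generic hostname.
print(resolve_all("localhost"))
```

If the list contains more than one address, the "generic URL" behavior discussed here is simply DNS handing out several servers under one name.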
It's like a generic URL, so we have a chance of getting through. Oh, that could be it: round-robin DNS. Yes, that could be it. Very good. That's all for me. Is there anything else you want to bring up? I don't see anything else right now. Good for everyone? Yes. I'll stop the screen share. Okay. Yes, we already have 5 hosts. We're hosting the weekend relaunch. We're doing just fine. I'll stop the recording. See you all next week. Bye-bye.
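For reference, the settings.xml change discussed earlier for routing Maven Central traffic through the ACP would amount to a mirror entry in the settings.xml distributed to builds. This is a hypothetical sketch: the mirror id and the proxy URL are placeholders, not the team's real ACP endpoint.

```xml
<!-- Hypothetical fragment of the distributed settings.xml.
     The <url> below is a placeholder, not the real ACP endpoint. -->
<settings>
  <mirrors>
    <mirror>
      <id>acp</id>
      <name>Artifact Caching Proxy</name>
      <!-- mirrorOf "central" routes Maven Central requests through the cache -->
      <mirrorOf>central</mirrorOf>
      <url>https://acp.example.org/maven-central</url>
    </mirror>
  </mirrors>
</settings>
```

Widening `mirrorOf` (for example to `*`) would send all repository traffic through the proxy, which is the "asking more of the ACP" trade-off debated in the meeting.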