Hello everyone, welcome to the Jenkins infrastructure team meeting. It's the second day of 2024, so happy new year to all! Today we have myself, Damien Duportal, with Stéphane Merle and Bruno Verwaerde. Hervé is off this week, I believe Kevin is not available, and Mark will join us later. Let's get started. First, last week's weekly release, 2.43: it really went out last week without any problem. Mark and I worked on it, and it was deployed to the infrastructure, since everyone was on holiday. Christmas holidays! So, no problem there. This week, the new weekly was released — at least the packages and the Docker images were pushed. I think the changelog can be pushed tomorrow if Kevin is not available today; eventually Mark, or someone on our team, can pick it up. And on the infrastructure side, Stéphane, that means you're free to deploy the weekly on the infrastructure whenever you want. The image has been published? Yes, cool. Don't forget there will be a big plugin change — I'll talk about that later: you have a pull request on the Docker images for the coverage plugin. So when you deploy, please deploy my pull requests as well. You can deploy them before or after the weekly core change, no problem, but they have to go out too. It's a deprecated plugin being replaced by its new replacement plugin. Just a point on the billing status: December was 7.35K dollars, so just a little — but only a little — more in December. Overall, we ended well below our budget goal for 2023. So congratulations everyone, that's good work.
And for this month, the forecast is still unreliable, because we're only two days in, but it should be under 7K thanks to the sponsorship work. On the sponsored subscription itself, we've almost reached the milestone of the first 1,000 dollars consumed. To give you an order of magnitude: we have 40K on the sponsored subscription — these are credits donated by Azure, and that's what we consume from. So we'll soon be at 1K out of the 40K. That's more than we had estimated on Azure, of course, because, as a reminder, it's a non-spot instance. Stéphane and I requested a quota upgrade for spot instances today, so we'll be able to switch to spot, which will let us stretch the credits further. AWS: a bit less than the previous months, we're under 9K. Thanks everyone — particularly because, by working on the BOM builds and only building the necessary ones, we could cut almost 1K. So thanks for that great work, folks. Current consumption is at 0.3K, so the same rate as on Azure. The forecast is at 8.2K, even though a forecast based on 2 days of a 31-day month is unreliable. DigitalOcean: we have enough for the next two weeks at the current rate. We talked with DigitalOcean — I'll come back to it later — but they should apply the new 20K credits for this year later today or tomorrow. So we should be safe on that side, which means we should be able to handle our current workload and even grow usage in the future. That's good news. As an order of magnitude, we consumed $38 on the first day of January. Are there any questions on the announcements? Nope, OK. As for next week: the meeting is not cancelled, and I'll be there to run it.
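As an aside on the month-end forecasts quoted above: they are essentially a linear extrapolation of the spend so far, which is exactly why a 2-day sample is so unreliable. A minimal sketch (illustrative only — this is not the actual billing tooling):

```shell
# Hedged sketch: linear month-end forecast from spend to date.
# All numbers are illustrative, not taken from a real billing export.
forecast() {
  # $1 = spend so far (USD), $2 = days elapsed, $3 = days in month
  awk -v s="$1" -v d="$2" -v m="$3" 'BEGIN { printf "%.0f\n", s / d * m }'
}

forecast 38 1 31   # the $38 of January 1st extrapolated over 31 days
# -> 1178
```

One noisy day early in the month swings the whole projection, which is why the transcript keeps hedging the 2-day figures.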
So there's no need for anyone else to take over the meeting. Now, I think we can check the release calendar. OK for everyone? I don't remember when the next LTS is, as usual; I think we should check the Jenkins website, because we need to be careful — we were caught out by an LTS release last month. So I'd rather check the official Jenkins release calendar. The release candidate is next week. Ah, cool, here's the release calendar — here we are. Next week, you said: January 10, the release candidate for the next LTS. And I believe the next LTS will be on January 24, with 2.426.3. Sorry — 2.426.3, January 24. Cool, thanks. 24, 24 — oh, sorry, 24. The 23/24 mix-up is hard for everyone in January, of course, because we were in 2023 and now we're in 2024. Thanks. So, nothing for the infrastructure next week, but we need to be careful around the 24th. Is there anything on the security calendar? No. Good, so no security release. The next major event will be FOSDEM, on February 3–4, in Brussels. Are there any other major events or items we want to discuss or mention? No. OK, so let's start on the milestone issues we set up — not the whole backlog; let me open the milestone. Sorry. We can finish the migration to the new sponsored Azure subscription. As mentioned earlier in the billing status, we have consumed 1K. And if we look — I added a few screenshots on the issue showing how we consume it.
So, as you can see, we are not consuming anything on it other than the ci.jenkins.io agents. So we have enough headroom. In the future, we can open a new issue and start a plan to migrate, or create, new clusters for infra.ci and release.ci on this new sponsored subscription. But that's a follow-up topic that should come after the cluster migration Stéphane is working on. For now, we've started consuming, and we'll track it month by month, or week by week, depending on the build rate. Any questions? We had two issues that were not planned, and they were closed because the user asked a question and then never answered when we gave them directions. So: closed. Now I'll take the issues in order. I kept them like this because prioritizing is not easy, and I propose we set the priority on the new milestone. Is that OK for everyone? So, for each item we cover, we do a status update and we say whether we'll work on it during the next milestone, so we know where we stand. OK for everyone? Let's start with a small one that should be closed today. The goal was to replace a deprecated plugin with its replacement. Alex took care of it, thanks, and it's done: I did it on ci.jenkins.io and checked it on trusted.ci. I closed it last week, but I forgot that it also applies to release.ci and infra.ci. The pull request has been merged, as you can see on my screen. So now, since Stéphane is about to update at least infra.ci with the new weekly, maybe Stéphane can take care of this at the same time.
So there's only a restart left for the two remaining instances, plus the single release change. OK for you? Perfect. OK, I've assigned the issue to you, since you're the task master for plugin updates. OK for you? Perfect, I've set it. And that means you can close this issue once you stop seeing the deprecation message. We may need an installation of the plugin, even though, with the configuration as code, there is an automatic installation if the plugin isn't already present. So it may or may not work out of the box, but at worst it's an install and a restart. OK. So this issue goes to the next milestone. If you need help, don't hesitate. Next issue: ci.jenkins.io, install the coverage pages extension plugin. I want to move this one to the end because I need Mark to be there; if he doesn't show up, we'll delay it a week. Crawler build failing. Thanks, Alex — Alex reported that the crawler has been failing since December 22. The crawler is a job whose role is to aggregate a set of metadata when it runs and publish a signed JSON file — when it's on the trusted environment — and deploy it to updates.jenkins.io. Then every Jenkins instance retrieves this JSON file to know how to install what we call Jenkins tools, like JDKs, Maven installations and so on. The job in charge of this was still publishing, but failed to publish the new update center data among the artifacts: it failed with a 403 error. I initially didn't dig in and thought it was caused by the recent credential-rotation changes that Hervé worked on before the holidays. That was my mistake, because there was a related change at the same time.
I didn't think that was the cause, because I had aligned the agent rules with the new setup. But earlier today, during the diagnosis, I found the actual error that was reported, which ruled out the agent IP. In the end, it was just a simple mistake: as you can see, we wrote the wrong expiry date, and it was the day before the failure. So that means we have to make sure this expiry is fixed. There is also an error on AWS that I haven't diagnosed yet. But at least azcopy now works. I commented on the issue about the agent that relates to this new system. So, in order to close this issue, we need to fix the AWS S3 error and add a calendar event for the next rotation. Or an updatecli check. Yes, absolutely — in that case it can be verified easily, so yes, good idea. You seem to hesitate about writing an updatecli manifest; do you want to try it with us? The day we wrote that expiry date, I said we should either put it on the calendar or write an updatecli check, and that I should do it. Watch me — we'll do it. Of course, I'll have to get my hands dirty. It'll be a shell script; it'll be fine, it'll be quick. So, this one is almost there, but we should work on the next rotation. If I remember correctly, we did this twice; there are two expiry dates to track now. That's absolutely correct. But yes, for now we've only used calendar events. Next — if there are no questions — next issue: a few builds on ci.jenkins.io, using JDK21, have out-of-memory errors. I couldn't figure out where it comes from; it's really weird. It seems to be something in the builds themselves, and I don't understand those builds.
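On the SAS token expiry discussed above: the expiry date travels as the `se=` field of the token's query string, so a reminder script (or an updatecli check) can read it directly instead of relying on someone typing the date into a calendar. A minimal sketch, using a fake placeholder token, not a real credential:

```shell
# Hedged sketch: extract the expiry (`se=`) field from an Azure SAS token
# query string. The token below is a fabricated placeholder.
sas_expiry() {
  # SAS query strings carry the expiry as se=<ISO 8601 timestamp>
  printf '%s\n' "$1" | tr '&' '\n' | sed -n 's/^se=//p'
}

sas_expiry 'sv=2022-11-02&ss=b&se=2023-12-24T00:00:00Z&sig=FAKE'
# -> 2023-12-24T00:00:00Z
```

Comparing that timestamp against today is then enough to fail a scheduled job loudly before the token actually expires.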
Back to those JDK21 memory errors: something may be leaking. It also depends on what kind of agent is used — maybe an agent with more memory would help. But it's really weird. In the first place, it needs a bit more diagnosis. I added a link; we need to work on it. I don't know the priority, or whether it's blocking anyone. So for now, I'm moving it to the next milestone, but at low priority, unless someone makes an argument for it being more important. I'm not sure whether the issue needs a specific memory setting for those builds. We need help with the diagnosis, so please take notes and chip in. Recap of what we covered: the ci.jenkins.io part is done; it still has to be done for infra.ci and release.ci. Crawler build failure: the copy is fixed with azcopy; the SAS token expiry needs an AWS fix and a reminder — a calendar event or an updatecli check. Thanks. Next — oh, this one is closed. Kristen, who will be the next LTS release lead, needed access to release.ci, which implies first VPN access, which is done, and also the authorization on the app to ensure authentication on release.ci, and being able to build and to access it. That's been done, so I closed the issue, and I reported it on the issues. Any questions on this part? OK. The next one is the declarative pipeline migration and the longer plugin compile times. I had no idea about this one, so let's open it and look at it together. Oh, I remember — Mark volunteered to do the upgrades. Oh, it should be closed: it has been merged. OK, so we can close it. Any objection?
If anyone disagrees or wants to add something, please speak up. So, this topic is closed. Yay! I love closing an issue first thing in the morning. Yes, closing issues in the morning. All right. Git cloning not converting line endings on Windows. A second contributor confirmed that we see a difference when a container agent is used — that is, an ACI (Azure Container Instances) agent — versus using an Azure VM agent. It could be our Packer image having a difference from the Docker image, or it could be the plugin in charge of spawning the agent and setting up the environment in the agent process, which hasn't been changed yet. But at least it's confirmation that there is something that could be done at the infra level. Last meeting, Mark said he was taking care of this one; I believe it's assigned to him. So we move this issue to the next milestone and we'll check with Mark. Next week, if Mark hasn't had time, we'll have to get our hands dirty on it ourselves, even if it doesn't look like a priority — because, as James said, it's also up to the developers to set the line-ending configuration on all the files of their repository. So it's not blocking, just inconvenient. Moving to the next milestone, then, still assigned to Mark. Note: a second contributor confirms seeing the difference between the Windows container agent and the Windows VM agent; it could be related to the agent setup at the infra level — or to the plugin provisioning the agent, or to Git itself. Let me know if that's clear and if I captured correctly what was said. Any questions on Git cloning not converting line endings? OK. Next issue: removing the Sonatype repo from the public virtual repository.
So, it was confirmed by Mark, directly with JFrog, that we saw a decrease in bandwidth and usage. So the change is effective. This issue isn't closed yet, because we want to confirm over one or two weeks. So I'll let Mark take care of closing it once it's OK, since he's the one in contact with JFrog and communicating on that part. And I still have one action on my side: I need to remove the Sonatype cache from public access, just to be sure nobody tries to reach it directly. Any questions? Let's wait for Mark to confirm that we keep seeing a decrease in JFrog bandwidth and usage. Another item for — Hello, Mark. Hello, Damien. Happy New Year. Happy New Year. Excuse my tardiness. No problem, you're coming at just the right time — there's an item waiting for you. All right, good. OK, what do I need to be doing? Just to confirm, on removing the JCenter and Sonatype repositories from Artifactory: I believe we were able to see an improvement one or two weeks ago, based on the logs you retrieved from JFrog. Yes. I still have to remove — that's one last minor configuration item — the Sonatype cache from public access. That's a minor item. And then that means you will be the master of closing it, or keeping it open, until we're sure we have a bandwidth decrease. And I think we can close it. We did have one very nice outcome of this: a Jenkins contributor detected a problem library that we had placed in orphans, and proposed a change to Jenkins core to upgrade to a newer version, so it's now provided by Maven Central. And I was frightened to touch that particular library, because it's the Java implementation of OpenBSD bcrypt. And cryptography is not a thing I touch willingly, right? I think experts should do that.
But in this case they did the analysis — multiple people did the analysis — and saw that there were no Java changes between the version we used originally, an ancient thing from 2016, and the release that is available in Maven Central from 2021. So, big positive: several people did the analysis and we're good to go. It's already in the 2.439 weekly, and thus it will be in the next LTS baseline in February. Oh yes, good — January 24 is the next LTS. Not that baseline, right; different phrasing. So I don't think it's so urgent that we need to backport it, because truly there is no Java change whatsoever: no import change, nothing at all changed in the Java code. It was purely a change to build scripts. OK, so that means I can close the issue. May I ask you — just let me close it? Let me make the comment and close it, because I started it, I opened it; it's very reasonable that I should be the one responsible. The definition of done also includes my last cleanup, so we both need to comment on what was done: you confirm, and the last one closes. Is that OK for you? Yes, that sounds great, thanks. Is there any other question on the JCenter part? None for me. Let's move to the next issue: migrating the leftovers of publick8s to ARM64. So we have a nice exhaustive list. I believe we weren't able to work on it — everyone was off on holidays. Yes, exactly. Darn people on holidays at Christmas and New Year's Eve. Which means: nothing to report here. Stéphane, do you feel you are OK to start working on this next week? Yes — what do you think? Personally: two weeks ago — no, three weeks ago — we said we could eventually share the burden. I didn't have time. But if it's OK for you, we can just spread all the services between you and me. Each of them needs a migration.
So they will all need some work to be done. We need to convert the — I won't list what we have to do; I'm asking: are you OK with it? Yeah, we can share the burden, of course. Exactly — whatever you want: you take one, I take one, that's what I meant. No, no — I take one, you take two, of course. Yes, yes, of course. But if I take two, you have to write the updatecli manifest for changing the expiry dates of the credentials. Ah, that would be my pleasure — Bruno will help me out. Bruno, run for your life. So: nothing to report, workload to be spread across team members along the way, and we keep that issue on the next milestone. Export the download mirror list to a textual representation — can you remind me where we were on this one, Stéphane? Yeah, we just have the skeleton. We now need to improve the content of what we put in it, convert it to JSON instead of a text file, and write a little script to dig the URLs to get those IPs. OK, so: next milestone, but low priority. Is that OK for you? Yeah — I'm not sure I'll manage to get my hands on it if it's low priority. OK. So: the skeleton is done, we have a textual representation, it needs converting to JSON; low priority. The next issue is on me. I thought I would be able to deploy this to production last week — I wanted to take advantage of there not being a lot of workload — but I didn't have time to finish it. So that's something that should be planned. Ideally, I would want to do this next Thursday, this week, if it's OK for everyone. It involves a possibly slow download mirror during at least one to four hours after the operation — time for the whole mirror to be scanned and the leftovers of the former service to be cleaned up. And of course, if I make a mistake during the migration, that could result in get.jenkins.io being unavailable or responding with errors to downloads. Is there any objection if I plan this for Thursday morning, my time?
No objections from me. No, never. Thursday morning, January 4, 2024, Paris time. Yes, I will add it on status.jenkins.io. Node pool sizes: that issue is almost done. As mentioned before Christmas, after a thorough review from Stéphane and me, we need to start packing some of the services running on the Intel node pool of the public cluster. In order to do so, based on the metrics we saw: the nominal usage of memory and CPU resources is clearly lower than the resource limits we currently use — that's the initial rule of thumb when you want a linearly scaling system. In that case, we can start packing, which means decreasing the resource requests so that the Kubernetes scheduler can pack more services onto fewer nodes, and so we decrease the cost of those nodes: right now we have five small nodes, and we should be able to run everything on three. That's the goal of that whole issue — decreasing the cost of the nodes on that cluster. The limits will be kept the same, because we need an upper bound that kills a container if it uses way more than usual. This should be quite easy, but I didn't have time during the holidays, of course. So, same: nothing to report. To do tomorrow — Wednesday, 13h. Any objection? Because I will need to open a status entry for the services impacted by that change. OK. Stéphane, can I let you report on infra.ci.jenkins.io using ARM64? Yes — can you open the issue for me? Yes. I did comment on it this morning — with you, by the way. In order to use those ARM64 node pools, we chose to add the tools we need to the single all-in-one agent image. So I installed the tools that we use, Helm and helmfile, in the new tag, 144.0. It was published this morning. I still need to put it in production — it's not in production yet, just available. And then we will be able to start changing the pipelines to use that new image on ARM64.
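On the node-pool packing described above: the change boils down to lowering `resources.requests` toward observed nominal usage, while leaving `resources.limits` untouched as the kill-switch upper bound. A hedged sketch with illustrative numbers — these are not the real chart values:

```shell
# Hedged sketch: the kind of Helm-values fragment the packing change implies.
# Requests drop toward nominal usage (so the scheduler bin-packs more pods
# per node); limits stay as the upper bound that gets a runaway container
# killed. Numbers are invented for illustration.
frag=$(cat <<'EOF'
resources:
  requests:        # what the scheduler reserves: near nominal usage
    cpu: 100m
    memory: 256Mi
  limits:          # unchanged upper bound: container is OOM-killed above this
    cpu: 500m
    memory: 512Mi
EOF
)
printf '%s\n' "$frag"
```

Because the scheduler places pods by summing requests, not limits, shrinking requests is what actually frees up the two nodes.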
Still on ARM64: we also listed, with Terraform, all the repositories we are using right now that should use those new tools on that new version of the agent. So it's work in progress. Is there any question? OK. So we don't know in advance — we haven't exhaustively listed all the jobs that are using Intel agents yet — but these two are the two biggest we use. So let's see how it behaves, whether we're able to use ARM64 for them, because maybe we're not. And once these two are done, I will let you, Stéphane, continue tracking the other container builds that use other non-ARM64 images. Yes. OK. So, recap: Docker agent image ready to be replaced by the all-in-one; Kubernetes management is the next target for ARM64 agents; then all the Terraform agents; involves a shared-library change. OK, thanks for the report. Any questions? OK, next one: renewing the DigitalOcean sponsorship for 2024. So, thanks to Hervé's work last month, we were able to have our sponsorship renewed by DigitalOcean. They were waiting for January to start it. We contacted them earlier today, just to be sure we don't run out of credits — as mentioned earlier in this meeting, we still have almost two weeks of credits left today. And they answered promptly, saying they are going to apply it today or maybe tomorrow. And they even proposed 20K instead of the 18K. But that means we won't be able to ask for additional credits until the end of the year, of course, because in fact we consumed the 18K plus an additional 8.4K last year. So we did a bit more than expected, and they want to stay at 20K max for an open-source project, which makes sense. That's already a lot, and it covers way more than what we need today, because those additional credits were due to issues on ci.jenkins.io that consumed almost 8K in two months, which was unexpected. Now we control this — we're following it carefully, so that should not happen anymore. So that's good news for us. As a reminder, that sponsorship sustains archives.jenkins.io, our bandwidth, downloads, and machines.
So that's really, really good news. So thanks, Hervé, and all our thanks to DigitalOcean. I believe we can start thinking about a blog post to thank them, or something like that, because that's really helpful. Recap: sponsorship renewed for 2024; 20K credits should be applied today or tomorrow to our account; we have two weeks of credits left until then. So we will close this issue once the credits are applied on the account. Is that OK as the definition of done? Cool. Next issue, for Stéphane: goss version tracking. Can you give us a heads-up on this one? Yes — I did the Windows part before the holidays, and I now need to do the merging, the refactoring, between Linux and Windows for the common stuff, and rework all the checks matching those goss common tests. And check all the dependencies. That's a background task, low priority. Cool, thanks. Any questions? Thanks for your help. No problem. Next step: replacing blobxfer with azcopy. So, the command line we use in multiple areas — in the update center crawler and other tools — is blobxfer, an old Python command line used to copy data to the file storage and object storage of an Azure storage account. But that tool hasn't been updated since 2021, so it's time for us to switch to a new one named azcopy, recommended by Microsoft. The az command line does not allow copying objects the way we would want, while azcopy does. azcopy has been used extensively by Hervé when he shaped the proof of concept for the new update center, and it looks like it's working very well. It's also used by the crawler, as we mentioned earlier today. That means we should be able to use it everywhere. The status right now is that Hervé is currently fighting with the SAS tokens. The goal is to have a secure setup where we can revoke a token at any moment, whatever the method, and eventually have expiry dates.
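For the blobxfer-to-azcopy switch described above: the azcopy destination is simply the blob container URL with the SAS token appended as a query string. A minimal sketch — the account, container, and token values are placeholders, not our real setup:

```shell
# Hedged sketch: build the azcopy destination URL from an account name, a
# container name, and a SAS token. All three values below are fabricated.
blob_url() {
  printf 'https://%s.blob.core.windows.net/%s?%s\n' "$1" "$2" "$3"
}

dest=$(blob_url exampleaccount updates 'se=2024-06-01&sig=FAKE')
echo "$dest"
# -> https://exampleaccount.blob.core.windows.net/updates?se=2024-06-01&sig=FAKE

# The actual upload (network; requires azcopy on PATH and a valid token):
#   azcopy copy ./out/ "$dest" --recursive
```

Revoking or rotating the SAS token then cuts off every consumer at once, which is the security property the team is after.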
On those SAS tokens: we already have one for the crawler, so that part should be easy, but Hervé was trying to build a fine-grained, really well-tuned system. It looks like we can use storage accounts, and then, in the future, eventually use identity management — meaning the virtual machine doesn't even need a token. I have mixed feelings about that kind of setup, though. That's a different story, because if there is a takeover somewhere on the Azure side, then that means we are doomed. So we need both of them. Yeah — the token is dangerous because we need to rotate it, so at rotation time we need a system where it sits in the clear for a few minutes until we have applied it. That's the danger: a takeover of the machine of one of the administrators could be dangerous. But in the case of identity management, any other job running on the same virtual machine would be able to take it over. So for some jobs it's OK, such as the crawler, because the crawler only uses ephemeral agents; but for the update center that can be a problem, because the update center uses a permanent agent. That means if you had access to any of the jobs on trusted.ci, you could get the access. Maybe that's affordable, maybe not — I don't know, but it's something to take care of. So right now: nothing to report. I remember that Hervé did some things before the holidays, but I don't remember which ones, and there is no comment on the issue. So I propose we mark this as nothing to report, move it to the next milestone, and Hervé will take care of it when he's back. Is that OK for you? Initially we discussed me taking over here, but since we have the crawler issues, and since the crawler issues are related to the SAS token Hervé is currently digging into, my proposal is that I focus on fixing the crawler and adding restrictions on the crawler itself, so that Hervé can cherry-pick the knowledge and apply it here. Any objection? OK.
Noting: nothing to report (holidays); moving to the next milestone, for Hervé; currently focusing on fixing and restricting the crawler, so the knowledge can be cherry-picked for this topic. Any questions on the SAS part? OK. Then, moving to the update center — the new update center system. A few things to update. As a reminder, we still need to battle-test it from a performance point of view, so there are performance tests to be run: we need to spin up an injector system that will inject requests at high load, at different loads, to see when the system breaks. Second, we need to restrict the token used by trusted.ci for the SAS — just what I mentioned earlier. And finally, we need to wait for a review from the Jenkins security team on the update center pull request by Hervé. A minor update, as part of the crawler work: I saw that there were two credentials with the same value. One was the full URL, while the other was only the query string. The query string is the one currently used by Hervé's pull request and by the crawler, and based on what I see on the issue, the URL-based token wasn't required anymore: it was too painful to use, and Hervé worked a lot on using only the query string. So I removed that credential, to be sure we don't make any human error, since it's all managed manually for now. And I updated the update center job to use the new one — we hadn't tested the update center in three months, which could explain the presence of that credential. The update center job did not fail after that change, so we're all OK there. That will make Hervé's life easier when he's back. So for now, we won't be able to work on this one, because we need Hervé and the crawler fix. So if no one objects, I propose to move it to the next milestone. Is that OK for everyone? Done — except for the minor credential cleanup. OK. Mark, I moved this one to the end: it's a request to install the new coverage pages extension plugin on ci.jenkins.io.
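On the load-injection idea mentioned a moment ago: one common approach is to ramp the request rate step by step against the update center endpoint and note at which step error rates appear. A hedged sketch — the tool choice (`hey`) and the step values are assumptions, and the actual network call is left commented out:

```shell
# Hedged sketch of a ramped load test plan. The loop only prints the planned
# steps; the commented line is the real injection (network, needs `hey`).
plan_ramp() {
  for rps in 50 100 200 400 800; do
    echo "step: ${rps} req/s for 60s"
    # hey -z 60s -q "$rps" -c 50 "$1"   # -q rate-limits, -c sets workers
  done
}

plan_ramp "https://updates.jenkins.io/update-center.json"
```

Recording latency and error percentages at each step gives the break point the team wants before switching traffic to the new system.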
About that coverage plugin request: I'm sure if we search, we might find a rotting issue, or a closed one, in our help desk about removing the embeddable-build-status plugin from ci.jenkins.io, at least one year ago, because we were wondering whether it was causing a performance issue and outbound bandwidth with downloads from ci.jenkins.io. OK, let me search for "embeddable" — maybe it's closed. I'm searching for this one. "Consider removing embeddable build status plugin." So that issue was initially opened by Daniel. The reason was that the plugin wasn't actively maintained, so that was unsustainable. We removed it from other instances, but for ci.jenkins.io, if I remember correctly, removing it was breaking things. Then the plugin was updated in order to fix the CVEs. OK. Now, someone in this room... So Mark, you and Darin adopted the plugin, right? We did, and I made what I thought was a relatively significant performance fix as an early adoption change. Cool. There was some particular code that I looked at and was horrified by — and when I'm horrified by code, that's a really bad sign. It's like: oh my, what is that doing? And so that horror has been resolved and released in the plugin. I don't know that ci.jenkins.io would ever have encountered that horror; it was just a terror for me. Whoa — how could this code possibly have been considered acceptable in something that is accessed as frequently as embeddable-build-status? OK. Which means: I wanted your advice, and a security opinion, on that topic. And I think security is still the right thing to ask, because I may have missed something. I think the embeddable-build-status plugin is OK now to stay on ci.jenkins.io — unless we've got data saying it's causing an undue load; that part I can't answer. Right — I've never done a data analysis to see if it's causing undue outbound bandwidth.
I started checking the code. Even though I can't help much with checking performance issues, the code is at least simple enough that if we see ci.jenkins.io being slow and we take a thread dump, we will be able to immediately identify whether some code is stuck in that plugin. That one feels easy to me, so I don't mind the performance side; I just wanted to be doubly sure, because my recollection was that the embeddable build status plugin should still be removed, but it looks like that issue is closed. So, should we proceed and ask the Jenkins security team asynchronously for a careful review? I think we should. I think it's good to ask the security team before we deploy a new plugin to ci.jenkins.io; that's a really good pattern for us, they're a good safeguard.

OK, so I will comment on the issue and mention people. It's just that I know the Jenkins security team is always busy, and I would prefer having them check the update center rather than this one, because the security risk for ci.jenkins.io is not that high: we don't have a lot of credentials that could be taken over there. That's a fair point. We may decide next week that we're going to stop waiting for the security team, since we've assessed it ourselves. I think that's OK if we give them a fixed period and say that if they haven't been able to review it, we accept that; and the code is simple. Right, this is really not a very involved plugin. So if it's OK for you, I propose, since that issue is closed, that I mention it's closed because the plugin has been adopted. I remembered that embeddable build status had a security issue, that's all I remembered, and now you've told us that you adopted it, so for me that removes what was a blocker. If it's OK for everyone, I'll schedule that issue to the milestone and we proceed with installing it, because we have a contributor who looks really careful and is helping us.
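The thread-dump check mentioned above (spotting threads stuck in the plugin's code) is easy to automate. A minimal sketch, where the package name `org.jenkinsci.plugins.badge` is my assumption for where the embeddable build status classes live:

```python
def threads_in_package(thread_dump: str, package: str) -> list:
    """Scan jstack-style output and return the names of threads that have
    at least one stack frame from the given Java package."""
    hits, current = [], None
    for line in thread_dump.splitlines():
        if line.startswith('"'):
            # Thread header lines look like: "thread name" #12 daemon ...
            current = line.split('"')[1]
        elif current and line.lstrip().startswith("at ") and package in line:
            if current not in hits:
                hits.append(current)
    return hits
```

Feed it the output of `jstack <pid>` (or `jcmd <pid> Thread.print`) taken while the controller is slow; any thread repeatedly showing frames from that package across consecutive dumps is the smoking gun.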
So I would prefer taking the risk on ci.jenkins.io, given it's low risk due to what we said, and still notify the Jenkins security team: "for information, we have installed this", explain how we assessed the risk, and tell them that if they have time or a point of view on this, please look at it, and if they don't answer, no problem. Do you think that's a proper approach? I think that's very reasonable, absolutely. Adding the coverage plugin is not significantly increasing the threat, the risk profile, of ci.jenkins.io, right? I think you've described it very well: if embeddable build status has some unresolved performance problem, it still has that whether we add coverage or not. OK, so risk versus benefit, still pinging the Jenkins security team asynchronously at low priority for the sake of sharing. Is that OK for everyone? Cool.

So, 96. OK, we still have a lot of issues; sorry, that was the holidays. I've put some issues on the next milestone and we need to check them. I'll order them by priority based on what we said. First one, the priority: the DNS domain name expires in 25 days. Marc? We can close it. Oh, it has been renewed. It has been renewed; just close it. And that was the scenario we envisioned, right? I think when it hits 28 days, Tyler's renewal happens, which confirms that it is renewed. OK. May I ask you to close it with a message showing how you check for this? That will make it easier next time. Mm hmm. Cool. I'm keeping it on the next milestone and it will be closed immediately. Yes. Thanks, Mark.

We have a request by Alex about approving the Elementio GitHub integration for the repository-permissions-updater repository. Ah, that's a good idea, useful. I'm not sure about the permissions it grants, so that needs to be read carefully, because I understand the need. If you look at my screen: review requests. So it's only one repository; that's the good thing. But it has read access. OK.
And read and write access to actions, issues and pull requests. So it's really OK; it should not be able to change code. Does that look good to everyone? Because repository-permissions-updater, as a reminder: once a pull request on the main branch is merged, it effectively grants permissions on repositories and plugins. So it's a sensitive repository. I was scared this morning because I saw "write", but in fact it's only actions, issues and pull requests; it's for the bot to open pull requests, so it doesn't remove any human validation. That looks really good. So that one is out of triage and I'm going to approve it. Any question on that one? OK.

Then: add NotMyFault to the Jenkins admins group. He needs administrative access to ci.jenkins.io, which is OK. Just to be sure, I need to check the permissions granted by that admins group, because when I granted Christian access to release.ci.jenkins.io, I realized that this group is a mess, so we may need to plan a careful check. The goal here is for Alex to have administrative access to ci.jenkins.io only: not top-level admin, only admin on ci.jenkins.io. That's really important, not because we don't trust Alex, but to avoid giving too many permissions to everyone. Any question on this one? My proposal, Stephane, is that we pair on this one: see the current state, see what could be done and where the real configuration lives, so we have a source of truth; then we update that issue as soon as possible and start planning the cleanup. Is that OK for you? Yes, I remember we already looked into it a few months ago; it wasn't easy. Cool.

Now, more triage. A new issue was opened following Daniel's message: it seems he has a problem downloading data from Uplink. I could reproduce the issue and see a new error.
So, a big thanks for the training that helped me filter the errors. Here is the error, but my knowledge stops there, even though we have pg_toast mentioned, which makes me think Daniel is right: there may be a problem with the database, but I don't know where. So we need to check the database on the Azure side and, eventually, the code. We don't have much knowledge of Uplink and JavaScript, so it could be a long one; if someone with that knowledge could come and help me. I'm evaluating the priority of this one. It's really... Yes, I have no priority for it. I know, Daniel, it's not comfortable to say, but I don't think it's as valuable for the Jenkins project as our work on the updates.jenkins.io migration. I wish it were different, but the reality is that even if we lost the telemetry data and had to redo everything later, it would still be less valuable than the cost savings we are trying to achieve with updates.jenkins.io. OK, thanks for the point; the priority of this one follows from that. I'm open to others telling me they disagree, but I think that for the Jenkins project, the telemetry data is just not as valuable as saving costs on what updates.jenkins.io hosts. OK, that makes sense and it's clear to me, so I agree.

So, what did we find? It's a jenkins.io install. So I need to comment, but that means we'll remove the triage label. What new issues do we have? My first question: this one needs to be checked. I believe they got no answer, but again we don't have an answer, so we need to check. Eventually it may be a typo; the email looks similar.
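As an aside on the pg_toast error above: a common way to locate TOAST corruption is to force a full detoast of every row and record which primary keys fail (Postgres typically raises a "missing chunk number ... for toast value ..." error on the broken ones). A minimal, database-agnostic sketch, where `fetch_row` is whatever callable runs the per-row `SELECT` for you; the table and column names are assumptions:

```python
def find_corrupt_rows(fetch_row, ids):
    """Fetch every row by primary key, forcing detoast of all columns;
    collect the (id, error) pairs for rows whose read fails."""
    corrupt = []
    for pk in ids:
        try:
            # e.g. with psycopg2:
            #   cur.execute("SELECT * FROM events WHERE id = %s", (pk,))
            #   cur.fetchone()
            fetch_row(pk)
        except Exception as err:
            corrupt.append((pk, str(err)))
    return corrupt
```

Once the bad rows are identified, the usual options are restoring them from a backup or deleting them; that decision depends on how valuable the telemetry rows are, which is exactly the priority question discussed above.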
On this one, I did an initial check and realized there's a technique I used in the past that I'm not using here, and I'm not sure how to use it. What I did in the past for users: I can see that the account was created, that it really exists, and that it matches the username and the email. But when we send them an email, I can't check that part. Would it have been OK if I had just done a password reset for the user myself? In this case it's not possible because, well, it depends. We need to check the error on Datadog; anyone here can do it. On Datadog, we search the logs for the issue. We can do it after the meeting when we stop recording; I can show you. The goal is to check the error on the account application at the time the person tried. Ah, OK. Most of the time they report that the email could not be sent, or there is no error at all, which means the error is on the SMTP server. Then we have to move on and remember whether it's Mailgun or whatever we use here; we can log in to the API and start checking the errors, which are clearly shown: the SMTP failed, the email doesn't exist, the receiving email server failed, or whatever. Does it make sense? Yes. So that one needs to be checked; removing triage here and adding it to the milestone.

OK, now, we weren't sure about the priority of this one, so it still has the triage label. Right now I've added the elements we saw: the "Java heap space" error, but we have no idea how it happens. We might need help from someone with Java knowledge. I'm not really sure we can check the metrics; that might be something on the infra side, but it's not easy to understand what the problem could be.
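Going back to the account-email triage above, the decision rule we described (an explicit "email could not be sent" error points at the application; no error at all points at the SMTP provider) could be encoded like this. The event field names are invented for illustration, not Datadog's actual log schema:

```python
def classify_signup_failure(events):
    """Given log events for one account signup attempt, guess where the
    email failure sits. `events` is a list of dicts with 'message'/'level'."""
    if not events:
        # Nothing logged at all: the attempt never reached the application.
        return "no-attempt-recorded"
    if any("email could not be sent" in e.get("message", "").lower() for e in events):
        return "application-side"   # the app itself failed to hand off the mail
    if any("error" in e.get("level", "").lower() for e in events):
        return "application-error"  # some other app error; read the logs
    return "smtp-provider-side"     # app looks clean: check Mailgun/API logs
```

The last branch is the "no error at all" case from the discussion: the application logged nothing wrong, so the next step is the mail provider's API.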
I'm not sure if it was a container or a virtual machine; maybe we can check, the name looks like a container. For me, "Java heap space" is weird in the context of a container; we need to see if there was an OOM kill, because "heap space" means you're trying to allocate more memory than expected, isn't that correct? I don't know the interactions, but the current behavior is that we work around it by pressing retry. So it still happens, I saw it within the last 48 hours, but it's an easy thing to work around; it's not free, we do another run, and the next run often passes. It's funny, because the retry should have the exact same amount of memory; that's what makes this one odd and surprising, at least for me. Really? I did the same thing and this time it didn't get an out-of-memory, it didn't give this message. If I remember correctly, the Java heap is set at the launch of the JVM, you set the property, so it's not changing between runs. And I assume this is an OOM kill somehow, right? This looks like a message coming from an out-of-memory kill, but again, I don't know. I agree with Damien: since it's Maven reporting the "Java heap space", that could be a child process spawned by Maven; the child process received an exit code 139. I'm not completely sure, because the whole agent process and its children should be killed; the whole container should be killed by the OOM error, because the kernel will take the whole container, kill it globally, and it will restart. So we need to check: whether it was OOM-killed; if not, check the memory usage metrics; check if JDK 21 has weird behavior inside container limits and requests; and check which cluster was used. Is there a failure correlation? For instance, all failures on DigitalOcean, or all failures on Amazon; pinpointing something with Azure or DigitalOcean could be helpful, because if they are all running on the same cluster and all failing, that can point us there, though it doesn't make it certain. Yes, but since we can...
Yeah, we got the name, we got all the data, exactly; we should be able, using Datadog, to pinpoint one cluster or the other. Thank you. And also, try to run on a VM agent instead of a container to see if it disappears, because the virtual machines have way more memory. Yes, but if it works on a virtual machine, that means there is something related to memory allocation with the container when it fails; so there is a problem with memory. Yes, but you don't know that until you see whether the same happens on a virtual machine: then you can pinpoint JDK 21, or the Maven configuration used to spin up Java processes, whereas right now we don't know, maybe it's only the container setup. You're right, that can pinpoint the problem. OK, so it's not a blocker, but it's really annoying. Is my understanding and statement correct, Marc? Yes, not a blocker.

So I believe we've covered all the new issues. We have new things here, DNS and the update center? Yep, everything is covered on my side. Do you have others? The removal of the plugin will not be as easy as you thought: we got a message, I forgot from whom, and there is a dependency on the old plugin. Oh, really? I thought I had no trouble removing it. Where do you see the message? In one of the pull requests. Oh well, we should get rid of cobertura as well, so that's a good thing. Do we need cobertura on infra.ci? No, I already made the change on infra.ci for the weekly and only saw this afterwards, so if we still needed cobertura we would have seen the failure. And I have the same question for release.ci: do we need the code coverage API plugin?
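The failure-correlation idea from the out-of-memory discussion above (are all "Java heap space" failures landing on one cluster?) is easy to sketch. The record fields here are hypothetical stand-ins for whatever the Datadog query would return:

```python
from collections import Counter


def cluster_correlation(failures):
    """Count OOM-style failures per cluster. If one cluster holds the large
    majority, that's a hint (not proof) the problem is cluster-specific.
    Each failure is a dict with a 'cluster' key."""
    counts = Counter(f["cluster"] for f in failures)
    if not counts:
        return None, 0.0
    top, n = counts.most_common(1)[0]
    return top, n / sum(counts.values())
```

A share close to 1.0 for, say, DigitalOcean would justify the next step discussed above: rerunning the same build on a VM agent or another cluster to separate a container-setup problem from a JDK 21 or Maven one.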
I really like having the same set of plugins everywhere, and coverage is the modern thing; I like it a bunch. As a Java developer I've been really pleased with this particular plugin: on a recent change I made to Jenkins core, it mattered a lot that this facility was available. The coverage reporting does such a good job in Jenkins, it's really cool. OK, but do we need it on release.ci? Maybe not. For me, on ci.jenkins.io I definitely need it; there's no problem in that area. That works because on ci.jenkins.io I didn't see it reinstalled after restart; well, I think it's not installed at all. Cobertura? I don't think so; I don't know why it would be. Yeah, so it's not on ci.jenkins.io; OK, so that's why it's working. Nice. OK, let's remove cobertura everywhere then, since I don't see why we would need it. Yeah, and Ullrich Hafner regularly asks that question; he notes that the other plugins now provide the features the cobertura plugin provided, and they actually do a better job of it. Warnings NG now handles cobertura output very nicely. So if it's OK for everyone, let's remove cobertura as part of this pull request. Stephane, you are allowed to push on my pull request, of course. Sorry, I already merged your pull request on the other one; no problem. Is that OK for you? And I will take care of reporting on the main issue, is that OK, so we share the burden? Cool. And then, as pointed out by Alex, once we deploy the new images without cobertura and with the new coverage plugin instead of code coverage API, we will see whether the code coverage API plugin is still installed, so we might need a manual operation. Is that OK for you? Yes, we have to check. Yes, OK.

While we're talking about this coverage item: on ci.jenkins.io there is old data from the previous coverage plugin that can be discarded if I press the discard button under Manage Old Data. I see no reason for us to keep that old data; it probably would have been better if there was... OK, so I've clicked the "Discard Unreadable Data" button; it's going to take a while because there's a lot of it. Yep, thanks Mark. OK, I don't have any other actions; is there something else you want to mention or discuss? OK, no. So then, let's get back to our normal working day. Hope everyone is doing OK; see you next week. I'm stopping the recording of my screen. Bye bye.