We're going to start the recording. Hello everyone, and welcome to the Jenkins Infrastructure weekly team meeting. It is December 13, 2022. Around the table we have myself, Damien Duportal, plus Bruno Verachten, Stéphane Merle, Hervé Le Meur, and Mark Waite. Let's start with the announcements. So, weekly 2.382 is past the technical steps; as usual, the rest of the checklist will be created a bit later today. That's weekly 2.382. Oh, wow, that was fast. Thanks. Hello, Kevin. So this weekly is now out in terms of packages, files, and images. I assume the changelog is being reviewed, if that's not already done. So that looks good: "Reviewed, approved, updated, and queued for merge, thanks to Kevin's work." Thanks, Kevin. So, yes, nothing else on the weekly from the infrastructure side. As Stéphane said, our infra.ci instance is now upgraded to this new version, so we'll see whether it works or not. The next announcement is just a reminder that between Christmas and New Year's Eve, nobody will be around in most European and Western countries, so let's assume we are not running the meeting of December 26. Since it has been deleted from the Jenkins calendar, that's a fair assumption. I like that; we'll be consistent with the calendar. One more announcement: this Sunday, repo.jenkins-ci.org, the Artifactory instance sponsored by JFrog for us, will be down, which means all Jenkins builds will be down; nobody in the world will be able to build Jenkins, a Jenkins release, or other components during the downtime.
I think what you'll want to do is put ci.jenkins.io and its friends, or at least ci.jenkins.io, into some kind of maintenance mode, since everything will be down during that window. Maybe it would be smart to pause the update center jobs as well, just to be sure that if we start getting missing artifacts we don't generate a broken update center or broken builds. So I think that's a question for Daniel, because there was a timer on the update center jobs to make sure they run very, very often, and in this case they could be down for six hours. And the question to Daniel is: what is the impact if it's down for six hours? Do we have an impact on Jenkins controllers worldwide, and so on? On the contrary, we should have no impact on Jenkins controllers, because the update center file is still there; it will just be stale. The last time we had a general issue with the update center, it was stale for 20 hours. The people who noticed were us, and the plugin maintainers who wanted their releases to be published. For that second group of people, they should be reached by email. We have a blog post, so I'm not sure we can contact them directly; eventually, maybe on the Jenkins community forum, but yeah. And for us, there will be an action item: monitor the age of the update center and raise an alarm on the status page if it gets older than six hours. But if that's all right, we'll sync with Daniel. You have good historical data points saying that we don't even need to ask Daniel, because we have had outages in the past, and it didn't cause any bad vibes among Jenkins users. All right.
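The "alarm if older than six hours" idea above could be sketched roughly as follows. The six-hour threshold comes from the discussion; the update center URL in the comment, the function names, and the alert text are illustrative assumptions, and the check relies on the file being served with a Last-Modified header:

```shell
#!/bin/sh
# Sketch: alert when the published update center data looks older than 6 hours.
MAX_AGE_SECONDS=$((6 * 3600))

age_exceeded() {
  # $1 = Last-Modified as epoch seconds, $2 = current epoch seconds
  [ $(( $2 - $1 )) -gt "$MAX_AGE_SECONDS" ]
}

check_update_center() {
  url="$1"
  # Read the Last-Modified header of the update center file
  last_modified=$(curl -fsSI "$url" | tr -d '\r' | awk -F': ' 'tolower($1)=="last-modified" {print $2}')
  epoch=$(date -d "$last_modified" +%s)
  if age_exceeded "$epoch" "$(date +%s)"; then
    echo "ALERT: $url has not been regenerated for more than 6 hours"
  fi
}

# Example invocation (assumed endpoint, left commented out):
# check_update_center "https://updates.jenkins.io/update-center.json"
```

`date -d` is GNU date syntax; BSD/macOS would need `date -jf` instead.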
I appreciate that there is even a process to follow if it gets too stale: it's automatic for a few hours, and beyond that there is a manual process to follow that we have documented somewhere. Absolutely, I hadn't forgotten that, but yes. So yes, that should clearly be OK. Any questions on the announcement of this operation? I just have one note. I just remembered that I did not submit a tweet for the JFrog blog post, so I will submit that to the outreach channel, so that we can have a tweet for the blog post as well. Cool, thanks a lot for the reminder. And we should probably tweet it again, or send another announcement on Friday. And then, if someone is awake at the time they start, they could ask for something similar on the Sunday. I'm not sure I will be awake at the time they start. Do we have a time estimate, Damien, or not? No, I haven't had an answer yet, so we'll try to reach them today. We don't know what the downtime window will be, so I'm assuming the whole day. Yes, it could be a surprise, we'll see. Thanks, folks. Not that there is a calendar invite anyway. Next: the weekly release, as usual, next week, so that will be December 20, 2022. The next release after that is planned for January 11. I'm not sure whether there will be a security release. The next big event is Devoxx; that's as much as I can say. We also have a December 20 deadline for the Jenkins project on Google Summer of Code. And I have an open question: I assume we'll just let the weekly release of December 27 proceed, because skipping it is more hassle than doing the weekly release. Is that fine with everyone?
I'll probably be around to check, so don't feel bad if we don't do it. Yes, it would just be the weekly release a bit later, but it should be fine. We're not going to try to rush the weekly release like last year; that would be painful. OK. A word on the work that has been done: we closed the account issue, which looked like a suspicious request to me, because the person had created their account one or two days before and had never done anything on GitHub, and the email was absolutely not correct. There is no email matching that one, and the account is not linked to that person in any way. I asked the question and closed it after 21 hours, just to put a bit of pressure on the requester. Maybe it's legitimate, but yeah. One word on that: I took it on myself to close it, it wasn't planned. It might be a mistake; if that's the case, I apologize in advance. Java 19 support, thanks Stéphane. Java 19 is generally available to all developers on ci.jenkins.io and on all the infrastructure machines used as build agents. A reminder that this JDK is not an LTS, so its end of life comes quickly. We'll talk about that a bit later; there is a new subject on that topic. But right now it's available, so thanks, Stéphane, for that work, a Christmas gift in advance. Repository move: the rename of the Jenkins infra governance meetings repository, so that's the website meetings.jenkins.io. There were some misunderstandings, so now it's back to its original state. Just a note for everyone: when GitHub Pages refuses to use a custom domain, you have to validate the custom domain by creating DNS records generated by GitHub. But please be aware that when you validate a domain in GitHub, there are three locations where you can do it, and it's easy to make a mistake.
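As an aside on that DNS step: per GitHub's documentation, the verification record is a TXT record named `_github-pages-challenge-<ORGANIZATION>` under the domain being verified. A quick sketch for checking it has propagated before clicking "Verify" (the org and domain here are placeholders, not the real values):

```shell
#!/bin/sh
# Sketch: build the name of the GitHub Pages domain-verification TXT record.
challenge_record() {
  # $1 = GitHub organization, $2 = custom domain being verified
  printf '_github-pages-challenge-%s.%s\n' "$1" "$2"
}

# Check the record is visible in DNS before verifying in the GitHub UI:
# dig +short TXT "$(challenge_record my-org example.org)"
challenge_record my-org example.org
```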
So thanks to GitHub support for helping me on this one. You can go to your account, where there is a security area with verified domains, but that only covers your account, your account's repositories, and your forks. You can go to your organization's security settings and verify a domain there, but that one doesn't work for GitHub Pages. You need to go to your organization settings, Pages section, and validate the domain in that subsection. That is where you have to verify the domain with DNS. It took me some time, and a ticket to GitHub support, to understand that, but now it's back online. So the repository has been renamed; thanks to Hervé for the proposal, based on the misunderstanding in the discussion, to make it explicit that it's an archive. And there is another issue open, we'll talk about that; the ball is in the board members' court. They decide if they want a new repository for this, but yeah, discussion needs to happen before we act. Let me jump in before you move on from that one. It was discussed in the board meeting on Monday, and we decided that we would continue the discussion in the GitHub issue, to be sure we all have the right context. I didn't bring enough context to the board meeting to have a good discussion there, but the board members that were in attendance, me, Uli, and Alex, the three of us agreed we would continue the discussion there. Okay. Do you want me to attend the next board meeting to explain a bit? No, no, no. We should be more than capable of taking part in a GitHub discussion without requiring that you attend a meeting in order to do that. Oh, that's not what I meant; it's more that it could keep the discussion from taking too much time, by having a summary. That's very kind of you. But I think our next meeting will be in January, and therefore, if we can't resolve this question asynchronously over the course of the next four weeks, we should be rather ashamed of ourselves. OK.
So, just a reminder, given how it happened: there will be a vote of all board members and a validation at the following weekly team meeting before acting. And that is just fine. Yeah, just to be sure that no one acts too fast. Right, not a problem; I think a few of us did not correctly evaluate the sensitivity of that topic. We don't have guests today. So I prefer having a long discussion, even if it takes time, and then it will be OK for everyone; consensus is key for that specific topic. Thanks very much. Agreed. Accounts: accounts.jenkins.io no longer returns HTTP 500 errors. Thanks, Tim, for that part. That was also an opportunity to archive another repository and a component inside our system. The error was due to a former component in charge of synchronizing LDAP with Jira. But since Jira has been on the CDF side for at least two years, and that synchronization is done automatically on the Jira side, it wasn't used anymore. So thanks, Tim, for the help on the cleanup there. We had an issue with the Windows virtual machine agents for ci.jenkins.io failing. That has been fixed; thanks, Stéphane, for the help on that one. We had a short-term and a long-term fix; it was due to templating. Here, just a note for the people following the platform SIG meeting: the Windows container agents, for inbound agents, have a slightly different behavior than the rest because of the way PowerShell and the entry-point script parse the arguments. It sounds like a tricky topic because of the shell and parameter interpolation. But what bothers me, as an administrator and for users, is that the behavior differs between the virtual machines and the containers. So I'm not sure how the virtual machines are handling that part; it could be interesting to mention it. I'm also a maintainer of the inbound agent image, so I can take part; the important thing is to share the topic. Next: various plugin downloads failing with an HTTP 500 response.
So, we found an issue on mirrorbits. It's not really an issue, because while there is no documentation, it seems to be normal behavior based on reading the code: when you remove a mirror, there is a short outage of a few seconds that can extend to one minute, the time it takes for the Redis state to propagate to all replicas inside the cluster. That's what happened when Hervé cleaned up former mirrors; we didn't know, so it generated HTTP 500 errors for some people downloading plugins. That means we might need to write a runbook for the next time we have to delete one. It's not the case when you update or create a mirror, only when you remove one from the list. The first step would be to scale the cluster down to a single replica, apply the change, and scale up again, just to be sure that even if there is an issue, it doesn't last that long; it's eventually consistent. Next, a couple of security warnings: that one wasn't about the infrastructure itself, it was more a track for the data center manifest work that has been done. We renewed the certificate for repo.jenkins-ci.org prior to the migration. Thanks to Kohsuke, who immediately provided a certificate for one year. So that one is OK, and JFrog is aware. "Not able to fork Jenkins repository": we were just waiting for confirmation from the contributor, which they gave. HPI file in Artifactory: so, it's a brand new plugin, and that brand new plugin had multiple issues. There were some holes in the plugin developer documentation, as I understand it, in the area of Gradle, because they use Gradle for building. So we raised the topic. Their initial release is out. They should move to Maven in order to benefit from the automatic documentation on plugins.jenkins.io. And there was an issue with the Javadoc due to that plugin; we have an open issue on that. Next: cleaning up the resources of the production Confluence.
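Going back to the mirrorbits point, the scale-down / remove / scale-up runbook might look like the sketch below. The namespace, deployment name, replica count, and the idea of running the `mirrorbits remove` CLI inside the remaining pod are all assumptions for illustration, not the team's actual production setup:

```shell
#!/bin/sh
# Draft runbook sketch: remove a mirror without serving HTTP 500s while the
# Redis state propagates across replicas. Names and counts are assumptions.
KUBECTL="${KUBECTL:-kubectl}"   # set KUBECTL="echo kubectl" for a dry run

remove_mirror() {
  mirror="$1"
  # 1. Scale down to a single replica so there is nothing to propagate to
  $KUBECTL -n mirrorbits scale deployment/mirrorbits --replicas=1
  # 2. Remove the mirror with the mirrorbits CLI inside the remaining pod
  $KUBECTL -n mirrorbits exec deploy/mirrorbits -- mirrorbits remove "$mirror"
  # 3. Scale back up once the change has been applied
  $KUBECTL -n mirrorbits scale deployment/mirrorbits --replicas=3
}
```

The dry-run override makes the procedure reviewable before it touches the cluster.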
That one saved a bit of money, so thanks for that work. We have set up a dump of the database, which is inside an Azure bucket. That dump is encrypted with the GPG keys of a few people here, and it sits inside an encrypted bucket which is private, so we have different layers of security. The reason is that Confluence had a security section, so it's quite sensitive data. And the resources have been removed from Azure, which should save a few hundred dollars per month; it should be visible on the next bill. I don't remember the last issue. I think it was completely off topic. Yes, it was off topic. No questions on the work done, just a note from Hervé. Good, thank you. Renewals, I'm adding this back to the announcements: renewing all of the Jenkins DNS domains. That was under Tyler's system. So I understand that jenkins.io has been renewed successfully; is my understanding correct? Let me check to be sure I remember which ones it is. Yeah, so jenkins-ci.org has been renewed; jenkins.io will be renewed, probably within the next two weeks or so. OK. So thanks, Tyler and Mark. Yeah, thanks for the automation; all I did was monitor. And Kohsuke, oh yes, Kohsuke, for the renaming? No, I think for repo; this is different, that was an SSL certificate that Kohsuke did. OK, the certificate, yeah, sorry. No problem. New items: we had a short outage on ci.jenkins.io earlier today. PagerDuty showed it, and a few people like Stéphane and I saw proxy errors from the Apache in front of ci.jenkins.io. So the status page was updated, and updated back once ci.jenkins.io was up again. We have an issue to open to record what we did. No obvious error: we saw a peak in network traffic, a lot of incoming connections and outbound traffic as well, and a lot of system CPU usage. But no I/O peak at all.
So it was definitely CPU waiting on network resources. We checked the Jenkins logs: nothing. We thoroughly checked the Apache error logs: nothing weird. No weird requests in the access log either. So we are not really sure what happened. It spanned four or five minutes; we can see that clearly in Datadog. So we will open an issue and put everything there, and I think we will need to ask the security team for a double-check, just to be sure, to see what happened. In terms of infrastructure, nothing obvious. It was correlated with a peak of BOM builds, though, like 600 builds at the same time, but that's something we're used to. So, not really sure. Yeah, I've been helping with the BOM maintenance and I've been thoroughly impressed with how well ci.jenkins.io holds up; there have been times I think I've seen queues of over 800, and I may have seen queues of over a thousand, and it still survived. So yes, that's the result of a lot of work across the pipeline underlying work, tuning, and infrastructure work. Thanks everyone for that, because yes, it works very well. No obvious issue; the metrics only show a peak of network connections. Something we have in mind here for ci.jenkins.io is that the tuning of the Apache server is not always the best; it doesn't cache static assets, for instance. But one element here: we are not sure that it was Apache. Datadog does not show anything obvious; Apache was waiting for connections. So one working hypothesis right now is moving away from JNLP TCP connections to WebSockets, because that would greatly decrease the number of TCP sockets, since connections are reused by the Apache front server even when you upgrade to WebSockets, and it would avoid exposing the TCP port for the JNLP connection. So that's the working hypothesis: a peak of inbound JNLP TCP, because we saw a lot of agents spawn on EKS at that moment.
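For reference on that hypothesis: inbound agents can already connect over WebSocket instead of the dedicated JNLP TCP port, via the `-webSocket` flag of `agent.jar`. A small sketch of the resulting launch command; the controller URL, agent name, and secret file are placeholders:

```shell
#!/bin/sh
# Sketch: build the launch command for an inbound agent using the WebSocket
# transport, so no separate JNLP TCP port needs to be exposed.
build_agent_cmd() {
  # $1 = controller URL, $2 = agent name
  printf 'java -jar agent.jar -url %s -name %s -secret @secret-file -webSocket\n' "$1" "$2"
}

build_agent_cmd "https://ci.example.org/" "linux-arm64-1"
```

With WebSocket transport, the agent connection rides over the same HTTPS front end as regular traffic, which is exactly the socket-reuse effect discussed above.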
A word on OSUOSL, unless you have a question about ci.jenkins.io. So, I've checked in with OSUOSL because of an issue with the mirror. First of all, I introduced everyone from the team, at least Mark, Stéphane, Hervé and myself, because on their side everything was still linked to Tyler; even Olivier was an unknown person to them, as I understand the email, but I might be wrong. So first of all, I checked in with them. I have a runbook to write: the goal of this runbook is to document for the team what to do when we have an issue with OSUOSL resources; first, which OSUOSL resources we have, and where to open issues. There is an email address: we have to send an email, and it automatically creates tickets for us. We don't have access to their portal, and that's the normal case for them. As a reminder, OSUOSL, the Oregon State University Open Source Lab, is a sponsor. They provide us virtual machines and mirrors for free, and I understood they also used to provide us PowerPC machines on an IBM mainframe. No, it wasn't them; OSUOSL could provide those to us if we wanted them. We were actually using services from IBM for those. So my mistake, I misunderstood. So OSUOSL could provide us more resources: they have facilities, they have PowerPC for sure; I'm not sure about mainframes, but they have PowerPC available for sure. And IBM pointed us to OSUOSL if we needed them. But I don't see enough interest in that architecture, so we're not going to bother; no interest for the Jenkins project. The reason I'm mentioning it is that I shared with Bruno that it could be interesting to contact them about other architectures, because they also have ARM CPUs, and I assume they could be interested in providing RISC-V machines as well, if they are not already; that could be something to check with them. Will do. Yeah. So that could be interesting if we want to provide, let's say, ARM agents for ci.jenkins.io, or if we want to regularly validate plugins or Jenkins core on those. So that's related to infrastructure.
No questions on OSUOSL? OK. So we have the issue there, I won't repeat it: on Sunday, JFrog is going to migrate the repo.jenkins-ci.org instance for us; everything is in the issue. What about JDK deprecations? We have two JDK deprecations to discuss, or at least to mention. First, JDK 8. We are inside the critical area, which is how to make the life of plugin developers easier. For some time there have been, I forget the word, differences, let's say, between what the documentation and tutorials for plugin developers say, what the Maven archetypes used to generate a new plugin project produce, and what the documentation of the shared pipeline library maintained by the infrastructure team says. We had three different ways. One of the main issues is that for quite some time we tried to just call the buildPlugin function and let the default values take over, which was fine until we reached the JDK 8 deprecation. So half of the people involved state that JDK 8 should just be deleted and we should shift the default, and the other half of the participants in the discussion don't like surprises: shifting the default value could create issues. There is no easy consensus there, but one direction that everyone agrees on would be to work on making sure that, starting from this week, the buildPlugin function is not called with the default values, to avoid bad surprises. If everything is explicit, at least the list of platforms you want to build and test your plugin against, then you should avoid surprises in the future, and then maybe removing a default value could become an option. What's the impact? I was saying, I don't think that's our job, but would it be possible to have something automatically send a PR?
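The "be explicit" direction above would mean plugin Jenkinsfiles spelling out their configurations instead of relying on buildPlugin's defaults. A sketch of what that looks like with the jenkins-infra pipeline library; the specific platform and JDK values here are illustrative, not a recommendation:

```groovy
/* Jenkinsfile sketch: explicit platforms and JDKs instead of the defaults. */
buildPlugin(
  useContainerAgent: true, // illustrative option; not required for explicitness
  configurations: [
    [platform: 'linux',   jdk: 11],
    [platform: 'windows', jdk: 11],
    [platform: 'linux',   jdk: 17],
  ]
)
```

With an explicit matrix like this, shifting or removing a default JDK in the shared library would no longer silently change what a plugin is built and tested against.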