Hello everyone, welcome to the Jenkins Infrastructure weekly team meeting. Today is the 22nd of November 2022. Around the table today we have yours truly, Damien Duportal, plus Hervé Le Meur, Mark Waite, and Kevin Martens. As far as I can tell there is only one E; sorry, one day I will be able to write it correctly. I just dropped it in the chat to you, Damien, so you have it. Cool, so that should be your handle on Discourse. Yep. Okay, perfect. I think that's all. 1, 2, 3, 4, 5, 6, who am I missing? Oh, no, we are 5. Okay.

So the first announcement today is the weekly release. I don't remember the number, and I assume it's not finished: it had a minor issue at the packaging step. I'm not sure why. I'm showing the issue on the screen right now: it failed at the "synchronize mirror" step, which is really unusual, I've never seen that error before. There is a step where an rsync command is in charge of copying the files generated locally by the pipeline to a remote mirror over the rsync protocol, and that rsync step failed. At first glance, the error is a warning saying some files disappeared before we were able to push them: rsync initially lists the files, then says, okay, I will copy them one after the other, and one of the files on that list had vanished by the time rsync tried to copy it. Not sure why; that's weird. Did the agent go somewhere? Did the controller restart? I'm not sure. Just so you don't feel bad: it's not the Pipeline Utility Steps plugin, because that plugin is still present on the controller, unless you manually uninstalled it and restarted the controller, which would have reinstalled it. I haven't restarted it, but I have manually reinstalled it. Okay. So there is no obvious error here, but the script returned exit code 24, so I've kicked off a new package build, because that one is important. That won't trigger a new release: the release has been done by the core build, and now it's the core packaging, which we can re-trigger as much as we want until it works. So that one is easy. That's the status of the weekly: re-packaging in progress, then the Docker images and the other items of the release checklist. Is that clear and correct, or do you want to add something about today's weekly? Nope. One, two, three. Okay. Do you have other announcements? Okay, so let's proceed.

The next weekly meeting will happen next Tuesday, as usual. So the 28th of November, is that correct? That is correct. Cool. No, the 29th. It should be the 29th, because then the next LTS is on the 30th. Cool, thanks. Next LTS is the 30th. So please, next Tuesday and next Wednesday, try not to merge things on the infrastructure, and this will be your reminder. Yes, please. Next security release: none known, as far as I can tell. The next major event is still FOSDEM. Well, there is one other upcoming event: Mark has the action item to propose a two-week delay in the delivery of 2.375.2, because otherwise our regular four-week clock would put 2.375.2 right between the Christmas holiday and the New Year holiday for many of us in the Western world, and many of us won't be available during that time. So I will propose the usual pattern we've had in the past, which is to delay two weeks. Makes sense. Or it would be a way to push some Easter eggs into the next LTS release, right? But that's not who we are. That is a terrible thing to suggest. Absolutely not. If you need bad ideas, I have plenty. Thanks for that info, Mark. Okay, cool.
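For reference on that rsync failure: exit code 24 is rsync's "partial transfer due to vanished source files" warning, which is generally safe to retry. A minimal sketch of how a sync script could treat it, with placeholder paths rather than the real release pipeline:

```python
import subprocess
import sys

# rsync exit code 24 means "partial transfer due to vanished source
# files": a file listed at the start of the run disappeared before it
# could be copied. Source and destination below are placeholders.
SRC = "build/artifacts/"
DEST = "mirror.example.org::jenkins/"

for attempt in range(3):
    result = subprocess.run(["rsync", "-av", SRC, DEST])
    if result.returncode == 0:
        break                        # clean transfer
    if result.returncode == 24:
        print("some files vanished mid-transfer, retrying")
        continue                     # benign warning, retry the sync
    sys.exit(result.returncode)      # anything else is a real failure
else:
    sys.exit("transfer still incomplete after 3 attempts")
```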
Any other upcoming calendar item? Nope. Okay, let's proceed. What have we done during the past week, completely done? First, containerized Java 17 Windows agents. That one was long overdue: we were able to use Java 17 on all Linux kinds of agents and on Windows virtual machines, but the Windows container agents are the most used for Windows builds and tests for Jenkins core and plugins. So thanks, Stéphane, for taking care of this one. The initial plan was to have an all-in-one image built from a Packer template, like for Linux, so containers and virtual machines would share the same definition. Alas, we are late in that area. It's not that easy, because we lack deep Windows skills; we have enough to make it work, but it takes time. So many, many thanks, Stéphane, for this one. The next one is JDK 19, as far as I can tell. Yeah, we've got the issue already.

We keep getting a bunch of Jenkins login and access requests on the help desk, but the pattern changed: in the most recent one, people are asking us to give them access to an admin account with the password admin. So thanks, Hervé, because Hervé was able to ask the right question and got an answer to his correctly asked question, where our dear user said: yeah, it's for whatever private IP from whatever private organization. I hadn't closed the issue because I wanted him to respond before closing it. Yeah, sorry, so I closed it too; I don't think this person will come back. But at least we can be thankful for the fields and the issue type in the template. I've also started a pull request so we can pre-fill the issue type and the URL field in the template, with the ID and everything. I'm curious, what is the IP location for that service? I assume it's a range of public IPs belonging to someone. So let's do an IP lookup. It's a public instance: Amazon Data Services. Okay, so it's someone using AWS in India. Yep. There's not much more information. Okay. So is it okay for everyone if we continue like this: as soon as there is "admin" in the username, or the URL is not one that we manage, we immediately close the issue with a message saying, hey, this is not the issue tracker you are looking for? I think it would be interesting to first ask how they ended up on our help desk. I'll try not to close too hastily, and I propose we close them during the weekly meetings instead. Does that make sense? Yes. Thanks a lot, Hervé. So: login, Jira account, help-desk template; we saw it in action.

The Evergreen plugin has been archived, so thanks for this one. Evergreen used to be a project providing a simple but workable Jenkins instance that sent telemetry to the Jenkins infrastructure. That project has been long gone, so we had to clean up and archive this element. A Jenkinsfile now runs for jenkinsci/bridge-method-injector: that was a request to add a project to ci.jenkins.io. More components archived, and more login requests: login, login, login.

Backlog of already-built items on ci.jenkins.io. Hervé, can you give us a heads-up on this one, if you remember? We had been testing an incremental build of workflow-durable-task-step with a fix from Jesse for the past two weeks. Since we didn't hit any issue, Jesse merged his pull request and I've deployed the released version on ci.jenkins.io. Thanks. So it looks like that issue has been fixed. It was a bug that made some jobs that were stopped or cancelled still appear in the build queue: we could send a stop signal, but they were kept in the build queue unless you ran some Groovy commands on the script console.
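For context on that workaround: queue items can normally be inspected and cancelled through the Jenkins REST API as well as through the script console; the bug above was precisely that cancelled items lingered anyway. A rough sketch, with a placeholder controller URL and credentials:

```python
import base64
import json
import urllib.request

# Placeholders: a controller URL and a user:apitoken pair for basic auth.
JENKINS = "https://ci.example.org"
AUTH = base64.b64encode(b"some-user:some-api-token").decode()

def api(path: str, method: str = "GET") -> bytes:
    req = urllib.request.Request(
        JENKINS + path, method=method,
        headers={"Authorization": f"Basic {AUTH}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Show what is sitting in the build queue and why.
for item in json.loads(api("/queue/api/json")).get("items", []):
    print(item["id"], item.get("why"))
    # A stuck item can normally be cancelled by its id:
    # api(f"/queue/cancelItem?id={item['id']}", method="POST")
```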
So thanks for helping fix that bug, Hervé. We had a request, and thanks Stéphane for taking care of this one: Jesse asked to install the pipeline-github plugin, so that the BOM project can use some additional pipeline keywords from that plugin, specifically environment variables set by the plugin on GitHub repositories. So thanks, Stéphane. And finally, Keycloak. That was only the cleanup, Stéphane, is that correct? Yes. Cool. We should have a few more bucks on AWS thanks to your work. More bucks or more bugs? Bucks. Money. Sorry, Hervé, there isn't any other question about it, but I think we should talk about the cleanup of readYaml. I've put the link to your issue in the chat. Correct. So is that one done or not done? Done. Okay. No need to create an issue; I'm adding it to the notes, if that's okay for you, Hervé. Yes, sure.

So Jesse took the issue and basically inlined the readYaml usage we had in the pipeline library: he got rid of the metadata YAML file altogether and hardcoded those parameters on the command line. That was the last known usage of the readYaml function provided by the Pipeline Utility Steps plugin, which we wanted to remove because it's brittle and not as secure as we want. So I've removed the plugin manually from all our instances and from the docker-jenkins LTS image. The only thing left is the customWARPackager pipeline-library function, which uses the writeYaml and findFiles functions, but I didn't find any usage of that pipeline-library function, neither on ci.jenkins.io nor on infra.ci.jenkins.io. So I don't know if anyone knows of any usage. Judging by its name, I assume it was used for Evergreen, or maybe something like that. I think it was a project driven by Oleg where there was a website. No, no. I think Jenkinsfile Runner benefited from that project, but it was an online service that we hosted and removed a few months or years ago: a user had a web UI and could download a Jenkins WAR pre-packaged with a set of plugins, or pre-packaged with whatever settings they wanted, and the system generated the archive in the background for download. A bit like a custom Docker image, as far as I understand it. That project might have been a GSoC one, I'm not sure, but it has been completely gone; we removed it from the infra last year. So that was the main point. Maybe the library function is used by something else, in the ATH perhaps, but I don't think so; I think it has been long cleaned up. No, it's not used in the ATH. Mark, do you have any memory of the Custom WAR Packager project? Yeah, it was a Google Summer of Code project and it's idle right now. I'm not aware of anything that would require us to keep it running; if you find it helpful to shut it down, I'm not aware of anyone actively consuming it. I think there is only the function in the pipeline library right now, so if it's not used, I'd say clean it up. Right, safe to remove it. Let's remove that last pipeline-library function, customWARPackager. Thanks for all that work.

Just a reminder for everyone about why we removed that plugin: it provided useful but dangerous methods, because those methods are pipeline steps, so they run on the controller, while most of the time we want to run such things on an agent; a plain sh step should do it. So if you see a pipeline on the infrastructure failing because of a missing keyword, and that keyword happens to be on that list, it means the pipeline, or the pipeline-library function calling that keyword, should be changed. All of these steps can be replaced by a smart sh command, except the nodesByLabel keyword; that's the only one that is not that easy, but the rest are just reading and writing specific file formats.
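To make that replacement concrete: instead of the controller-side readYaml step, a pipeline can shell out to a small agent-side script. A minimal sketch, assuming PyYAML is available on the agent (the file and key names are whatever the caller picks):

```python
#!/usr/bin/env python3
"""Agent-side stand-in for the controller-side readYaml step:
print a single top-level field from a YAML file."""
import sys

import yaml  # PyYAML, assumed to be installed on the agent

def main() -> None:
    path, key = sys.argv[1], sys.argv[2]
    with open(path) as handle:
        data = yaml.safe_load(handle)
    print(data[key])

if __name__ == "__main__":
    main()
```

A Jenkinsfile would then run something like sh 'python3 read_field.py metadata.yaml version' instead of readYaml, so the parsing happens on the agent rather than on the controller.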
So kudos all around on making the infrastructure safer. Any other question, or a closed topic we forgot? Okay, so let's move on to the items currently being worked on; I'm taking them from the list on the right.

Separate Terraform backends and repositories, azure-net and azure. Let me add a link. That one was initially an idea from Hervé: have two Azure projects for Terraform, to separate the maintenance of the critical and really sensitive resources, such as the networks, from the rest of the infrastructure (creating virtual machines, clusters, and so on). That way we could have different roles, or at least different life cycles, and we could risk cleaning up one Terraform project without risking an impact on the whole infrastructure. That issue wasn't a priority until today, or at least until we discussed it yesterday as a team, because we discovered that our networks are not IPv6 compliant. We need to recreate all of our networks and migrate or recreate resources onto the new ones, so that people from the outside can reach our infrastructure over IPv6. We have a help-desk issue opened by a user two weeks ago about that topic, because IPv6 is mandatory in India since the first of September this year, which means we need to be able to provide our services to our Indian users with their default setup. It's also the right opportunity: since we are working on privatek8s, we will have to recreate the Kubernetes clusters anyway. We decided that, okay, it will delay our work on AKS and the new cluster a bit, but since we have to migrate everything, recreating from scratch on new, clean networks, without the current address-mapping issues and with IPv6 compatibility, this is the right moment to do it. So we might lose two weeks of work in terms of delivery timeline, but the gain will be IPv6 support out of the box and cleaning up all our network issues at once.

So, what's the status? I gave the why; now the what and the how. Were you able to start working on it? I have created the repository. I'd like you to confirm the name. I've duplicated the Azure folder and the Terraform states, but I haven't committed anything on the Terraform side yet. Okay. And then we'll have to start working on creating the virtual networks. Okay. So we keep it for the next iteration, and let me just add a link to the user's IPv6 help-desk issue. That one is now a requirement for both privatek8s and AKS; more on this a bit later. Are there any blockers or messages on that topic that we should discuss now? Is anything unclear for anyone? Any question on that priority? No. You'll write the related issue later, right? Yes. So, as part of this new network work, I need to write an issue about the new IPv6 private network. It will sit between the issue we just talked about and the privatek8s one, because we need the new networks, and in order to join that new private network we need a new VPN machine dedicated to it. So I have an issue to write about the private network and this new private VPN machine. So yes, we have to create these resources, and we have to work out more details on that part. We already shared the knowledge, with the recording, between the three French folks in the room; now we have to translate that knowledge publicly, with links, in an issue. Any question? Okay.
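A quick way to see the IPv6 problem from an end user's point of view is to check whether a service publishes AAAA records at all. A small standard-library sketch (the hostname is just an example):

```python
import socket

# Does this service publish AAAA records? If not, IPv6-only clients,
# such as default setups in India, cannot reach it at all.
HOST = "ci.jenkins.io"

try:
    infos = socket.getaddrinfo(HOST, 443, family=socket.AF_INET6)
except socket.gaierror:
    print(f"{HOST}: no AAAA record, unreachable for IPv6-only users")
else:
    print(f"{HOST} IPv6 addresses:", sorted({i[4][0] for i in infos}))
```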
The next issue on the list is: migrate halkeye's jenkins-io-components to jenkins-infra. Hervé, can you give us more information on this one? So, Gavin transferred his repository of web components for jenkins.io into the jenkins-infra GitHub organization. I haven't closed the issue, as I want to put in place the service account for npm, so we wouldn't have to put a personal token in the repository to access the npm account that was unblocked in the previous weeks. Okay, so: adding the npm service account. Thanks for the status. Any question or clarification on that topic? Nope. Okay.

Next one: publish the pipeline-steps-doc-generator and backend-extension-indexer artifacts to some kind of storage. The name is misleading, because the core of that issue has already been done for two weeks now: we don't need ci.jenkins.io to be up in order to generate the jenkins.io website, which is a nice thing for the security team. There were two files generated by these two projects, zip files containing auto-generated AsciiDoc for the pipeline steps and other elements, that are then consumed by jenkins.io to generate some pages of the website. For instance, if you go to the jenkins.io documentation, on whatever page, and follow the left menu to the Pipeline Steps Reference, you see an automatically generated list of the pipeline keywords provided by all the plugins. So the goal of these jobs is to parse all the plugins and extract the information needed to generate these pages. We fixed that by migrating the pipeline to infra.ci, because it needs a private credential that must not be on the public ci.jenkins.io, and that credential required us to move the pipeline to infra.ci as a first step. But we forgot, and we have been reminded by Daniel and the team, thanks for that, that end users still want, or should be able, to contribute to these two projects. So they need a publicly available CI, which means at least building and testing these projects publicly. So Stéphane is following the work I did on the next item we will discuss, unifying the Jenkinsfile, so we have one Jenkinsfile consumed by both the public ci.jenkins.io and infra.ci, and only the deploy step runs on infra.ci. Stéphane, can you give us a status on where you are and whether you have blockers? I started with the first one, pipeline-steps-doc-generator, and in order to have it work fluently on ci.jenkins.io and infra.ci, I had to open two distinct PRs to get the same label for the agent, one on ci.jenkins.io and the other on infra.ci, so that we can switch back and forth between the two and stay compatible. Okay. So the work is these two back-and-forth PRs, forcing the same agent labels and tools to allow one unified pipeline across both controllers. Is my understanding correct? That's exactly it; you worded it better than me. No problem, I just want to be sure I understand. Okay. So is it okay for you to move that issue to the next milestone? Yes, please. That one is on the 29th already; let me clean up. Okay.

So the next item on my list is: stories.jenkins.io is not handling pull requests. I've already finished the work. It's the same idea: we need both public and private. infra.ci deploys preview websites on pull requests, but also deploys the main website on Netlify, so it has specific credentials and must stay private.
Still, ci.jenkins.io needs to at least build, lint, and test the project publicly. The unifying work has been done by Stéphane and me, pairing, so Stéphane is now autonomous to do the two others, as we just stated. I was about to close the issue, but I hit a problem: on infra.ci, the size of our virtual machines in the Kubernetes cluster is a bit too small compared to the amount of resources this build requires. On ci.jenkins.io we have big machines where we pack a lot of pods; this build requires 4 CPUs and 8 GB per pod, but the infra.ci machines are limited in CPU. So right now infra.ci is not able to deploy anything for that project, and I need to find a solution. My option is to add a worker pool of a memory-optimized kind, with bigger machines, on the temporary privatek8s cluster, which should be usable by this build. And then, if it works, I will open a pull request on the Terraform side for privatek8s to add that same kind of node pool. So I'm adding it to the next iteration; that should be okay, unless there is a question or you need clarification on that one. Let's continue.

The next issue is: remove the Windows Slaves plugin from ci.jenkins.io and related servers. Mark, I remember that you removed the plugin, or at least checked it wasn't installed in our Docker image, is that correct? I checked that it was not referenced in the image, but that didn't uninstall it. Thankfully somebody else did the work to uninstall it; I understand it's been removed. Thank you very much to whoever the kind person was that did that. Yeah, since we had the warning issue while I was upgrading plugins on ci.jenkins.io, that was the opportunity, during a restart, to do it. So I took the opportunity. Is there anything left on that task, or should we call it closed? For infra.ci, I'm not aware of anything. One of the problems, if we can't remove it, is that because it has such an ancient Jenkins baseline, it can create an implied dependency on other ancient plugins, or other ancient plugins can create an implied dependency on it. So if we have any ancient plugins, my first preference would be to get rid of them too; but if we can't, then we could do a new release of this plugin to make it depend on a newer Jenkins version and resolve that issue. If we find we can't get rid of it somewhere, let me know, I have permissions to release a new version if I need to. Do you think you will be able to do the last check of the controllers, or do you want to delegate it to the team for that specific issue? Let's see: it would probably be easier on me if someone else took the last checks, simply because I'm not sure I'm on the VPN frequently enough to reach all the controllers. Okay, any volunteer? Okay, so I'm taking it then. Thanks a lot. So: one last check of all controllers before closing, and if an implied dependency from another plugin shows up, mark it for a plugin release to help. Any question, any clarification?
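On the implied-dependency worry: one rough way to check is to scan the public update-center metadata for plugins that still declare a dependency on windows-slaves. A sketch; note this only shows declared dependencies, so detached-plugin implications would not appear here:

```python
import json
import urllib.request

# Public update-center metadata (plain JSON variant).
UC = "https://updates.jenkins.io/current/update-center.actual.json"

with urllib.request.urlopen(UC) as resp:
    plugins = json.load(resp)["plugins"]

# List every plugin that still declares windows-slaves as a dependency.
for name, meta in sorted(plugins.items()):
    for dep in meta.get("dependencies", []):
        if dep["name"] == "windows-slaves":
            print(f"{name} {meta['version']} -> windows-slaves"
                  f" (optional={dep['optional']})")
```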
Okay, cleanup of unused resources in Azure. Hervé, can I let you explain the status of this one, please? The reason and the status. The reason is that I wanted to make a review of the resource groups in Azure, and we created this issue so we can track a cleanup based on what we review. The first review we made was about the prod-confluence resource group. There were two databases used before, for the Confluence we hosted, and each of them costs about $150 per month, so quite a nice saving here. I've started the dump of the main SQL database. The first one, the backup, is complete. For the second one, I had a few issues while running the dump from a Docker image running locally, so I now have to do it from a virtual machine, so I won't get network or dump errors. That backup is almost complete, and it has to be complete before we delete this resource group with both databases. Cool.

And I'm also giving the status on the prod-community functions. It's on the list to be cleaned up, because we decided, after confirming with, I think, Mark and Gavin, that the repository could be archived since it was used only for Evergreen. In Azure we have a resource group with Azure Functions, including this one, but also some infra functions. So I have to send an email to the developers mailing list, just to be sure that no one is using this function, because we saw a few calls, like one to four over the past month, but just a few, and we want to be sure it's not used for statistics generation or something we are not aware of. Once we have deleted this resource group we cannot go back, because the data only lives there, not in any repository. So the goal is to ask before deleting. The cost of this one is not much, as you can see, but it increased over the past month, not sure why, so all the more reason to ask before removing it. What do you think of creating one issue per cleanup: keeping this current issue just for prod-confluence, creating another one for the prod-community functions, and linking them to the review one? Yeah, I will close this one in favor of the two new issues then. I think this one can be closed when the backup is complete and verified. Yes, absolutely. Let's split community out into another issue, and ask on the mailing list. I'm on this one. Mark, do you have any memory of community Azure Functions that could be used by the infra, other than the Evergreen ones? No, not any longer on Azure Functions. We want to get off them anyway, so even if we found that, hey, there was a serious problem we inadvertently caused, I suspect we would still say we want to find a way to replace that service. I'm not aware of any requirement for them. Okay, cool. So, more issues; I'm adding this one to the next milestone. And as explained by Hervé, we will review the resource groups and their roles in Azure, weekly right now and then monthly in the future, just to be sure we don't have dormant resources costing us money. Thanks, Hervé, for leading that. Any question, clarification? Nope. Okay.

Windows agents on ci.jenkins.io disconnect prematurely. I thought we were done with this one. I thought so too. Yeah, the last step for Stéphane was to remove the Windows label on the agents. Oh, don't we need to update the acceptance-test job before closing? Probably. Okay, last step: update the acceptance-test job to stop checking this label and check the new, real ones instead. Stéphane, it's allocated to you; is it okay to work on it next? Yes, please; I thought it had been done, so I'm sorry. Okay.
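An aside on the Confluence database dump above: before deleting the resource group, it's worth checking that the dump actually ran to completion. A hedged sketch, assuming a MySQL-style database (host and database names are placeholders); mysqldump appends a "Dump completed" trailer unless comments are disabled, so its absence suggests a truncated dump:

```python
import os
import subprocess

OUT = "confluence-main.sql"
cmd = ["mysqldump", "--host=db.example.org", "--user=backup",
       "--single-transaction", "confluence_main"]

# Run the dump, failing loudly if mysqldump itself errors out.
with open(OUT, "w") as out:
    subprocess.run(cmd, stdout=out, check=True)

# Check only the tail of the file for the completion trailer.
with open(OUT, "rb") as dump:
    dump.seek(max(0, os.path.getsize(OUT) - 200))
    tail = dump.read().decode(errors="replace")
if "Dump completed" not in tail:
    raise SystemExit("dump looks truncated; keep the source database")
print("dump complete:", OUT)
```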
Next is the new privatek8s cluster. Before asking Hervé for a status, because you worked on it until the discussion around the networks: that one won't simply move to the next iteration, because we now know it's blocked by the creation of the new network. I've already done the initial work on that. So, we have a cluster ready on our current private network; it's almost working, and a lot of work was done. It's Terraform-managed, which is the value of that task: we should be able to recreate it easily once we have the new networks. So Hervé, first of all, can you give us a status? Assuming we didn't have the network issue and we were able to migrate: did you have blockers, and what are the next steps, delayed but still to be done? The next step for this cluster would have been to test our jobs on this instance. Okay. I did a storage class, cert-manager, Datadog, the NGINX ingress (still working on it), and also a Jenkins controller. Nice job; that improves a lot the future maintainability of these elements.

A proposal: since that issue is deprioritized in favor of the private network, is it okay for you if we first move the associated pull request on kubernetes-management back to draft? This one is only for public/private; I would merge it if it's okay with you, it only has the private and public NGINX split. Let me rephrase: we will have to trash that cluster and recreate it, so what do you think of removing the cluster now, to stop paying money for it until we are done with the network? That would mean merging this one makes no sense. But what would be nice, nothing mandatory, is that by merging this one I can prepare the next pull request, with the separation between the private and the public NGINX on privatek8s. Even if it's not deployed, I would have my pull request for the next step ready; it's still an extract from my first pull request, which was regrouping everything. My proposal is to move it back to draft, not close it, so it's still there; we don't throw away your work. But since we are going to trash the cluster to stop paying for it, merging it would not make sense. I don't want to merge it, but I want to keep it. Yeah, I see: this part will be the same later with the new cluster, and I could go further preparing the next pull request. This one I want to come back to. Yes. Okay, I understand. But then, if we merge it, you will have to open one pull request on the Terraform side to remove the cluster, and a second pull request on that repo to remove privatek8s from the deployment targets; otherwise, once we destroy the cluster, the kubernetes-management job will fail. That's why. Okay. So is it okay for you then: if it's ready, update it, we review and merge it, that validates your last assumption, and we close the task; then you clean up the cluster. Okay. Cool. So can you update the pull request on kubernetes-management so we can proceed after the meeting? Then you should be able to remove the cluster tomorrow. Sounds good. Yep. So, sorry, I'm moving this one to the next iteration, and I will add a comment explaining why.

The last item is: realign the repo.jenkins-ci.org permissions. Mark and I spent some time on it yesterday, and we will write it up after this meeting. The first thing is that Artifactory is eventually consistent. Daniel confirmed that when you create a new virtual repository, sometimes it works, sometimes it doesn't: the system says it's created successfully, but then you get different behavior depending on which distributed instance you reach, so you need to wait until the end of the day before everything is synchronized across the system.
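A hedged illustration of how that eventual consistency could be validated from the outside: poll the newly created virtual repository until it answers consistently for a while. The URL and thresholds are invented:

```python
import time
import urllib.error
import urllib.request

# Invented repository URL; the real check would target the new
# virtual repository created on the side.
REPO = "https://repo.example.org/artifactory/some-virtual-repo/"

def probe() -> bool:
    try:
        with urllib.request.urlopen(REPO) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

streak = 0
while streak < 10:                 # require 10 consecutive good answers
    streak = streak + 1 if probe() else 0
    time.sleep(30)
print("repository is answering consistently")
```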
Those are problems that hit us earlier. Right now we are still validating whether we can have a faster path; we should be able to confirm that tomorrow. We are doing the technical work without touching the existing setup: we create virtual repositories on the side to confirm the scenarios. Once we have confirmation one way or the other, Mark and I will finish the write-up and come back with proposals, including the migration path. The next step for me is to work on the LDAP replication, so at least we have a replica, ideally active-active, so that losing one replica doesn't take LDAP down. Any question on this one? Okay. So that's all for the work in progress.

Now, new issues you want to add. The generator, the Java 19 one, Stéphane? Oh, yes. Only if you have time for it before the next iteration; otherwise it stays here at the top of the backlog and you take it only if you have time. I promise, I promise. Okay, that doesn't answer my question: shall I move it to the next iteration, with you swearing, hand raised, that you will do it? No, no, leave it here and I'll take it in the next one if I have time; first I need to finish the other work. Okay. Yes.

Daniel had a question about the tweets. I don't know exactly since when the tweets have been published from the infra side. Yes, from the infra side. Someone volunteered, and now we are looking at that in parallel. But above all, as Daniel raised it, it's not a top priority. No, it's not a top priority, so let's not dwell on the subject. Pardon? I wanted to mention the topic, but not make working on it a priority. On the same subject, since we are at it, there is something I would like to mention as well: having a link redirector from which we could get some statistics. Because for now, when we publish something on Twitter, on LinkedIn or elsewhere, there are no statistics on the clicks, and as Daniel said, we don't know whether people click on these links or not; we don't know whether it matters or not. It would give us visibility on our media publications, and not only media publications. That's a good point. Do we have alternative tools? It doesn't need to be a big tool, it's really simple: I don't want to record the IP, I don't want to record anything personal, I just want to record the click count. There is no analytics product on the market that is GDPR-compliant out of the box as of today; just a note, it's basically not possible. Yes, and that's why it's such a simple thing. So, do you think there should be an associated issue? Yes, I think I'll open an issue to propose a link shortener or redirector of some sort. In terms of priority, even with the statistics it's quite low; I just want to flag that if you want to work on this, it would be a contribution below the infra priority line. No problem with working on it, though, because it's valid and really useful. And yes, we would need the infrastructure for it. I'm also thinking about what Elon is doing with Twitter: maybe we will need to abandon the Twitter presence at some point, and there are still debates about whether we should leave Twitter. Okay, so yes, it's a polemical topic, but it's really a good idea; thanks for mentioning it during the meeting. I think the action item is to open the issue with this idea and share it with the Advocacy and Outreach SIG asynchronously. I assume it will be discussed there, I'm just a messenger here, but yes, it's really a good idea for many of our media channels. And if you can build it, it could be an infrastructure service with anonymized data, that's right.
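In the spirit of "really simple, no IPs, just a count": a minimal sketch of such a redirector, with a made-up link table and port:

```python
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative short-link table; a real service would load this from
# configuration or storage.
LINKS = {"/weekly": "https://www.jenkins.io/changelog/"}
clicks: Counter = Counter()

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = LINKS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        clicks[self.path] += 1       # the only thing we record
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

    def log_message(self, *args):    # silence default logging (no IPs)
        pass

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```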
Cool. What do we have on Java 19? An issue opened by Stéphane as well. Then we have the quotas of pods. Oh yes, I forgot this one: last week during the team meeting we said we should open an issue, and the issue is now created. I'm putting it in the backlog; if someone has time, it's only a bonus. It's just to make sure that the two configuration files we have, in two different locations, carry the same number. I think that's all for me. Do you have other topics? That's all for me too. Cool. We still have the warning, but I think I'll figure that out.

There is something I mentioned that we need to be careful about, but I forgot the date: the certificate for repo.jenkins.io. That's the one. So we have two, and it's really important. The jenkins-ci.org certificate at JFrog expires on, I think, the 18th of December. Last year, I don't remember why, so I can't say right now: we need to create an issue for this. We need an issue, and we need to act quickly, or we will break the JFrog instance for real. It's the 18th of December, yes? Okay, cool. So last year, and I don't know why, KK manually issued the certificate and sent it to JFrog, encrypted of course, because we have to send the private key and all the certificate material so they can install it on the repository. The certificate must have the right SANs, the subject alternative names, because there is a hostname directly in the JFrog namespace, and we also have the repo.jenkins-ci.org one, so the certificate must carry both hostnames in any case. We need to get the SANs from the current certificate. So the issue will require me, or someone, to do a deep dive into the discussion that happened last year. It took KK quite some time, so I proposed that this year we do the certificate renewal ourselves; we should be able to do it outside of Kubernetes with DNS records, and with Let's Encrypt it's easier. Mark and I have a meeting with JFrog next week, if I'm correct, so we will bring it up.
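Collecting the SANs from the current certificate can be done with the standard library; a sketch:

```python
import socket
import ssl

# Read the subject alternative names off the live certificate, to know
# exactly which hostnames the renewed certificate must cover.
HOST = "repo.jenkins-ci.org"

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

for kind, value in cert.get("subjectAltName", ()):
    print(kind, value)
```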
We must not forget, Mark, to tell them that we will also need to renew the certificate. Since they have no news about migrating the instance to their new SaaS system, we need to let them know, because if we have to prioritize, it's the certificate, since it will require a few minutes of outage on their side. It's expected, but there is still a manual operation to plan. Thanks for the reminder, Stéphane. And as Mark mentioned, the jenkins.io and jenkins-ci.org domains are renewed in December, is that correct? It's December or January. We have renewals for several domains; I can give you the exact dates if you give me a minute to take a quick look. And it's Tyler Croy who has automation that does it. The last time I bothered him about it, he told me: Mark, tell me when it's fewer than 28 days to renewal, so my automation has had a chance to run. That's very reasonable. So: on the 7th of January 2023 the jenkins-ci.org domain expires, and the jenkins.io domain expires on the 27th of January 2023. Okay. And as far as I know, both are automated. If we are within 28 days and they haven't been renewed, I'll start knocking on Tyler's door to see what's up with the automation. Should we change it? The last time I asked him, his answer was: I don't want to change the automation, it should keep running. Okay. So I will take care of adding reminders to the team calendar for 20 days before each expiration, so the whole team gets a reminder. Oh, that would be great. Thank you very much, Damien. But I won't share Tyler's address, no. Yes, and we shouldn't bother Tyler: he is still willing to help the project, but he is certainly not actively involved in the project anymore. Just pinging him is okay. So, you never know, Tyler. But believe me, don't try to beat him on this; he is clearly better at it than the four of us. That's on this subject. Okay. He was my manager for some time. Olivier is the manager. Yes. And Mark is a manager too; it makes a lot of sense. Thanks, Tyler, if one day by any chance you listen to this recording: thanks for the work you do for the Jenkins project. I think that's all. Is there any other topic? No, I think that's it. Okay, so I'll stop sharing and recording. I say goodbye to all the people listening, and then you can say something once we stop recording. Goodbye, and see you next week. Bye. Bye.