Hello everyone, welcome to the Jenkins Infrastructure Team meeting. We are the 5th of September 2023. Today we have myself, Damien Duportal, Hervé Le Meur, Mark Waite, Stéphane Merle, Bruno Verachten, and Kevin Martens. OK, so welcome everyone. Let's get started with the announcements. First, the weekly release: as far as I can tell, the weekly release 2.422 is out, with packages and Docker images. Mark, Kevin, I believe you are working, if not already done, on the changelog of that release and the last items, is that correct? The changelog has been merged. I wanted to see if it's visible. The changelog is visible. Container verification is not yet complete, but it shouldn't take much to do the container verification. OK. Thanks, so that means one action item for the infra team: we can proceed to update the Docker Jenkins weekly base image and deploy it to infra.ci and weekly.ci. Ready to go. Ready to use. OK. Hello, we have a new participant who joined us on the IRC channel one or two days ago. Hello. Yeah, sorry, I'm a bit out of sorts. No problem. I'm pretty new to open source as a whole, but yeah, I feel like I could make some great contributions to Jenkins. I mean, I have worked with Jenkins in the past and, yeah, I'm pretty grateful for it; it has allowed me to earn a salary, anyway. Cool. So I've shared, I'm adding back the link because I don't see it: we have collaborative notes for this meeting that you can follow, at least in read-only. The notes have been added on the IRC channel; it's a hackmd link. That gives us the same structure every week when we run the meeting. So if it's OK for you, we will proceed with the usual steps of the meeting so you can see how we tend to proceed, and then we will take some time at the end if you have questions, if you want to discuss some elements that are unclear, or if you have a point you want to contribute to. Is that OK for you? Yes, that is really OK for me, thanks. Cool, so we were just on the announcements area.
So the weekly release is out, as every week. Do you have other announcements, folks? I don't either, so let's have a look at the upcoming calendar. The weekly release 2.423 is next week, September 12, 2023, as usual. Nothing special here. Is there something special about that release? No? Cool. Next LTS, of course, I have already forgotten when it will be. 2.414.2, September 20. September 20. OK, so we have some time before thinking about that LTS release. So, well, release candidate tomorrow. Say tomorrow. And Kevin or I are responsible for the changelog, huh, Kevin? Nice, that's correct. Thanks. Do we have a security advisory? I believe one has been announced here today. So we will have a plugin security advisory tomorrow. Correct: September 6, 2023, tomorrow, plugins only. So please don't break trusted.ci tomorrow. I did it today for that exact reason.

Next major event: we have the upcoming DevOps World, where we will have community members present. Is that correct, Mark? That is correct. The DevOps World Tour in New York is September 13 and 14, and then Chicago is, I believe, September 28. Cool. So I believe both of these editions will have at least one community member present. Correct. I take it back: September 27. And then Santa Clara is the next month, but we can talk about it later. OK. So if you want to meet some of the community members physically, that's a good opportunity, at least for the ones based in the US, because I believe none of the poor Europeans here are going. Yes. OK. No more major events? Nope. OK.

So let's start to review the work we were able to close and finish during the past week. That's the milestone whose ID is 79. I'm going to proceed in the order on my screen; sometimes the order changes in the notes, so we will update as we go. The first one is: remove the IP restriction on the bastion in favor of the VPN. It's finished.
So, since 30 minutes ago, if you need to reach trusted.ci.jenkins.io, you absolutely need the SSH bastion gateway, but you also need your VPN access to be enabled and connected, and you need a user account on the VPN which is allowed to route to the trusted.ci virtual network. I believe Stéphane, I, Hervé, Tim, and Alex have that access because they are known to be there. Mark, I haven't added the access for your machine because I know you haven't set up the VPN. But if you need it, that means you need to set up your VPN. Yes, and I am way behind in getting my VPN configured. No problem. Now we only have one VPN, so that should be easier.

Just a note: that required peering the networks with each other, and the naming that was used for the original virtual network was creating issues and overlaps. So earlier today, this morning in Europe, I had to destroy the three trusted virtual machines and recreate brand new ones on the new virtual network. That means our confidence in our ability to restart this critical part of the infrastructure is good: we can do it in less than 24 hours without losing any data. It's just that it kept the update center index from being updated for two hours; that was the external impact. I haven't seen any other problem. I reported the minor issues I saw during the bootstrap, but at least it's written down if someone has to do it in an emergency in the upcoming months. Is there any question, anything unclear about that task? Then we can proceed to the next one. One, two, three.

Okay, next one. Stéphane: virtual machines, improve command prompts to avoid confusion between services and controllers. Can you just give us a report of what you did and what is the result of that closed issue? I did not do much. I extended, in Puppet, the way we provide the prompt, and now we have the variable and the colors directly within the prompt. Of course, the screenshot I did was before the merge and it's from Vagrant, so we can see only localhost. But we should see it, now that it's merged.
I haven't checked since it was merged, but we should see the correct name, the full name of the controller. Exactly. The goal was to avoid confusion when some operations team members are on one of the three virtual machines of the three controllers, just to be sure you don't run a command expecting to be on trusted.ci while you are actually running it on ci; these are not the same instances. I still need to do the nice-to-have; the nice-to-have is not done: changing the color depending on which controller is not done. OK. It's a nice-to-have, so you can comment back on the issue later. But the most important part, what was requested by the Jenkins security team, is there: now they have the fully qualified domain name inside the prompt. So now, for the color, you have to decide on a color with Daniel. Yes. Deciding the color is always the most difficult part; the rest is implementation. No, no, I will let him decide and I will put what he wants. That's fine. The specificity is that he is color-blind, so it's important to find the right one. Of course. Yeah, that would be hard. I think it's really great that Daniel has this particularity because he's really keen on detail, and so he notices a lot of things. Yeah, absolutely. That's really great help. Thanks for the work, Stéphane.

Still your turn: ATH builds commonly become unresponsive. That was an old issue and there was one last step. We wanted to run the first try of an ATH build step on a spot virtual machine instance, because it's really cheap, ten times cheaper than the usual instance. And we wanted to implement something so that, if we have to retry because the spot instance was reclaimed by the cloud provider, the second run executes on an on-demand instance, to ensure that the developer still gets a result in the end. We don't want an infrastructure spot-related issue to break the build. Stéphane, can you report? Is it okay for you? Were you able to...
Yes, we even tried to kill one spot instance manually to make sure that the second try would be on the on-demand one, and that worked. So, yes, that's working. So we should start to see results when I do the weekly billing reports on the Azure clouds. In one or two days I will report, and next week on that result. We should see the direct result of this, because we are back to spot instances by default. Any question? Anything unclear? Cool. Thanks for the job here.

Next: the SSL certificate for ci.jenkins.io was expiring in only 23 days. Thanks, Mark, for raising that issue. It's fixed. I've detailed the problems on the issue. When I migrated ci.jenkins.io to the new virtual machine, in order to bootstrap it without Apache failing, I copied and pasted the former certificate and Let's Encrypt setup. It appears that when you create a Let's Encrypt certificate using certbot, it generates what is known as an account, used when connecting to the remote API to differentiate calls from different virtual machines. So the new attempt was trying to use a new account with a different set of parameters. And one of the parameters that was required for a successful renewal had been set manually years ago; I traced it back to 2017. Since it was persisted, the renewal always used the same configuration. But that thing was done manually and never put inside configuration as code, hiding it from us. Thanks to that, I was able to understand the issue. I've linked the nice article that helped me understand the problem. And then it was fixed with a pull request on Puppet, so that one shouldn't be a problem anymore. The certificates have been renewed after applying my fix; I didn't renew manually, I let the system do it, and it worked. So thanks, Mark. Is there any question or anything unclear on this one? Just one, I'm sure. I mean, have you ever thought of a way to monitor the SSL expiration before?
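(As an aside for readers of these notes: the kind of expiration check being discussed can be sketched in a few lines of Python. This is a hypothetical illustration, not the team's actual monitoring; the host name and the 30-day threshold are just examples.)

```python
import socket
import ssl
from datetime import datetime, timezone


def days_remaining(not_after: str, now=None) -> int:
    """Days until a certificate 'notAfter' date such as 'Sep  5 12:00:00 2024 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days


def check_certificate(host: str, warn_days: int = 30, port: int = 443) -> int:
    """Fetch the TLS certificate of `host` and return the days left before expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    left = days_remaining(cert["notAfter"])
    if left < warn_days:
        print(f"WARNING: certificate for {host} expires in {left} days")
    return left
```

A scheduled job calling something like `check_certificate("ci.jenkins.io")` is all a minimal alert needs; the point raised in the discussion is to warn well before the renewal window, as the existing Checkmk setup does.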
I think everyone has thought of that already, right? Yes. We have Mark Waite for that right now; that's the reality. And Mark has a system that does it. I actually use Checkmk. And we've also got Datadog that we will eventually use for it, but right now we haven't implemented it in Datadog yet. That could be a nice first thing. Can I ask you to take a note, and then I will write it down. Can you take a note on your own? That one could be a nice newbie issue, because our Datadog is managed by Terraform, right? Yes. I think some checks already exist. Oh. So we might want to look at them before creating a new one. That needs to be checked. But I think we need to update them, because of what they are checking: yeah, I remember when we shut the virtual machine down, Datadog alerted us that the certificate was not up to date. I believe it's not checking 30 days before renewal like Mark is doing. So that could be a nice upgrade to the existing monitoring. Does that answer your question? Yes, it does. Cool. So that means we should come up with a new issue that could be taken soon, if you're interested. Worst case, that will end up in the Hacktoberfest. But if you're interested, since you told us that you knew a bit of Datadog, you should be able to start checking our Terraform management repository, which is public. All right, lovely. Let's take a moment: should we codify that into an issue of some sort, or... Yeah. Yes. Yep, absolutely. If you are willing to write an issue for that, that's perfect. I planned to write it after the meeting and point it to you, but if you are willing to open one yourself, please go ahead; that would be a great help for us. All right, thanks. It's all to improve monitoring. Okay, thanks for this one. Any other question on that task? Nope, cool.

Next one: delete the legacy VPN-related resources. We used to have former networks, and these networks were only available through a VPN instance that lived at the vpn.jenkins.io domain name.
We don't need this network because we migrated the last remnants two weeks ago, so we removed that costly virtual machine. Yes, that was an eight-CPU, 24-gigabyte machine just for running a single VPN, because back when it was created, that was the only way to have more than two network interfaces on the same machine. We don't have that constraint anymore and we have migrated the networks. So, less money to give to Microsoft and more money to spend on builds of the ATH. Any question on this one? Okay.

Next: the plugin build pipeline stopped working. I believe that was... so, there was an error on a contributor's CI build on ci.jenkins.io, and the problem was located in the settings XML that we use for Maven, which we use to capture all Maven requests to retrieve dependencies. These requests are sent to our artifact caching proxy system. In that case, it was a weird, exotic Maven behavior, to say the least, where you define pluginRepositories instead of repositories in order to download Maven plugins instead of other dependencies. So we deployed a fix: I provided a short-term fix so they were able to continue working on their Jenkins plugin. Then, as infrastructure, we slightly changed the settings XML we deploy so we don't have that problem, and we proved that rolling back my change still works, and that allowed us to work on the Artifactory bandwidth reduction challenge. There is just one little problem with this one: it broke the integration tests of the Maven HPI plugin. Thanks, Mark, for pointing that out during a bank holiday in the US; I see someone was behind the computer on Monday. I reject your attempt to govern what I do on my day off. Thank you very much for trying, though, that's very good. Fair, fair. That's a fair one, I attacked you first, sorry for that.
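(For readers: the Maven behavior being described is that `<repositories>` entries declared in a profile only cover ordinary dependency resolution; Maven plugins are resolved through a separate `<pluginRepositories>` section, which must be declared on its own. A minimal sketch of a settings.xml that routes both through a caching proxy could look like the following; the profile id and proxy URL are illustrative placeholders, not the actual infrastructure values.)

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <profiles>
    <profile>
      <id>caching-proxy</id>
      <!-- ordinary dependencies go through here -->
      <repositories>
        <repository>
          <id>proxied-central</id>
          <url>https://artifact-proxy.example.org/public/</url>
        </repository>
      </repositories>
      <!-- Maven plugins resolve through this separate section -->
      <pluginRepositories>
        <pluginRepository>
          <id>proxied-central</id>
          <url>https://artifact-proxy.example.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>caching-proxy</activeProfile>
  </activeProfiles>
</settings>
```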
So, just... I forgot to link it; let me link the issue. Here it is, for info. We closed this. Okay, we will keep that setting because it's the only way for us to not depend on upstream Maven Central, and the problem that it caused is in an integration test setup, so that one should be easy to... no, it won't be easy, but it's not a blocker; it's not blocking anything, it's just something to resolve. Is there any question about that problem? So, just for clarity, you're saying that really what you learned was that it was a test failure, and it's a test failure specific to our configuration. Exactly, and we got it, thank you. The fix that we deployed to close that infrastructure issue needs to be reported somehow (that's the "somehow" I'm not sure about) into the settings XML provided to run the integration tests of the Maven HPI plugin, and that is complicated. Got it, okay, thank you. Thanks for asking, because I might not have been clear; it was clear inside my head, but maybe not here. Any question? Can I move on? Okay.

Next one: migrate cert.ci.jenkins.io. So cert.ci.jenkins.io, which is a private controller used by the Jenkins security team to prepare security advisories in advance, has been migrated to the new private network; that's the reason why we were able to delete the legacy VPN. The virtual machine now has more resources, is cheaper, and is using more powerful agents which are also cheaper. So that should be a good thing for everyone: more power, less money, that's perfect. I haven't seen any issues in the past six days on this one. The Jenkins security team users all confirmed they can reach and work with the instance, so the issue has been closed. Is there any question or clarification here? Okay. One last thing: we had a password issue that was closed because it wasn't related to an account on our side. Someone didn't give us enough information, and it looks like they were trying to use their own Jenkins; we never got an answer from the reporter, so it was closed as no action
required. Is that okay for everyone? No question? So now, the next step: let's check the work in progress, and for each of these elements we will evaluate whether we need to continue working on them next week, on the upcoming milestone, or whether there are topics to add immediately that are not covered in the work in progress. Okay, so let's cover our work-in-progress milestone. I'm switching to the open issues. I'm taking them again in the order here on my screen; that might be a slightly different order in the notes, I'll let you map the points.

Remove remnants of the legacy virtual networks: in fact, all the overlapping networks have been deleted, but we also had an old virtual network that cannot be deleted in Azure. As I reported on the issue, every attempt to delete that network and its subnet ends with an internal server error from the Azure API. So I tried to follow everything from their automated support assistant; you know, it's Clippy, but for the Azure cloud, which is a really good assistant by the way, way better than Clippy. I've been amazed: the links to documentation are very precise, with good indications. So that was really useful, but in the end it kept sending me the HTTP 500 error. So I contacted the support, and it looks like they are involving their backend team. They see we have former resources that I cannot see on the Azure portal or the API, but which are linked and connected to that network. So they are trying to either surface these hidden resources in the UI so we can delete them properly, or remove the whole thing on the backend so we can forget about that network. I'm not closing this issue until we have feedback from their support. Is there any question about that issue? Cool. So I'm keeping this one; since there is no action required, it won't be on the next milestone, it will slip until we have an answer. Okay for everyone?

Next one, top priority: assess Artifactory bandwidth reduction options. I haven't had the time yet to report the results, but we are ready, this week, to definitively remove the mirroring of Maven Central in Artifactory. The plan is to do it this Thursday, since we have a security advisory tomorrow. The result is that, with the settings we mentioned earlier, about the pluginRepositories hack in the settings XML, I did not see any warnings or errors except the integration tests and a warning somewhere about a misconfigured repository. So, from the infrastructure point of view, and I will report everything I tried (jobs on ci.jenkins.io, jobs on trusted.ci), I don't see any more blockers to definitively removing the Maven Central mirror. That's the summary; I need to write the details. It will be checked one last time before Thursday, but unless there is any objection, I propose we proceed and we announce this. We should be able to ask JFrog for a last report for this week, or at least until Thursday, and then, next week, we will be able to see the impact on the download bandwidth on Artifactory. Is there anything you want to ask on that topic? Because that's the most important one. In terms of informing people, you'd already sent the email to the developer list; that seems like the most crucial thing. Do we need a blog post as well, to give us a milestone in time? And if so, do you want me to write the blog post, since you're nearing the end of your day? Do we want to enlist somebody else? Or no blog post? What's your feeling? That's a good idea to add a blog post, to mark the milestone and increase the communication layers; I think that would be useful. But yes, I wouldn't say no to some help writing it. We need to decide the operation time on Thursday. The operation should be, first, on Artifactory, removing two of the mirrors that we have, repo1 and maven.repo1, and second, an operation on each Kubernetes cluster to clean up the artifact caching proxy caches, just to avoid some warnings about repositories and changing repository paths. And when the ACP cache is cleaned, will that cause a repopulation of the cache, so that we'll see an initial heavier load, and then it will back off because we will have cached everything again? Absolutely. Okay, great. All right, so we shouldn't be too dismayed if there is high activity in next week's bandwidth report, but the following week we expect no more requests to the copy of Central that we're no longer distributing. Absolutely. And there is no easy way to keep the same cache, because Maven keeps an internal tracking of where each artifact comes from, and that one is encrypted; changing it means decrypting everything, recursively changing it, and re-encrypting. Technically it's doable, but honestly, I prefer increasing the bandwidth one time. Right, and I am sure that JFrog would agree with that; no question, they don't want us to do those kinds of terrible, awful things. We meet JFrog later today, Mark, is that correct? So I propose that that will be the last go/no-go: we report to them, and if they say it looks good to them, then we can proceed on Thursday. Is there any objection, question, or need for clarity on that topic? No objection from me. Cool. So, the Artifactory brownout was a success; let's plan for removing repo1. That will be the 7th of September, is that correct? The 7th of September. I mean to report in detail, plus plan for... And I assume, when you said the time of day, it's probably healthier for us to have you do that at the start of your working day, so that if there are surprises we don't end up keeping you late. It doesn't matter, I won't be any help resolving things, but you and Daniel are both good candidates to help resolve things if there's really an Artifactory issue. Good idea: early in the European team's working day. Okay, that's a good idea; that would leave us plenty of time to roll back if we see issues. Right, go/no-go with JFrog later today. Did that get captured properly in the notes, what we said? Yes. Cool, so, almost there. If JFrog is happy with the outcome, in, let's say, two or three weeks, that
looks like we will be able to close that topic and proceed safely. That means they won't kick us out of their sponsored managed instance of Artifactory, because an Artifactory of that size is not a pleasure to work with, especially on a public cloud.

Next topic: migrate updates.jenkins.io to another cloud. On that topic, I took some tiny measures last week, but I didn't do much, and I handed it over to Hervé, who is back from holidays since yesterday. Welcome back, Hervé. So, Hervé, I leave you the pain of reporting on that topic, since you did stuff. As I understood, the main thing you've done last week was to validate that we could add a mirrorbits mirror with the rsync on-demand URL: you have been able to use the internal Kubernetes service URL, so we don't need a public IP and DNS records for this instance. But it can be useful for other mirrors we can dispatch on different cloud providers, like DigitalOcean, Oracle, or others. Another big part of the work you've done is to upgrade and fix the rsync base image we are using for this service. While working on this, you also noticed, and we decided, to split the mirrorbits Helm chart we were using into three parts, with the httpd server in one part, the rsync daemon in another part, and mirrorbits alone in its own chart. I made this split yesterday, and I'm working today on creating our own Helm chart to get these three charts as subcharts, as dependencies. Same stuff to do about that: it also allows you to work on the rsync part on your side, and to plan for the sync inside your deployment with it. Another development: as we didn't receive any response from Cloudflare, and as it seems to be a bit difficult to contact someone about their open source sponsorship program, we might not be able to use Cloudflare yet, and we will fall back on DigitalOcean for now. I think, as soon as we put this service in production, we target the DigitalOcean mirror instead for the initial deployment. You also mentioned the possibility of using OVH, a French provider, as their bandwidth is free, or not expensive, or less expensive, and we have credits. Absolutely. So, secondary mirrors that we can add immediately, or after migrating everything from the current update center. As you said, we have OVH; it's a sponsorship to bootstrap, but, as you say, the outbound bandwidth is free. Potential mirrors, yes. But DigitalOcean, it's also going well for our relationship with them, because they asked us to try their data service (I don't know how to say it properly), but it can be a good use case for them. So that brings them more feedback, and it will also be another opportunity to contact them and discuss with them. Absolutely: report to them, to grow and nurture the relationship. Yeah, the worst case is that it will cost us a bit less than a hundred bucks per month, which means that once we start using DigitalOcean, we will have six months of available credits given the rate we have today with the current usage of that cloud. So we have six months in front of us to start adding mirrors and spreading the bandwidth across data centers.

So I propose that we still proceed, in order to move the whole thing away from AWS. As Hervé mentioned, we have OVH that could be a secondary. Don't forget that the DigitalOcean cluster where we should deploy that mirror is in the Frankfurt data center; it's in Europe, which means we also have to think about whether we can have a US-based mirror, so we won't change the latency for our US users: today, the current update center is only in us-east-1. That's why I propose we could also check with Oracle; it's not mutually exclusive with OVH. The more mirrors we have, the more distribution and spreading of the cost of the outbound bandwidth we have. In the case of Oracle, we don't have a Kubernetes cluster, so we might need a bit more work, because we need a VM, and a bit more work, but it would be in a US data center. And finally, we could find other sponsors, hoping that Cloudflare answers. But right now, since we don't have the option from Cloudflare, which provides free outbound bandwidth and also the ability to create buckets inside the China internal network (we could have had mirrors covering different areas of the world), I propose that we delay the China part for now and we focus on migrating away from AWS: to Azure for the redirector, and to DigitalOcean as the primary mirror. As we said, we have these secondary potential mirrors, and I propose we do a recap once we have made that change, because it's already quite some work. Is there any objection on that part? One last thing, mainly for you, Hervé, as you will have to drive this: it's mandatory that, at least two weeks before the effective change of the final update center to the new system, after testing and everything, we publish at least a blog post and communicate, because it means we will change the public IP used to reach the update center, and that will add a list of public IPs the users will be redirected to. We need to communicate that, because a lot of Jenkins users are behind corporate networks where they tend to have firewalls that only allow some internet accesses. We don't want tons of issues opened after the migration saying, hey, I can't access updates.jenkins.io, or I can access it but it redirects me to a blocked IP that times out. We don't want that kind of issue, and we will have some even with the communication, trust me, but we can send them to the blog post so they have viable information. So it's just a note that we will need a blog post and the list of these IPs, because in that case we control the mirrors; that's the core of that new system, so we control the public IPs, mostly. Is that okay for everyone? Of course, some of these public IPs might change, for instance a Cloudflare public IP, but in that case each company will
have to select a mirror that works for them. That won't allow us to balance through the HTTP redirector mirror system, but they will have a fallback solution, because it's not for us to control that kind of thing. Still, we have to communicate about that change. Thanks, Hervé, for the report, which was quite complete, and for the work. Is there anything else to add on that topic, or things you want to clarify? Cool, so we can proceed to the next one.

Stéphane, back to you: what is the status of the arm64 initiative for the public Kubernetes cluster? I managed to close the pull request on the shared pipeline library that allows building multi-platform, and I used it for the wiki. So now the wiki has a Docker image compatible with arm64 and amd64, and I migrated it to the arm64 pool. So I can start back migrating services to the arm64 pool. Congratulations, nice work. Thank you. It is now served with arm64. Great work on that one. Which means, for the upcoming milestone, I only have one request for you about that issue particularly: it's now to make an exhaustive list, to see which ones will be migrated. On paper we should be able to migrate everything, but some might be more complicated, or we may want to keep them on Intel. We need an exhaustive list so we can know when we can close that issue, what the definition of done is. Is that okay for you, Stéphane? Oh yes, of course. There is an exception: the Matomo Docker image does not need to be on that issue, because it's a separate issue and topic to track. Once Matomo is deployed, whether it's on ARM or Intel, then we can close the topic. So it's only the other services running on the public Kubernetes cluster, and that's the scope for you. Is that good for you? Yes, yes, we'll list the remaining services to migrate. Cool, good work, thanks. Is there a question or something to clarify on this topic? As a reminder, the goal here is to decrease the cloud bill by switching most of our public services to arm64; we can expect 10 to 20% off the bill for these services. OK, just one note: maybe once we close this topic, we can start to move some of our controllers, like infra.ci, onto arm64 for additional cost benefits.

Next topic: Matomo. Again you, Stéphane: yes, what is the status on Matomo? First of all, I renamed it, because it's Matomo and not "ma moto", and I did some clean-ups; most of it was versioned, it's still in progress, and I think it should now be good. I just had a problem on Docker Hub, which needs to be bootstrapped for this matomo name. I also have another pull request still in draft, but it's finished, and now I'm prepared to be able to have the MySQL flexible server on the same model, not the same area, but the same template model, as the PostgreSQL one, and I'm working on that now. Cool, thanks. Is there a question, or need for clarification on this? As a reminder, the goal of Matomo here is for us to have our own website statistics and eventually move away from Google Analytics. To support this, we will have our own system based on Matomo. The person who initiated the Matomo process has been funding a Matomo instance for a year and a half on DigitalOcean, with statistics on jenkins.io.
It looks good, so by deploying this we can start having our own stats and then decide from there. Cool, thanks for the work and for the report. Next issue: JDK 21. I see there are two culprits on this issue, Bruno and Stéphane. What is the status on the JDK 21 switch for the EA builds? I believe it is weekly, it is built weekly. Yes, that is right. Most of the work was not mine, so I am asking Stéphane, please. Stéphane, go ahead. That is completely wrong. Yes, I had actually already set up JDK 21, but not on this specific version, so I just took the time to explain what is there and where we put things. We had a small misunderstanding about what the real goal of this issue was, because I thought we would just change the version, but what was actually planned was to use it as if it were the definitive one and to have the version in only one place. So we did that, and we are waiting for a review now. Cool, so almost ready to roll this out. And we had a pull request on the update CLI — I do not know if we had time for it. It is still in draft, but that is what I am waiting for: once your version switch is merged, it becomes a real pull request. Cool, so that means we will be able to handle the JDK 21 version the same way we do now for the other LTS JDKs. Yes, I think so. It is just about the setup now, and I think we found a way to not change too much when it becomes the definitive version, but we will see when it comes. Yes, it will just be a bit of string manipulation, and the probability that Temurin changes the naming convention yet again is... so don't worry, it can change, and you will have to manipulate strings, and that is fine — we are really good at that now. That said, since yesterday or today, this new JDK 21 is used on ci.jenkins.io, and we managed to run the Jenkins core builds and ATH runs with this new JDK 21 EA. That means we have caught up by two weeks compared to the former version. And the hope is that the update CLI will open a pull request when it detects a new version of JDK 21. So within a week or so we can propose this setup — the update CLI tracking system — for the official Jenkins core images, as part of the platform SIG initiative. Cool, thanks gentlemen for working on this topic. Any question or need for clarification on this topic? Ok. Then Mark and Hervé, we each have an issue: the Linux Foundation status page redirect, and for me the Jira administrator login page setup. I believe neither of us has anything to report on these, because we have not worked on them at all. Is that correct? That is correct. So these two issues will be carried over, like the others, to the next milestone until we find time to work on them. Could someone try the 3735 one? Ah yes, that one is quite a tricky thing, because... it is assigned, yes, but this one requires administration access to Jira, which most people do not have — including most of us — because the Jira instance is managed for us, and it contains many issue trackers, some of which are private and really sensitive, so administration access is very locked down. Only two people in this room have it, Mark and Hervé; the intent is that only the infrastructure officer, the board, or the security team have that access. The main challenge described on this issue is that for administrator authentication we do not use our usual account system: we cannot use the Jenkins community SSO, because the Jira instance cannot manage an SSO system with as many users as we have, so administrators need standalone accounts. In the end it is only a matter of finding the proper setup, and it is only a problem for the administrator login. The goal is to explain to users: hey, go to the Jenkins account application, the public website, to manage your account, rather than to the Jira administrator login page, because that one is disabled. Does that make sense? It makes sense. Thanks for offering to help, but that one is not a good fit for a newcomer, given the access it requires. If you know how to change this on a Jira instance in general, you could help by guiding us or pointing us to documentation on that point, but I do not think you should be wasting time on a low-impact issue; I would prefer your knowledge to be spent on something more valuable, to be honest. But if you want to, it is your choice. I think the main takeaway here is that I should not touch something that sensitive, especially when I am new to the system. No problem; for that one you would need to be elected to the Jenkins board, or be the infrastructure officer, to get access. So don't worry, it is not a matter of time or trust, it is a matter of process in the community, and that is how it works. We have an issue that needs to be handled, Stefan, on the next milestone: we need to improve the deployments of our services on the public community cluster — in particular the ARM64 deployments — to have affinity, anti-affinity, of some sort, just to be sure the workload is spread across the nodes. It looks like the Azure managed Kubernetes scheduler is not very strict about replicated, highly available systems and tends to pack pods together, which means that if the virtual machine behind the service hosts all the replicas and that machine is restarted, we create an outage. But there is a way in Kubernetes to make the scheduler strict about this, and it is called anti-affinity: it tells a replica of a deployment, hey, do not schedule on a machine where there is already a replica of me. So we should start with simple services that have only two replicas and make sure this leads to two nodes in the node pool with one replica on each, and then we will tackle the ones with three or six replicas, like jenkins.io or mirrorbits, because we need to find a balance and avoid ending up with too many nodes — otherwise you would need six nodes. The goal of ARM64 is to reduce the cloud bill, not to increase it by having more virtual machines. But yes, we should also revisit the replica counts, especially since this was done a few years ago and we have not checked that we do not have too many replicas, and we have some metrics in Datadog to help us decide. So we should look at the consumption and see if maybe we can reduce the number of replicas for some of these services. Are you up for it, Stéphane?
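The anti-affinity setting described above could look roughly like this in a Deployment manifest — a sketch only, with placeholder names; the real label selector and service names would need to match the actual charts:

```yaml
# Hypothetical sketch of the pod anti-affinity discussed in the meeting:
# forbid two replicas of the same app from landing on the same node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      affinity:
        podAntiAffinity:
          # "required..." makes the scheduler strict, as discussed;
          # "preferredDuringScheduling..." would only be a hint.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: example-service
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: nginx          # placeholder image
```

With `topologyKey: kubernetes.io/hostname`, two pods carrying the same `app` label can never be scheduled onto the same node, which is exactly the "one replica per node" guarantee the team wants for two-replica services.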
I was not, because your face lit up on this subject, so since it is assigned to you I told myself, ok. It is ok, but if you need help I can pair with you whenever you want — that is my proposal. Is it ok for you if I start and you pair with me? With pleasure, absolutely, with pleasure. Any question on this topic? Do you want to do this before doing too many migrations to ARM, or is it fine? Good question. I would propose so, because we need to do that first. Yes, absolutely, it will be safer. Ok. Tomorrow we have a security release, so there is plenty of time to work on this and do stuff. Yes, but here we are not worried; it is not related to the security release. And finally we have a problem — so thanks Eric, can you give us a report? Because I saw that you proposed a solution that looks good. So, it is about Gradle builds with this plugin filling the workspace on pull requests, for example. I have not merged it on the master branch, but I proposed a fix by removing — it is a clean Gradle task — since ci.jenkins.io is using ephemeral agents, there is no need to clean the workspace when building a plugin on ci.jenkins.io. I proposed a pull request on the plugin, and I am waiting for feedback from Aren on whether the workaround fix is acceptable or there is more to do. That is a starting point; like Damien, I have nothing to add, but yes. And Gradle is not that widely used in the Jenkins plugin community — there are maybe 10 or 20 plugins using it, and nothing in core uses it. Unbelievable. Oh no, no, no. It is possible, we do not want to say it, but it is possible. Yes. But thanks for helping this contributor, it is really useful.
Ok, is everyone fine with this in-progress topic? Ok. So I believe we have a few new topics, but not too many. I have one that needs an issue to be created, which I will do after this meeting. We received an alert on the team calendar that in two weeks the VPN CRL will expire, so we have to renew the CRL for 6 months, as we do every 6 months. So: VPN CRL, to renew before, I think, the 20th. Last time we documented the process, so anyone here can run it, with administrative access to our OpenVPN setup. Sorry for Kevin, Bruno and the new contributors, but yes, it is a good thing. The most important part: last time, a pull request was created with the new OpenVPN CRL, which we had to deploy through Puppet. The update CLI did most of the heavy lifting on the pull requests that were created, and a calendar alert was created for 6 months later, for the next expiration — that was the hardest part of the task. So an issue will be created; if someone is interested they can take it. Otherwise I will throw darts at someone at random, and they will be the designated volunteer to take it within 2 weeks. I also saw — and I need help from people knowledgeable about email — I got a notification; it looks like some emails are failing SPF. I got comments from the team, so I added the issue to the upcoming milestone, and I need help, because my knowledge of email, SPF and all of that is close to zero, even negative. The issue title is all there is; I do not know what the problem is, or whether it is really a problem, so I need help from people smart about email. Looks like you are designated, Stéphane. I will add the issue, and if someone can take it — I do not know.
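For context on the SPF topic above — an illustration, not the actual jenkins.io record: SPF is a DNS TXT record on the sending domain that lists which servers are allowed to send mail for it, and receivers check the connecting IP against that list. A hypothetical record might look like:

```
; Hypothetical example only — domain, IP and include are placeholders.
example.org.  IN  TXT  "v=spf1 ip4:192.0.2.10 include:_spf.example-relay.com ~all"
```

Mail "fails SPF" when it is sent from a host not covered by the record — for instance a relay that was never added to the `include:` list; the trailing `~all` soft-fails everything not listed. Diagnosing the notification would start with comparing the actual sending hosts against the published record.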
Now let's look at the new issues we may have received, for triage. We have the following one: "Jenkins plugin bundles proprietary dependency". It is an issue opened by Daniel. There is no action expected from us, but so that you are aware: what is planned is to stop distributing that plugin through the Jenkins distribution. We have done this with the help of the foundation's team, and we have done this with several vendors before. Usually it is just that they made a mistake, and all they need to do is correct it and switch to using open-source components. So, again, no action — it is more for information, and the issue should be closed once the Jenkins update center no longer serves the plugin. So, I am looking at the next issue. We have a GitHub permission request, but it is not in the infrastructure scope, so no need to add it to the milestone. And the pkg.origin.jenkins.io one is in progress, working on the updates. Which means the last big one is the Kubernetes 1.26 upgrade, because DigitalOcean will stop supporting 1.25 in October 2023 — I think the deadline is the 27th. We have September and October. So my proposal is, for now, to keep this item out of the priorities: we focus on the update center and ARM64, and at the end of September we check the status again and see whether we need to schedule the upgrade.
Is that planning ok for everyone? Because I need Hervé and Stéphane focused on their top-level priority. I will be away for part of mid-September, and I want to focus on Artifactory until the end of the month, and on the cloud topic, which requires the three of us working together. So that should be settled by the end of the month, unless there is an objection or something I missed. Ok, there is another reason that should help convince you: we do not want to start the Kubernetes 1.26 upgrade until we have fixed the anti-affinity and pod availability problem, because upgrading the node pools will restart nodes underneath services that cannot afford it, and this upgrade will be a good proof that we no longer have the problem. Sounds good? Ok. So, do you have new topics? Or can we open the discussion and talk with our new contributors? No new topics? Ok. So it is time for questions before we stop the recording and the meeting. Ok, so: Hi. Hi — am I pronouncing your name correctly? "Hi", is that correct? Yes, that is fine; it is indeed "Hi". So, this is a good opportunity for us to think about the current backlog. We need to think about issues that can be worked on without access to the infrastructure, because sometimes access is required. And the fact is that we do not always have the time to mentor and follow up with someone who opens a pull request — because, you know, opening a pull request is the easy part; following up with the person who proposed it is what takes time.
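The "pod availability" piece mentioned above is the other half of the anti-affinity work: a pod disruption budget tells Kubernetes how many replicas must stay up while nodes are drained during an upgrade. A sketch, with hypothetical names:

```yaml
# Hypothetical sketch: keep at least one replica of the service running
# while nodes are drained one by one during a node-pool upgrade.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-service-pdb    # placeholder name
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-service     # must match the Deployment's pod labels
```

Combined with anti-affinity spreading replicas across nodes, this is what makes a rolling node-pool upgrade safe: the drain of a node is blocked until the evicted replica has been rescheduled elsewhere.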
So it is a good moment to start by discussing this: a good issue is one where you can try to reproduce the problem before sending pull requests. That is why I proposed that you join us today — to see how you could help on the Jenkins infrastructure, or on the project in general; it is up to you, Hi. But here, for the infrastructure, it means we all have to brainstorm and propose issues that you could work on. I have not had the time yet to think about it. So I propose, if it is ok for everyone, that if you see something that could be taken on — issues, update CLI — we can start to write these issues on the milestones so Hi could start working on one, and that means we need to give him a clear scope on what is expected, so as not to leave him alone. Mark has got tens of them. No, I don't have tens, but I have one. And the one I have is actually an action item from the board that has been piled on Kevin Martin's back. So, Hi, I assume that you may in fact be a native speaker of one of the Asian languages — Japanese, Chinese, Korean? I mean, I would really be proud of myself if I could actually speak Chinese or Korean, but I can only speak Vietnamese. Ok, Vietnamese, all the same, that's great. So the reason I ask is we've got a documentation site for Chinese that needs to be retired. We need it redirected to the English site. And so there's a learn-about-the-infrastructure task there: figure out how the current thing is hosted, where the point is at which that redirect needs to be inserted, and what the alternatives are for doing the redirection.
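As a language-neutral sketch of the rewrite that redirect would have to perform — a hypothetical helper, not code from any Jenkins repository; the real fix would live in the web server, CDN or ingress configuration rather than in application code:

```python
def english_redirect_target(path: str) -> str:
    """Return the English URL path for a Chinese documentation path.

    /zh/doc/book/  ->  /doc/book/
    /zh or /zh/    ->  /
    anything else  ->  unchanged (no redirect needed)
    """
    if path in ("/zh", "/zh/"):
        return "/"
    if path.startswith("/zh/"):
        # Strip the "/zh" prefix, keeping the leading slash of the rest.
        return path[len("/zh"):]
    return path


print(english_redirect_target("/zh/doc/book/"))  # -> /doc/book/
print(english_redirect_target("/zh"))            # -> /
```

Wherever the redirect ends up living, it should be a permanent (301) redirect so search engines drop the stale Chinese pages in favour of the English ones.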
Because the problem we had is: we had a very active Chinese community several years ago that did a bunch of translations, but they've gone inactive — they've not been active for two-plus years — and our documentation has evolved since then in very important ways, so that now the installation instructions, for instance, that are translated to Chinese are simply wrong. So we need that: it is better to point people to correct English documentation than to completely wrong Chinese documentation. So what we need is a redirect, but the challenge is where that redirect goes, and the parts and pieces of the system that you need to find to do the redirect are probably more complicated than it sounds — it's not just a simple redirect. So, Damien, are you open to that, or am I already asking for something that a new contributor cannot possibly do? You might need guidance, at least some entry points — if you are interested, of course — to, let's say, the Kubernetes resources, which are public, that we use for serving the websites, at least on the back ends. But that also requires analysis on the Fastly part, because I don't remember if Fastly is caching the Chinese website; I believe not, because Fastly does not reach inside the China network, but that needs to be checked. Right, so one of the complications then — I like how you describe it — is that some portions of the Jenkins websites are delivered by a content delivery network that is donated by Fastly. Can we expunge them? I don't think we need to expunge them, simply redirect, so that instead of going to the Chinese URL it goes to the root URL: so instead of jenkins.io/zh/ABC it would go to /ABC, that kind of thing. Right, so a redirect, and a sprinkling of Chinese here, I suppose? Well, yeah — given my non-existent Chinese — you'd have to have at least my non-existent Chinese skills, meaning you'd need to be able to run Google Translate if you need to figure out what a page might be saying. It sounds like — yeah, there you go — another technique: translate.google.com is my friend on those kinds of things. Alright, that's a proposal; you have the right to say no, it's only a proposal — that's really important — and it's just one of potentially many ideas; it just happens to be an idea that's on my list. Yep. And, Damien cited it — did you ever try updatecli, Hi? Oh yes, I mean, I have done plenty of redirections of my old company's domains, so that would qualify me as well, right? However, I believe that Bruno was asking a different question, Hi; he was on a different topic. The redirect is a very different thing — what Bruno was asking about was not the redirect; it sounds like you're very qualified for that one. update-cli is a tool that we use to automate the updates of versions that are specified in our source files: for instance, Node.js releases a new version, and our configuration-as-code definitions need to be updated to use that new version. So this is what update-cli is, and I'm assuming you probably haven't seen it before, because it's a relatively new tool. Yep. My predecessor wrote this tool and is now working at SUSE on that tool; it has been adopted since then by the Asciidoctor project, by the k3s project at SUSE, and by some internal SUSE projects as well. We are using it because we run the update process ourselves: compared to Renovate bot or Dependabot, it's not a cloud or external system — we run it inside our own instances. But since it's a relatively new tool, the configuration is a bit verbose and sometimes the UX might be a bit weird, because it's new and needs users to give feedback and contributions. Yes. Damien, do you have any idea of what could be updated with update-cli which hasn't been updated yet — what is not tracked? Is there an obvious dependency that we would need to update with update-cli? No, you already fixed all the obvious ones, so I need time to search and look. But that means the whole team needs to search: if you see obvious dependencies that are not tracked, that could be a way to at least track them with issues and propose them to Hi by default. And, Hi, if you are not interested, that's useful anyway, because we could label these issues as newbie-friendly, at least for Hacktoberfest. And we need to be careful, because sometimes something sounds easy — just a tweak in the manifest — and in the end it's just a nightmare, like the last discussion we had, I forget when, yesterday or two days ago, about the fact that we need the same version of a tool between ci and infra; that starts to be not so easy. And, Mark, you cited Node, which may be the most complicated thing to update, because you don't know what kind of can of worms you're opening when upgrading Node. Sorry. Hey, do you have any specific competence — do you know Terraform, do you know Packer? What kind of technology do you already know, so we can steer you toward something you already know at least a little? Are you addressing me? Stefan? Yes. Yes, I've worked with Packer before, Terraform, Prometheus, Ansible, and I feel like I've skimmed through most of the DevOps stack nowadays. Cool, awesome, so we can fire in every direction. So another candidate idea — possibly not so much infrastructure as development, but it needs the skills I just mentioned: add Debian 12 support to — oh dear, I forget which one it is, is it the agents? — no, add Debian 12 support to the packaging tests. So we have a test kit that uses Ansible and Molecule to execute automated tests of our installers, and those automated tests have to have an operating system on which they execute, and Ansible configures that. Yes, this packaging repository is the thing. So, Hi, if you're interested — this isn't strictly an infrastructure thing, it's rather testing; we use this to check that we are successful in our desire to keep our installers reliably working. Actually, that is right within my interests, my spectrum of interest. Ok, so this one — Damien, if you look at the list of pull requests, there is a pending pull request, "Add Debian 12" — oh, and now it's passing; I had not seen this one pass before. So this one is already a beginning for consideration, but there are some problems hiding in it — not hiding, there are problems stated very clearly in it — and if you're willing to learn about Molecule, this open-source tool that we use to build and run these tests, that would be great. Another one for your consideration. Sorry — no problem, sorry. So, on my side, I will think about it; I don't have anything immediate — I have some ideas in the back of my mind, but I want to check them to see what would be required for you to get started. So, if it's ok, I will come back with issues next week during the upcoming team meeting. It's not mandatory that you attend every week — you are welcome if you want, it's open to everyone, as we say — but I don't want you to feel any pressure. If you start a subject and are not able to finish it, that's not a problem; you are here because you want to spend some time on this, but there is no criterion of "we will trust him only if he succeeds" — no, no pressure on that part. You are free, it's your time, you do whatever you want. So if you help, that's really cool and useful; if you cannot finish, that's not a problem, you will have learned something, and the process is already valuable for us: just thinking about what we can share with you is already a good thing for the project, for helping us triage and onboard new people. So in any case it's not a waste of time for anyone — so feel no pressure, no anxiety about it, and do not hesitate to ask for help or information. So you have a few entry points here; we'll let you read these elements, try to understand, and select one. No problem, take the time you need, and let us know in the channel or by adding a comment on one of the issues with your GitHub handle. Just one short answer: I'm trying to click on the links that you're sharing — oh no, it works now.
The hackMD.io link from before — I mean, it was stuck at the human verification. Yeah, but it goes through now? Ok, cool. It did the same to me, flagged me as a bot or something, initially. Ok. So, we use that tool only for collaborative notes, but then these notes are published in the jenkins-infra documentation on GitHub and on the community forum at community.jenkins.io every week, so that link will disappear in a few days; it's only a temporary tool until we publish the notes. Ok. So, is that ok for you? You take the time you need to look at the topics, and you can get started on the one that interests you, and we will report back, with eventually new issues, next week — whether you are there or not, we will just report if you choose a topic — and then we will also continue working on selecting issues with Hacktoberfest in mind. Ok. Ok, just one short thing — yes? Sorry, I think I forgot what I was about to ask. No problem, you can ask asynchronously on the text channel. I remember — I mean, I assume that if I were to work on one of the items here, I would need to ask for some sort of access to the infrastructure? Not at all, you don't need it: most of the code is public, and the most important part is that we select an issue that you can reproduce, partially or totally. Then, once you have started the discussion — "I want to solve that problem and I'm going in that direction" — you can proceed to open a pull request if it looks like a solution, and most of the time we have CI feedback that you can see publicly. So, everything being public, you can get started; the goal is not to have to create accesses of any kind at any moment. If we see that, for whatever reason, there is something we forgot and discover an access you might need, then we will discuss it all together, including you. But right now you don't need any kind of access — except eventually a GitHub account, so we can interact inside the issues and the pull requests you will propose. Yes, I do have one. And in each repo you've got a readme file, and most of the time in the readme file you can find information on how to proceed on your machine to test your pull request beforehand, even before opening the pull request. Most of these readmes are good first issues too, since most of the time they are old and need updates — so if you are reading a readme and you see inconsistencies or outdated links, don't hesitate to edit them, it would be great. Alright, thanks a lot. And you can also help us know what is not explained, or not correctly explained, for newcomers or someone discovering a specific repository. Alright, got that, thanks. Alright, I think I've got enough on my hands now. Cool. I think that's all; I've stopped the screen share. Do you have any last item you want to mention, or can we close the meeting? One, two, three... Ok, so see you next week everyone, I'm stopping the recording. Have a nice day, bye bye.