Hello everyone, welcome to the Jenkins infrastructure weekly meeting. Today is October 3, 2023. At the table we have myself, Damien Duportal, plus Mark Waite, Stéphane Merle, Bruno Verachten, and Kevin Martens. Hervé probably won't be able to attend; he has a medical appointment.

Okay, let's start with the announcements and the bigger items. The weekly 2.426 is out: at least the WAR, the packages, and the Docker images. I'm not absolutely sure about the Docker images, because there has been a problem with the virtual machines on Azure since earlier today. That's the second item, and I'll get to it in a moment. So we can't be sure all the containers were built. I believe the Linux image may already be pushed to Docker Hub, but the Windows images aren't there yet. I believe so, I even checked. Let me check my other screen. I can check, because I have a build in progress: it's still building, and the controller and agent images have been pending for minutes, even hours. So the Linux Docker image is okay; the Windows ones are still pending. I can see it.

I assume the other item, the changelog, will be done later? The changelog is already done: it was merged and published. Thanks to Kevin, he did an excellent job, and I made a few tweaks afterwards. The changelog is merged, and I think it should be visible by now, since it was merged 10 or 15 minutes ago. Good work, folks.

There are two big changes. You wrote it, so I assume it matters for the infrastructure. Yes. One is the removal of Prototype.js, the JS library whose removal we started in May 2023; it is now out of the weekly. We went through several hundred plugins to make sure it was removed from them as well. Special thanks to Basil Crow and Tim Jacomb for the excellent work removing it all the way to the end. It won't be visible on ci.jenkins.io until November 15, with the next phase. For the infrastructure, we need to be careful about infra.ci, which uses the weekly, and weekly.ci.jenkins.io will be updated. So we need to check the few remaining usages on that public instance.

And what's the second major change? Container images that don't specify the JDK now default to Java 17. So if you use the latest tag (I know we don't want people to use latest, but if you do) you will now get Java 17 instead of Java 11. Same if you use alpine, which is a latest-style tag: you will get Java 17. If you use slim, you will get a Java 17 container now, if I remember correctly. In short, if you pull a container image without specifying the JDK in the tag, you get Java 17; a quick way to verify is sketched below. Okay.
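[Note for the minutes: a quick way to check the default JDK of a tag. A minimal sketch, assuming the official jenkins/jenkins image and a local Docker install; tags and expected output are illustrative:]

    # The image entrypoint executes the given command, so we can ask for the JVM version
    docker run --rm jenkins/jenkins:latest java -version   # should now report openjdk 17.x
    docker run --rm jenkins/jenkins:alpine java -version   # same: Java 17 by default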
The important thing is that this change will also land with the next LTS release, whichever LTS that is, because this change is only a Docker image republish, so it applies to everyone. I think that's excellent. That means Kevin has to make sure it's included when he creates the changelog for the next LTS release, and the next LTS release should be available tomorrow; I believe that's the date for it. There will be release notes drafted for the next release, and we need to make sure it's included. But it's already in the weekly changelog, so he should be able to pick it up from there when he cuts the LTS changelog. Cool. Thanks for the details, and congrats on that work. Is there something else in that new weekly? Those were the two big ones, yeah; there certainly are others. For the infrastructure, we weren't impacted by this change because we had already pinned JDK 17, so, yes. But it means we will soon be able to start pinning JDK 21.

So, a second announcement. It looks like we have issues with Azure VMs this week. Since yesterday, the VPN has been down. The first symptom was an automatic GRUB upgrade that failed, a GRUB failure, so the system ended up with a broken bootloader, and when we brought the machine back up we had to finish the upgrade ourselves. That points to an underlying issue; it's not an issue with our setup, but there is nothing on the Azure status page. So we thought maybe it was only our VM. But since last night, around ten o'clock more or less, virtual machine allocation has been failing intermittently on our subscription, and only for Windows 2019. So for Windows 2019, VM allocation is flaky. Sorry to interrupt, but for example I have had a build stuck for 21 hours because there were no virtual machines; is that linked? Yes, that's the flakiness. So ci.jenkins.io, trusted.ci.jenkins.io, and the other instances are impacted. So, yeah, for 24 hours now. Yesterday we delivered a new version of the Packer image, so that could be related. But the thing is, some of the virtual machines are working and building, and some aren't allocated, with timeouts from the Azure API. That's why something is weird here. We're still not sure of the real root cause, so we have to check. But yeah, it has the kind of impact you just described: we're waiting for the Docker Windows images. I believe we should open a status page entry after this meeting and add a message on top of ci.jenkins.io to let people know. That also explains why the container images for Windows are slow to build: it takes time and it hits the build timeouts. (Action: status.jenkins.io message, container images slow to build.) That's all for the Azure virtual machine issues. Is there anything else on that topic, any questions or actions? No. Okay. So Java 17 is the new default JDK on containers; that was the second major change. Is that okay for everyone?

One last announcement then, the third point: we had services down for a few days last week, namely the Plugin Health Score, the Incrementals publisher, and reports.jenkins.io. The third one had an impact on plugin developers. The cause was the person currently speaking. We have written issues with postmortems, but I wanted to put it at the top of the items just to let you know, because I learned a lesson about the way Kubernetes manages links between objects when you want to delete some of them: first remove the link, then do a second update that garbage-collects the now-unreferenced object; don't do both in the same update, that doesn't work (a sketch follows below). It was also an opportunity for us to improve the monitoring, because these services weren't monitored. That's all for the announcements. Do you have other announcements? Okay.
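[Note for the minutes: a minimal sketch of the two-step deletion lesson mentioned above. Resource names are hypothetical; the real details are in the postmortem issues:]

    # Step 1: remove the reference to the object (here, a volume mounting a ConfigMap)
    kubectl patch deployment myapp --type=json \
      -p='[{"op":"remove","path":"/spec/template/spec/volumes/0"}]'
    # Step 2: in a separate update, garbage-collect the now-unreferenced object
    kubectl delete configmap myapp-config
    # Doing both in the same change is the pattern to avoid.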
So let's have a look at the upcoming calendar. The next weekly is next week, as usual; that will be October 10. I don't know when the next LTS is planned. 2.414.3, on 2023-10-18. Okay, thanks. Release candidate tomorrow. No, no, that's a different thing. Or is it? No, no, you're right: release candidate tomorrow. Nice one. Jenkins security advisory? We don't have a Jenkins security advisory planned. Right. Cool.

Next major events: we had DevOps World, where Mark was present, as far as I can tell, and the next one is in Santa Clara, October 18th. DevOps World Santa Clara, October 18th; the 18th and 19th, yes. And FOSDEM, February 2024, as usual. They have actually announced the specific dates, right? I think it's the 3rd and 4th. Yes, that first weekend, the 3rd and 4th. Cool. So, an opportunity to meet the team members. Any questions so far?

Okay, so let's start with the work we were able to finish. We can thank Hervé for solving that issue about one of our plugins: a contributor maintaining a plugin was complaining that, despite them changing everything to have the GitHub documentation shown on plugins.jenkins.io, the old wiki documentation was still used. No obvious error, no build error when the plugin website is changed. By fishing for information, we found that the update center generator has an override capability, used most of the time for security or emergency infrastructure changes, and that plugin was inside these overrides. So whatever value was set up for the plugin was overridden every 15 minutes, when the update center index is generated, which forced the documentation to the wiki URL instead of the auto-detected GitHub one. After learning about that mechanism and removing the override, we were able to update the whole system, and now everything works as usual. And that override file is inside the update center? Absolutely. Let me open the issue. Okay, no need, I'm sure; it's just quite the surprise. Yep, it's the wiki override properties file inside the resources directory, and it lists a lot of plugins. So if you see a plugin that appears to follow every recommendation and the wiki documentation is still shown on plugins.jenkins.io, check whether the plugin is overridden there. So thanks, everyone.

Next, the Plugin Health Score, one of the three services that was down. I tried to write a postmortem after fixing it. Thanks, Adrien Lecharpentier, for bringing this to our attention and for pointing out how to fix it. In order to avoid reproducing that issue, we added monitoring. We had monitoring, but it was checking only the redirect from HTTP to HTTPS, and the funny part is that, with our Kubernetes setup, the redirect keeps answering even when the backend is down. So we added a real probe (sketched below); now, if the service goes down for whatever reason, we will be alerted instead of waiting five days. By the way, that could be the reason why the previous plugin not only had a documentation issue but also had stale health score data: I believe they changed elements during that outage, and I'm not sure how the system works in detail, but I brought to Adrien's attention that maybe we need to recalculate a few plugins. I don't know what the refresh frequency is, but it could be related; or it's just my imagination. Any questions? Okay.
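[Note for the minutes: a minimal sketch of the kind of check that was missing; the URL and tooling are illustrative, not the exact monitoring configuration:]

    # Follow redirects and require a 2xx from the real backend, instead of only
    # asserting the HTTP to HTTPS 301 (the ingress serves that even when the backend is down)
    curl --fail --location --silent --output /dev/null \
      https://plugin-health.jenkins.io/ && echo 'plugin health score: OK'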
Same thing for reports.jenkins.io: that one was fixed, by Hervé, for exactly the same reason. Hervé applied the same fix; it wasn't monitored, and now it is. Nothing else to add here; we should be able to catch this issue next time.

Next, we had a user who lost access to their repository. As most of the time, they had an issue with their Maven configuration, but in this case it was a subtle one, and I think it's worth sharing with you folks. I'm not really sure of the root cause, but we use two different IDs in the POM and the parent POM, in the examples and documentation for plugin developers. We tend to have what I call the download repository, the repository in the Maven pom.xml file that describes where to get the build and project dependencies, and that one tends to be declared with the ID repo.jenkins-ci.org. I'm opening an example here; you can see we have a repository with that ID. And the parent POM, which saves developers from having to define all the dependency management themselves, is deployed with the ID maven.jenkins-ci.org. I believe it's something from the past that has been kept, because changing the parent is not an easy thing; some plugins are updated often, some haven't been updated in years, so changing that one is much more difficult. I realized this because the user wanted to publish a new plugin manually, which I consider a bad practice myself (we have a CD process for that, and a human shouldn't be responsible for pushing and deploying a release), but in that case you have to be careful about how you configure your settings.xml. I believe the documentation is up to date, because they followed it carefully. I'm opening the documentation page, which says you have to configure your settings.xml server entry with the same ID, so the credentials shown here will match. However, there might be a weird behavior where Maven checks the URL of both repositories, sees that it's the same, and sometimes the repo ID wins and sometimes the maven ID wins. So my proposal is that maybe we should update the documentation here and tell people: don't use the settings.xml from Artifactory as-is; instead, copy it and create a second server entry with the same username and the same password, but with repo.jenkins-ci.org as the ID of the second one (a sketch follows below). But I thought maven.jenkins-ci.org was the preferred one? Just because of the history: repo is used in URLs, but not in... okay, I'm clearly not understanding; I'll need to ask you some more questions later, Damien. Okay, no problem. But I believe we should be able to improve the documentation here. I'm not sure it was the user's real problem; they said it worked after I gave them these instructions, so maybe it's a misconfiguration of their plugin, but for me it would be worth saying that we need to set up credentials for both. The takeaway for us is that any developer with that setup will start using their LDAP account to authenticate requests to Artifactory, so in any case it will be easier for us to track the usage by these developers; in any case, a good thing. Does that look good for everyone? It means that LDAP will start getting more work, but that's not a lot, and we still have plenty of margin there. Yeah, that's all, but it was worth sharing.
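[Note for the minutes: a minimal settings.xml sketch of the proposal; the IDs are the ones discussed, everything else is a placeholder:]

    <settings>
      <servers>
        <!-- entry from the Artifactory-generated settings.xml -->
        <server>
          <id>maven.jenkins-ci.org</id>
          <username>ldap-username</username>
          <password>artifactory-api-key</password>
        </server>
        <!-- proposed second entry: same credentials, download-repository ID -->
        <server>
          <id>repo.jenkins-ci.org</id>
          <username>ldap-username</username>
          <password>artifactory-api-key</password>
        </server>
      </servers>
    </settings>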
Okay, next issue, unless you have a question. No questions. Archives.jenkins.io: that service is now running on DigitalOcean instead of an Oracle Cloud virtual machine; the machine has been moved. I haven't checked yet the impact on the outbound bandwidth cost on DigitalOcean itself, I haven't monitored this one. We can afford 500 GB outbound per month before starting to pay for it; that's the threshold. So yeah, worth checking, because that is a public web server. The move itself was done properly and was really easy; the most complicated part was finding the places where that machine was accessed through SSH. I don't have anything else to add on this one; everything, including the runbook, has been updated. Is there any question on this one? No. I'm pleased to note that the costs from Oracle Cloud for October 1 and October 2 were zero dollars. Yes, which means we cleaned up properly. Thanks for confirming.

The next issue is a minor one. TFSEC, a static analyzer for Terraform files, started to show warnings that the organization developing it is focusing and shifting its effort to Trivy, their own command-line static analyzer, which covers the same features as TFSEC but also does more: it can scan container images, charts, and a bunch of other stuff. That's why we decided to move our Terraform analysis over; maybe we can benefit from the other features in the future, but right now it is only for the Terraform projects. The shift has been done: TFSEC has been removed from all of our assets, and Trivy is available instead. Any questions? Okay. We also had the account issue, but the person never answered, so thanks, Mark, for taking care of that one.

Now let's move to the work in progress: issues that were started and on which we worked, or at least should have worked, during the past milestone. For each one we will decide whether to keep working on it in the next milestone, move it to the backlog, or stop working on it.

First one: access to GitHub Packages in a plugin. That one is interesting. A plugin developer, working at Red Hat, is building a plugin that requires a Java dependency, a JAR file, which is not on Maven Central or any public Maven repository, but on a GitHub Packages repository. Of course, that means you cannot access these packages without a GitHub access token. As you can see on my screen, the documentation is only really helpful inside a GitHub environment: they say 'add the dependency', et cetera, but only with a GitHub token. So when we build on ci.jenkins.io, or when a contributor builds on their own machine, they can't, because it requires and mandates a GitHub account. So they asked what the solution could be. One of the possibilities they raised: can we move these dependencies to the Jenkins public Artifactory? Technically, that would solve their problem. But we tend to have a policy that says: if we host it, we must have the code, the provenance, and the legal rights, because if we need to update the binary in an emergency for security or legal reasons, we need an actionable lever somewhere. My belief is that they don't publish to a public Artifactory or Maven Central precisely for legal or license reasons; that's why they are using GitHub Packages. So I proposed another solution (there has been discussion, but no feedback for four days): use the same pattern as some other plugin developers and store the JAR file in the source repository of their plugin. Then it's their responsibility, modulo the license; and we do have license checks, so if someone from the Jenkins security team discovers a license problem, they will just remove the dependency and pull the plugin. And if they vendor the JAR, they control the update process: the Maven file can reference a local repository, as some existing plugins do (a sketch follows below). So let's wait for feedback from them; if they don't give us feedback by the end of the milestone, we close this issue next week. Is that okay for everyone? Yes.
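[Note for the minutes: a minimal pom.xml sketch of the vendored-JAR pattern mentioned above; the repository ID, path, and coordinates are hypothetical:]

    <!-- reference a repository committed inside the plugin's own git repository -->
    <repositories>
      <repository>
        <id>bundled-local-repo</id>
        <url>file://${project.basedir}/local-repo</url>
      </repository>
    </repositories>
    <!-- the JAR is laid out under local-repo/ with the standard Maven layout, e.g.
         local-repo/com/example/private-lib/1.0/private-lib-1.0.jar -->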
The thought of checking binaries into a git repository scares me, but I've done it before; yes, it can work, and sometimes that's what we have to do. Same for me. But I mean, it's a solution for them if they want to continue building their plugin. As the team proposed, they can also try to publish to Maven Central or another public repository, or host their own.

Next issue: goss, the test framework à la ServerSpec, for our Packer images, which define the images for the containers and the Linux and Windows virtual machines we use on our controllers. We have a contributor who started to help us, which triggered a lot of exchanges between Stéphane, me, and that contributor, and led to: let's increase the test coverage by shifting from sanity checks run as part of our provisioning scripts to something that says 'connect as the Jenkins user and check that tool, that tool, and that tool'. These acceptance tests would use goss, a framework that works on both Windows and Linux and that you can use through a wrapper named dgoss on containers. It's really easy: it's already installed on the machine, so we can run a set of tests which are easier to maintain than Bats or PowerShell Pester or whatever other framework. It has been changed and updated by Stéphane to cover Linux; we cover all the JDKs and Trivy. So if I understand correctly, Stéphane, the next step is to move all the sanity checks, or most of them at least, to these acceptance tests, except maybe some specific ones? Yes. Okay, and then start doing the same on both Windows and Linux. Exactly. The main challenge... sorry, go ahead; that's probably what you were going to say, because I think it's in the comment down there. Exactly, good catch. The main challenge, and 'challenge' is a big word, is that instead of only checking that we can execute the binary, we will run the binary through goss and check the version, or something from the standard output. For instance, we will check that java -version answers 11 (and soon 17 by default), and that /opt/jdk-21/bin/java answers 21 in its output. Which means that each time an automated pull request updates a dependency, it also has to update the goss checks, otherwise they fail. That's the challenge; it's more a timing challenge than a technical one. And that's why using the new key in the goss file, I think it's called exec, makes it easier to track in the updatecli manifest, I believe. We have an example here; let's see what the file looks like. The new file is here: the goss tests are described with that YAML syntax (a sketch is included below). We are only testing commands for now, but we can test files, users, packages, and other technical elements. We also have a file check in there; I had forgotten about that one. It's a way for us to check the contract between us, the builders and providers of the images, and the consumers, at least on ci.jenkins.io. Is the tool available for all the architectures we support, including the rather obscure ones like s390x? I haven't checked, but it's written in Go, so it is really easy to build. Great, excellent. At minimum it's available for all the ARM platforms and the Intel platforms: Linux on ARM and Intel works out of the box; Windows and macOS, both Intel and ARM, are still experimental, and you need to set a special flag or environment variable; but for what we do, the command checks are easy and work out of the box.
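[Note for the minutes: a minimal goss.yaml sketch of the kind of checks discussed, assuming the exec attribute from recent goss releases; tool paths and expected versions are illustrative:]

    command:
      java-default:
        exec: java -version
        exit-status: 0
        stderr:
          - 'openjdk version "17'   # java -version prints to stderr
      jdk21:
        exec: /opt/jdk-21/bin/java -version
        exit-status: 0
        stderr:
          - 'openjdk version "21'
    file:
      /usr/local/bin/trivy:
        exists: true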
So, Stéphane, do you think you'll be able to continue working on this? I will try, of course. Yes, so we'll move this to the next milestone. On my side, I'm also working on the Windows goss part for further things, so yeah, let's continue working on this one.

We have an issue for the Oracle Cloud project and tools. That was joint work from Mark and me, and I have the pleasure to say, as Mark confirmed with the billing, that we no longer have any resources running on Oracle Cloud. The two last steps are administrative: we need to ensure that all the bills are paid and that the account is closed. Mark and I still have access to the account, but the rest, including the SSO integration with Azure and all the leftovers, has been removed. The action item is now actually on Oracle's billing team: they made a change that made it so I can't actually pay them for three of the invoices. Then they said 'please pay us', and I said 'I can't', and now they are trying to find a way to make it possible for us to pay them; but it's their action item. Worst case, they could sponsor us and call it a donation; I'm okay with accepting their donation as well. Whether the time spent with the accountants is worth it is a financial decision they get to make; it is not our concern how they spend their money. Yes, absolutely. So is that okay, Mark, do you approve keeping that issue on the next milestone, considering it's only for reporting? Yes, no objection to putting it in the next milestone, and at some point we may close it just to get it out of our list, even if it's not resolved, because it's outside of what any of us can do anything about. Yep. I propose we wait one full milestone before taking that decision; is that okay for you? Yes, I like that.

The next topic is speeding up the Docker image library to create and push tags at the same time. That one was put on hold during the last milestone, am I correct? I was sick, I'm sorry. Do you feel you could work on this during the upcoming milestone, Stéphane? I will try, yes. Okay, so I'm keeping it on the next milestone. Nothing specific to say here; we need it to be sure we publish images faster, with less complexity. And that's cheaper? Yes, it should be cheaper, because fewer builds. Good point.

Next issue: remove the account request field from the Jira login page. I need help from a Jira administrator, or from someone who knows their way around Jira. I understand the issue (it's about changing something in the popup), but I'm too scared to try things on Jira and break everything. So I will need help on that topic, and I'm not sure who to ask. I believe Tim has access, right? Daniel Beck and Tim Jacomb are the two people I would entrust that one to. Basil Crow could probably do it as well, but he would have to watch while you did it, because I don't think he's actually a Jira admin, although he has done Jira administration before. Okay, I will start with Tim; he might have had an issue, but that was one month ago, so I will double-check with him. So I put it here. Thanks, Mark.

Stéphane, can you give us a heads up on the HA properties of the replicated services? Yes, it's two different things: anti-affinity on one side, and pod disruption budgets on the other. We are almost there; I think there are only four services to go, and then we will be able to go back to the ARM64 migration. As a reminder, this is how we handle the repartition of the pods between nodes, to make sure that when we recycle, reboot, or kill a node, the services on it still have pods somewhere else to handle the load and we are not left out in the blue. That's the anti-affinity, plus the pod disruption budget to guarantee a minimum number of running pods (a sketch follows below).
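[Note for the minutes: a minimal sketch of the two mechanisms, with hypothetical service names, not the actual charts:]

    # Keep at least one replica running during voluntary disruptions (node drains, upgrades)
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: myservice
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          app: myservice

    # And in the Deployment pod template: never co-locate two replicas on one node
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: myservice
            topologyKey: kubernetes.io/hostname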
And tons of new tests, because we love it. Yeah, I'm driving Stéphane to dream about 'write the test, then write the code, then refactor', even for Helm charts, or to have nightmares about it; that's fine too. Test-driven development is a thing; keep going. That's exactly it; you misspelled 'nightmare-driven development', but who am I to judge. Thanks. So: we need help from Tim or Daniel on the Jira item, and we're almost done here, two or three services left.

We saw interesting, let's say interesting, behaviors. The way we set up the HA, we added an anti-affinity that says 'I'm a replicated service and I see there is already a replica running on that machine', which tends to trigger the autoscaler quite often. Combined with the pod disruption budget that says 'you must always have one running pod', the behavior we tend to see is: it scales up, a new virtual machine starts, the new pod starts on it, and only once it has started is the old one removed, and then everything reshuffles. That keeps the high availability we expect, but our deployments take a bit longer, and we have to wait around 10 minutes for the autoscaler to dynamically reschedule everything properly and scale back down to the minimum number of machines. So the automatic scale-up and scale-down on Azure is still really efficient and dynamic.

The second funny thing is that we discovered that some of our charts, installations, and services tend to have either too many replicas, which we decreased (for instance jenkins.io, because we have a CDN in front, so we don't need five pods running), or, like the nginx ingress, different deployments with different roles: one reverse-proxy deployment with two replicas for high availability, and one backend web server deployment that serves the 404 pages. All of these currently mix with each other through the anti-affinity, so we have three pods, which means three virtual machines underneath, and we can't shrink the usage. It's temporary; it costs a few bucks, no more than 100 bucks per month on the projection I made, and we will fix it, but we weren't aware of these behaviors, so we need to fine-tune in order to optimize the resource consumption. On the other hand, when we always have three machines, the scale-up and scale-down churn we saw at the beginning won't happen anymore, because there will always be a third machine available when deploying a service. So those are the outcomes. Yeah, with two replicas, two nodes would be the maximum needed; with three or four pods and only three nodes, it's tight; it's just that the number of pods we ask for is too high. So that one doesn't look like much, but we learned a lot about the behavior of the scheduler and the behavior of our applications. As a reminder, we need this issue to be finished before starting the work on the Kubernetes 1.26 upgrade. Any questions? No. Next one, then.

Matomo. We were able to work on Matomo: the MySQL instance is now created, and we have an empty database running inside our private network on the public cluster; and the Docker image is now built and released for both Intel and ARM64. The next step is to bootstrap the first installation on the public cluster with the custom image and the database we created. That requires setting up the Helm chart, the official Matomo chart, and installing it.
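[Note for the minutes: a hypothetical sketch of that bootstrap step; the chart source, image name, and values are assumptions, not the reviewed configuration:]

    # Install the Matomo chart, pointing it at the custom multi-arch image
    # and at the already-provisioned private MySQL database
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install matomo bitnami/matomo \
      --namespace matomo --create-namespace \
      --set image.repository=example/matomo-custom \
      --set externalDatabase.host=private-mysql.internal \
      --set externalDatabase.user=matomo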
Then we will be able to start running the service. So we are making good progress, and I believe we should have something to test next week. Is there any question? Okay. So the next step is the initial Helm chart installation.

Artifactory bandwidth reduction options: Mark, that was your area, so let's list it as done. I received the final logs from JFrog; unfortunately the log format changed, and I'm not willing to make the effort to deal with the changed format. We're just going to say it's done, and the results will be measured by JFrog. Okay, makes sense; that's the simplest approach. I'm just accepting that it's not a good investment of my time to adapt to a new format for only one analysis of log files; it's just not worth it. Can I ask you then to close the issue with a message explaining this? I will do that. Will one of us schedule the meeting with JFrog? Actually, I don't think we even need a meeting; I think they've said we're done, and they haven't asked for any additional meetings. If they ask for a new meeting, we can certainly schedule it. Cool, so I'm moving that issue to the done column, and I will let you close it. I will do that. Thanks, Mark. So Artifactory is done. Yes, I'm sure.

'Removed jenkins.io pages aren't accessible and indexed anymore': that issue was put on hold because Hervé wasn't available to work on it; he focused on updates.jenkins.io. But he still researched and saw a lot of things: he made a backup of jenkins.io, and I believe he communicated with you folks from the documentation team about the other results. So I believe the operation happens as planned, eventually next week unless there is a blocker. We have backups and snapshots, so the plan is: snapshot at a given moment, deploy the new version which removes the old pages, and watch the result. If we start to see issues we can catch, or users opening issues on the community forum, the helpdesk, or the jenkins.io trackers saying 'hey, that page is gone' or whatever, then we can either cherry-pick short-term from the backup and fix with a rebuild, or, if it turns out catastrophic, restore the backup, go back to the former state, and remove the to-be-deleted pages manually while we solve the other issues that caused this one. Does that make sense for you folks? So Hervé will communicate once ready. I'm not sure he will be able to run this one this week, since he's still focused on updates.jenkins.io, but I will see with him whether we can share the burden on updates.jenkins.io. By default I will move it to the milestone, and I will let him remove it if he feels he won't be able to work on it; we will add it back in one week, or whenever he is ready. Any questions? Okay.

Next, updates.jenkins.io, which is our top priority, hence it's the last issue here. I can tell you that we successfully created an R2 bucket in Cloudflare, within our Cloudflare sponsorship, and we were able to use it as a mirror: we have a mirrorbits test instance, only for updates.jenkins.io, running in production and publicly available, and if you use the proper URL you can have that mirror serving the files. The work in progress Hervé is driving is to update at least two scripts, which are critical, so that they copy the update-center.json files not only to the current updates.jenkins.io but also to the two locations we use for the new system: the reference Azure bucket, which mirrorbits watches for files and changes, and the Cloudflare R2 bucket. That means we will have three locations updated with the real changes every 15 minutes (a sketch of the copies follows below).
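[Note for the minutes: a hypothetical sketch of the additional copies; bucket and container names are placeholders, and the real logic lives in the publishing scripts:]

    # The existing publication to updates.jenkins.io (pkg VM) stays unchanged; additionally:
    # 1) copy to the Cloudflare R2 bucket (S3-compatible API, endpoint configured separately)
    s3cmd put update-center.json s3://example-updates-bucket/update-center.json
    # 2) copy to the reference Azure storage container watched by mirrorbits
    az storage blob upload --container-name example-updates \
      --name update-center.json --file update-center.json --overwrite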
It should not have any impact on the current updates.jenkins.io, but it will allow us to start testing the new service for real, copying the update-center.json not only to the pkg VM but also to the Azure bucket and the R2 bucket. Under the hood, that means installing the S3 command line and the Azure command line on the trusted permanent agent, which is managed with Puppet; then adding credentials on trusted.ci.jenkins.io for both buckets, which are different from the existing credentials; then testing that we can copy arbitrary files from a dummy build running on the trusted.ci.jenkins.io agents; and finally opening the pull request on update-center2 to copy the update-center.json. There is also something Hervé caught about the crawler: the crawler job is a job that, once a week, generates the Jenkins tools-installer metadata, and it also copies files to updates.jenkins.io under the /updates subdirectory. Good catch on this one, because I had personally missed it completely. So we should be able to report next week; the next step for him, once this is done and we have the same content everywhere, is to point a live controller at that new service. Is there any question on that status? So of course we will continue working on that issue.

Let's check the new issues, if we have any, and the new topics, if you have some to bring. Do you have new topics of your own? Then, for me: okay, let's look together at the issues we see here. Oh, 'not able to login': that one will go through the triage later. 'Jira email status from SendGrid': Stéphane, can you explain and help us triage that issue? I had forgotten this one. That's Daniel's request to get access to the SendGrid dashboard of the account behind the Jira configuration, in order to have data about the mail that is not going through to Jenkins developers and plugin developers: one plugin maintainer said that one of their email addresses wasn't receiving the emails while their Gmail one was, and Daniel would like us to have more information on that. So if I understand correctly, it's on SendGrid, not on Jira, right? Indeed: Jira uses a SendGrid account to send the email, so it's on SendGrid that we should have all the SMTP logs. Okay, so for that issue: do we have access to the SendGrid account? I never remember; otherwise we will need to ask for access. There is one email system that Hervé and I have access to, and I believe it is SendGrid, but I'm not completely sure, so we have to check. Okay, thanks for the explanation; that's clear for me, and I'm adding it to the milestone I will create. The action will be on Hervé and me: check the SendGrid access, share it with Mark and Stéphane if that's not already the case, and from there, Stéphane, you should be able to take over and fulfill the request with Daniel. Is that okay? Yeah. Perfect, thank you.

'Migrate Terraform states from AWS S3 to Azure buckets': that one I created. To manage the Terraform states of our projects, which are sensitive data, we use shared private storage: the state acts as a database for Terraform, holding the history of all the changes that went through our infrastructure-as-code management, so of course it contains credentials and is sensitive. Some of our projects were started with private S3 buckets hosting these Terraform states, and some of the other projects were using Azure buckets. In the effort of moving as many resources as possible away from AWS, the projects whose Terraform states are hosted on S3 buckets need to be moved to Azure buckets. So we need to create the Azure buckets, migrate the states, check that the jobs still run without destroying the infrastructure, and then remove the leftover S3 buckets. That's all; we have to communicate when we do these changes, but it shouldn't be complicated: there is a lot of documentation, and there is a command that should be able to migrate states to the new backend (a sketch follows below). I'm willing to work on this one, and anyone interested can pair with me or take over some of it. I think only two or three of the six projects we have are concerned, so that should be easy. One of them was Oracle's, and it has been deleted, so that's one fewer to migrate.
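[Note for the minutes: a minimal sketch of one state move; the backend values are placeholders, not our actual configuration:]

    # In the project's Terraform configuration, swap the backend definition:
    terraform {
      backend "azurerm" {            # previously: backend "s3" { ... }
        storage_account_name = "exampletfstates"
        container_name       = "tfstate"
        key                  = "exampleproject.tfstate"
      }
    }

    # Then let Terraform copy the existing state into the new backend:
    #   terraform init -migrate-state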
Is there any question on this one? No. Okay. I believe we have one issue that should be started, though with no pressure to finish it: Kubernetes 1.26. Now that Stéphane has almost finished the high-availability topic, I believe we can start in the next milestone by upgrading at least the kubectl binaries and beginning to check the changelog. The goal is that once the HA work is finished, we can start discussing a date for the Kubernetes 1.26 upgrade. Is that okay for you? Okay, I will add the link to the issue afterwards. Anyone interested can contribute; it's Hacktoberfest, so if you want to try things, upgrading to Kubernetes 1.26 shouldn't be that complicated. Of course, if it is, or if you want to learn (because if you don't know, you don't know), please ask the team on IRC. I will mark the associated issue as a good first issue in the proper location.

I've got two topics; I'm not sure I'll work on them, but I want to mention them. The first one is in the area of Packer and Windows containers. I mention it because, in fact, I already started to work on it on the side, and the whole goss work is one of the foundations required for it. The goal is to also build Windows containers with Packer; that's one of the only elements we don't build with Packer, and it should be technically possible now. I successfully built it on my local machine and on a standalone machine, and now I'm dealing with weird pipeline behavior that I don't understand. I believe I want to wait for the whole goss project and the work Stéphane started around improving the pipeline for Packer, but that one is coming. I also still have the old pull request about parallelizing Packer in the backlog; for that optimization, the goal, as a reminder, is a faster feedback loop when we open pull requests on the Packer images, and fewer failures. So: the Windows containers.

And finally, one that can be started, Hacktoberfest again, and that should be not only easy but absolutely approachable for a newcomer: replacing the last 'docker-something' images we use on infra.ci, for instance docker-helmfile, where kubectl 1.26 will land. We have a few tools like helmfile and so on, and the goal is to make sure these tools are installed through Packer; we already have a list available in the Packer images repository. I believe creating one issue labeled 'hacktoberfest' and 'good first issue' would be a good call, because once the Packer image for Linux has all of these tools, we can start swapping out those Docker images, and that means less maintenance for us in the future. All in one, exactly: docker-helmfile, kubectl, and the rest.
So those are the first, let's say, top-level things we work on outside of the milestone; it's a long-running process, but we could benefit from Hacktoberfest. The second one is the major upgrades: we have Puppet 7, and we need to start thinking about it, because we have at least 14 Puppet modules that we cannot update anymore, since Puppet 6 is deprecated. So that one is starting to become important. I propose to wait until after Kubernetes 1.26, but yeah, we have to think about it, hence me mentioning it. (Noted: after Kubernetes 1.26, background task, Hacktoberfest.) Okay, is there any other topic you want to bring to this meeting? Okay, so that means we can stop, and I will publish the notes and the recording as usual, once available. So, for everyone watching us: see you next week. Bye bye!